Updates from: 08/22/2023 01:25:17
Service | Microsoft Docs article | Related commit history on GitHub | Change details
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
Last updated 06/26/2023 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Manage Custom Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-custom-policies-powershell.md
-+ Last updated 02/14/2020
active-directory-b2c Tenant Management Directory Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md
The response from the API call looks similar to the following JSON:
{ "directorySizeQuota": {
  "used": 211802,
- "total": 300000
+ "total": 50000000
} }
If your tenant usage is higher than 80%, you can remove inactive users or request an increased quota.
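A minimal sketch of checking usage against that 80% threshold from PowerShell, assuming the Microsoft Graph PowerShell module and the beta `organization` endpoint that returns `directorySizeQuota`:

```powershell
# Assumes the Microsoft.Graph PowerShell module; Organization.Read.All is an assumed scope.
Connect-MgGraph -Scopes "Organization.Read.All"

# The beta organization endpoint returns the directorySizeQuota object shown above.
$org = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/beta/organization?`$select=directorySizeQuota"
$quota = $org.value[0].directorySizeQuota

# Warn when usage crosses the 80% threshold mentioned above.
$percentUsed = [math]::Round(($quota.used / $quota.total) * 100, 1)
if ($percentUsed -gt 80) { Write-Warning "Directory quota is $percentUsed% used." }
else { Write-Output "Directory quota is $percentUsed% used." }
```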
## Request an increased directory quota size
-You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md)
+You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md)
active-directory-domain-services Alert Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md
ms.assetid: f168870c-b43a-4dd6-a13f-5cfadc5edf2c
+ Last updated 01/29/2023 - # Known issues: Service principal alerts in Azure Active Directory Domain Services
active-directory-domain-services Create Forest Trust Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md
Last updated 04/03/2023 --+ #Customer intent: As an identity administrator, I want to create an Azure AD Domain Services forest and one-way outbound trust from an Azure Active Directory Domain Services forest to an on-premises Active Directory Domain Services forest using Azure PowerShell to provide authentication and resource access between forests.- # Create an Azure Active Directory Domain Services forest trust to an on-premises domain using Azure PowerShell
For more conceptual information about forest types in Azure AD DS, see [How do f
[Install-Script]: /powershell/module/powershellget/install-script
<!-- EXTERNAL LINKS -->
-[powershell-gallery]: https://www.powershellgallery.com/
+[powershell-gallery]: https://www.powershellgallery.com/
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
Last updated 01/29/2023 --+ # Enable Azure Active Directory Domain Services using PowerShell
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md
Last updated 01/29/2023 -+ # Configure scoped synchronization from Azure AD to Azure Active Directory Domain Services using Azure AD PowerShell
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md
Last updated 01/29/2023 -+ # Harden an Azure Active Directory Domain Services managed domain
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md
ms.assetid: 57cbf436-fc1d-4bab-b991-7d25b6e987ef
+ Last updated 04/03/2023 - # How objects and credentials are synchronized in an Azure Active Directory Domain Services managed domain
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
-+ Last updated 06/01/2023
active-directory-domain-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md
ms.assetid: 4bc8c604-f57c-4f28-9dac-8b9164a0cf0b
+ Last updated 01/29/2023 - # Common errors and troubleshooting steps for Azure Active Directory Domain Services
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
+ Last updated 04/03/2023 - #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain and define advanced configuration options so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To see this managed domain in action, create and join a virtual machine to the d
[availability-zones]: ../reliability/availability-zones-overview.md
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus
-<!-- EXTERNAL LINKS -->
+<!-- EXTERNAL LINKS -->
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
+ Last updated 08/01/2023 - #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
Before you domain-join VMs and deploy applications that use the managed domain,
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus
<!-- EXTERNAL LINKS -->
-[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix
+[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Applications and systems that support customization of the attribute list includ
> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems and have first-hand knowledge of how their custom attributes are defined, or when a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true. You can then navigate to your application to view the [attribute list](#editing-the-list-of-supported-attributes).
> [!NOTE]
-> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory.
+> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory. Provisioning multi-valued directory extension attributes is not supported.
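Because the names are case-sensitive, it can help to read them straight from the directory rather than retyping them. A minimal sketch, assuming the Microsoft Graph PowerShell module and a placeholder application object ID:

```powershell
# Assumes the Microsoft.Graph PowerShell module; <app-object-id> is a placeholder
# for the object ID of the app that registered the extension properties.
Connect-MgGraph -Scopes "Application.Read.All"

# Returns names such as extension_<appId>_acmeCostCenter exactly as defined.
Get-MgApplicationExtensionProperty -ApplicationId "<app-object-id>" |
    Select-Object Name, DataType
```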
When you're editing the list of supported attributes, the following properties are provided:
active-directory Inbound Provisioning Api Configure App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-configure-app.md
If you're configuring inbound user provisioning to on-premises Active Directory,
## Create your API-driven provisioning app
-1. Log in to the [Microsoft Entra portal](<https://entra.microsoft.com>).
+1. Log in to the [Microsoft Entra admin center](<https://entra.microsoft.com>).
2. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
3. Click on **New application** to create a new provisioning application.
[![Screenshot of Entra Admin Center.](media/inbound-provisioning-api-configure-app/provisioning-entra-admin-center.png)](media/inbound-provisioning-api-configure-app/provisioning-entra-admin-center.png#lightbox)
active-directory Inbound Provisioning Api Curl Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-curl-tutorial.md
## Verify processing of the bulk request payload
-1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
+1. Log in to [Microsoft Entra admin center](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
1. Under all applications, use the search filter text box to find and open your API-driven provisioning application.
1. Open the Provisioning blade. The landing page displays the status of the last run.
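The same run can also be inspected outside the portal through the provisioning logs API. A minimal sketch, assuming the Microsoft Graph PowerShell module:

```powershell
# Assumes the Microsoft.Graph PowerShell module; AuditLog.Read.All is the
# permission documented for the provisioning logs API.
Connect-MgGraph -Scopes "AuditLog.Read.All"

# List recent provisioning events to confirm the bulk request was processed.
# Property names follow the provisioningObjectSummary resource.
$logs = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/auditLogs/provisioning?`$top=10"
$logs.value | ForEach-Object { "{0} {1} {2}" -f $_.activityDateTime, $_.provisioningAction, $_.provisioningStatusInfo.status }
```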
active-directory Inbound Provisioning Api Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-custom-attributes.md
You have configured an API-driven provisioning app. Your provisioning app is succ
In this step, we'll add the two attributes "HireDate" and "JobCode" that are not part of the standard SCIM schema to the provisioning app and use them in the provisioning data flow.
-1. Log in to Microsoft Entra portal with application administrator role.
+1. Log in to Microsoft Entra admin center with application administrator role.
1. Go to **Enterprise applications** and open your API-driven provisioning app.
1. Open the **Provisioning** blade.
1. Click on the **Edit Provisioning** button.
In this step, we'll add the two attributes "HireDate" and "JobCode" that are not
1. **Save** your changes.
> [!NOTE]
-> If you'd like to add only a few additional attributes to the provisioning app, use Microsoft Entra Portal to extend the schema. If you'd like to add more custom attributes (let's say 20+ attributes), then we recommend using the [`UpdateSchema` mode of the CSV2SCIM PowerShell script](inbound-provisioning-api-powershell.md#extending-provisioning-job-schema) which automates the above manual process.
+> If you'd like to add only a few additional attributes to the provisioning app, use Microsoft Entra admin center to extend the schema. If you'd like to add more custom attributes (let's say 20+ attributes), then we recommend using the [`UpdateSchema` mode of the CSV2SCIM PowerShell script](inbound-provisioning-api-powershell.md#extending-provisioning-job-schema) which automates the above manual process.
## Step 2 - Map the custom attributes
active-directory Inbound Provisioning Api Grant Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-grant-access.md
Depending on how your API client authenticates with Azure AD, you can select bet
## Configure a service principal
This configuration registers an app in Azure AD that represents the external API client and grants it permission to invoke the inbound provisioning API. The service principal client ID and client secret can be used in the OAuth client credentials grant flow.
-1. Log in to Microsoft Entra portal (https://entra.microsoft.com) with global administrator or application administrator login credentials.
+1. Log in to Microsoft Entra admin center (https://entra.microsoft.com) with global administrator or application administrator login credentials.
1. Browse to **Azure Active Directory** -> **Applications** -> **App registrations**.
1. Click on the option **New registration**.
1. Provide an app name, select the default options, and click on **Register**.
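Once the app is registered, the client ID and secret can be exchanged for a token using the client credentials grant. A minimal sketch with placeholder values:

```powershell
# Placeholder values; replace with your tenant ID, app (client) ID, and client secret.
$tenantId     = "<tenant-id>"
$clientId     = "<client-id>"
$clientSecret = "<client-secret>"

$body = @{
    client_id     = $clientId
    client_secret = $clientSecret
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
}

# Standard Microsoft identity platform token endpoint for the client credentials flow.
$token = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $body
$token.access_token
```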
active-directory Inbound Provisioning Api Graph Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-graph-explorer.md
This tutorial describes how you can quickly test [API-driven inbound provisionin
## Verify processing of bulk request payload
-You can verify the processing either from the Microsoft Entra portal or using Graph Explorer.
+You can verify the processing either from the Microsoft Entra admin center or using Graph Explorer.
-### Verify processing from Microsoft Entra portal
-1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
+### Verify processing from Microsoft Entra admin center
+1. Log in to [Microsoft Entra admin center](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
1. Under all applications, use the search filter text box to find and open your API-driven provisioning application.
1. Open the Provisioning blade. The landing page displays the status of the last run.
active-directory Inbound Provisioning Api Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-postman.md
In this step, you'll configure the Postman app and invoke the API using the conf
If the API invocation is successful, you see the message `202 Accepted`. Under Headers, the **Location** attribute points to the provisioning logs API endpoint.
## Verify processing of bulk request payload
-You can verify the processing either from the Microsoft Entra portal or using Postman.
+You can verify the processing either from the Microsoft Entra admin center or using Postman.
-### Verify processing from Microsoft Entra portal
-1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
+### Verify processing from Microsoft Entra admin center
+1. Log in to [Microsoft Entra admin center](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials.
1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**.
1. Under all applications, use the search filter text box to find and open your API-driven provisioning application.
1. Open the Provisioning blade. The landing page displays the status of the last run.
active-directory Inbound Provisioning Api Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-powershell.md
To illustrate the procedure, let's use the CSV file `Samples/csv-with-2-records.
This section explains how to send the generated bulk request payload to your inbound provisioning API endpoint.
-1. Log in to your Entra portal as *Application Administrator* or *Global Administrator*.
+1. Log in to your Microsoft Entra admin center as *Application Administrator* or *Global Administrator*.
1. Copy the `ServicePrincipalId` associated with your provisioning app from **Provisioning App** > **Properties** > **Object ID**.
   :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/object-id.png" alt-text="Screenshot of the Object ID." lightbox="./media/inbound-provisioning-api-powershell/object-id.png":::
This section explains how to send the generated bulk request payload to your inb
$ThumbPrint = $ClientCertificate.ThumbPrint
```
The generated certificate is stored under **Current User\Personal\Certificates**. You can view it using the **Control Panel** -> **Manage user certificates** option.
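The excerpt above ends with the certificate's thumbprint being captured. If you need to generate such a certificate, a minimal sketch using the built-in cmdlet (the subject name is an assumption):

```powershell
# A sketch only; the subject name is an assumption. The certificate lands in
# Current User\Personal\Certificates, the store referenced above.
$ClientCertificate = New-SelfSignedCertificate -Subject "CN=CSV2SCIMBulkUpload" `
    -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable `
    -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256
$ThumbPrint = $ClientCertificate.ThumbPrint
```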
-1. To associate this certificate with a valid service principal, log in to your Entra portal as *Application Administrator*.
+1. To associate this certificate with a valid service principal, log in to your Microsoft Entra admin center as *Application Administrator*.
1. Open [the service principal you configured](inbound-provisioning-api-grant-access.md#configure-a-service-principal) under **App Registrations**.
1. Copy the **Object ID** from the **Overview** blade. Use the value to replace the string `<AppObjectId>`. Copy the **Application (client) ID**; we'll use it later, referenced as `<AppClientId>`.
1. Run the following command to upload your certificate to the registered service principal.
PS > CSV2SCIM.ps1 -Path <path-to-csv-file>
> [!NOTE]
> The `AttributeMapping` and `ValidateAttributeMapping` command-line parameters refer to the mapping of CSV column attributes to the standard SCIM schema elements.
-It doesn't refer to the attribute mappings that you perform in the Entra portal provisioning app between source SCIM schema elements and target Azure AD/on-premises AD attributes.
+It doesn't refer to the attribute mappings that you perform in the Microsoft Entra admin center provisioning app between source SCIM schema elements and target Azure AD/on-premises AD attributes.
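For illustration, such a mapping might be expressed as a hashtable; the CSV column names here are hypothetical, and the exact shape the script expects is defined in its documentation:

```powershell
# Hypothetical CSV column names mapped to standard SCIM attributes. Check the
# CSV2SCIM script's documentation for the exact mapping shape it expects.
$AttributeMapping = @{
    externalId  = "WorkerID"
    userName    = "UserID"
    displayName = "FullName"
    active      = "WorkerStatus"
}

# Validate the mapping against the sample file referenced earlier in this article.
.\CSV2SCIM.ps1 -Path .\Samples\csv-with-2-records.csv -AttributeMapping $AttributeMapping -ValidateAttributeMapping
```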
| Parameter | Description | Processing remarks |
|-|-|--|
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
+ Last updated 10/20/2022
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Previously updated : 03/14/2023 Last updated : 08/14/2023
In Azure Active Directory (Azure AD), the term *app provisioning* refers to auto
Azure AD application provisioning refers to automatically creating user identities and roles in the applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into SaaS applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and many more.
-Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. Your application must support [SCIM](https://aka.ms/scimoverview). Or, you must build a SCIM gateway to connect to your legacy application. If so, you can use the Azure AD Provisioning agent to [directly connect](./on-premises-scim-provisioning.md) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](./on-premises-ldap-connector-configure.md) user store or a [SQL](./tutorial-ecma-sql-connector.md) database, Azure AD can support these applications as well.
-
-App provisioning lets you:
+Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. The table below maps supported protocols to their connectors.
+
+|Protocol |Connector|
+|--|--|
+| SCIM | [SCIM - SaaS](use-scim-to-provision-users-and-groups.md) <br />[SCIM - On-prem / Private network](./on-premises-scim-provisioning.md) |
+| LDAP | [LDAP](./on-premises-ldap-connector-configure.md)|
+| SQL | [SQL](./tutorial-ecma-sql-connector.md) |
+| REST | [Web Services](./on-premises-web-services-connector.md)|
+| SOAP | [Web Services](./on-premises-web-services-connector.md)|
+| Flat-file| [PowerShell](./on-premises-powershell-connector.md) |
+| Custom | [Custom ECMA connectors](./on-premises-custom-connector.md) <br /> [Connectors and gateways built by partners](./partner-driven-integrations.md)|
- **Automate provisioning**: Automatically create new accounts in the right systems for new people when they join your team or organization.
- **Automate deprovisioning**: Automatically deactivate accounts in the right systems when people leave the team or organization.
active-directory Application Proxy Configure Cookie Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md
+ Last updated 11/17/2022
active-directory Application Proxy Configure Custom Home Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md
+ Last updated 11/17/2022
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md
Azure Active Directory (Azure AD) Application Proxy has partnered with PingAcces
With PingAccess for Azure AD, you can give users access and single sign-on (SSO) to applications that use headers for authentication. Application Proxy treats these applications like any other, using Azure AD to authenticate access and then passing traffic through the connector service. PingAccess sits in front of the applications and translates the access token from Azure AD into a header. The application then receives the authentication in the format it can read.
-Your users wonΓÇÖt notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so theyΓÇÖll still balance loads automatically.
+Your users won't notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so they'll still balance loads automatically.
## How do I get access?
For more information, see [Azure Active Directory editions](../fundamentals/what
## Publish your application in Azure
-This article is for people to publish an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If youΓÇÖve already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section.
+This article is for people who are publishing an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If you've already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section.
> [!NOTE] > Since this scenario is a partnership between Azure AD and PingAccess, some of the instructions exist on the Ping Identity site.
To publish your own on-premises application:
> [!NOTE] > For a more detailed walkthrough of this step, see [Add an on-premises app to Azure AD](../app-proxy/application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad).
- 1. **Internal URL**: Normally you provide the URL that takes you to the appΓÇÖs sign-in page when youΓÇÖre on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess.
+ 1. **Internal URL**: Normally you provide the URL that takes you to the app's sign-in page when you're on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess.
> [!WARNING] > For this type of single sign-on, the internal URL must use `https` and can't use `http`. Also, there is a constraint when configuring an application that no two apps should have the same internal URL as this allows App Proxy to maintain distinction between applications.
To publish your own on-premises application:
1. **Translate URL in Headers**: Choose **No**.
> [!NOTE]
- > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener youΓÇÖve configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners).
+ > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener you've configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners).
1. Select **Add**. The overview page for the new application appears.
In addition to the external URL, an authorize endpoint of Azure Active Directory
Finally, set up your on-premises application so that users have read access and other applications have read/write access:
-1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the APIs for Windows Azure Active Directory.
+1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the permissions for Microsoft Graph.
![Shows the Request API permissions page](./media/application-proxy-configure-single-sign-on-with-ping-access/required-permissions.png)
active-directory Powershell Assign Group To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md
-+ Last updated 08/29/2022
active-directory Powershell Assign User To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md
-+ Last updated 08/29/2022
active-directory Powershell Display Users Group Of App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps By Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps Extended https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps With Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Custom Domain No Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Custom Domains And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Default Domain Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Wildcard Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md
-+ Last updated 08/29/2022
active-directory Powershell Get Custom Domain Identical Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md
-+ Last updated 08/29/2022
active-directory Powershell Get Custom Domain Replace Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md
-+ Last updated 08/29/2022
active-directory Powershell Move All Apps To Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md
-+ Last updated 08/29/2022
active-directory Architecture Icons https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/architecture-icons.md
+
+ Title: Microsoft Entra architecture icons
+description: Learn about the official collection of Microsoft Entra icons that you can use in architectural diagrams, training materials, or documentation.
+
+ Last updated : 08/15/2023
+
+# Customer intent: As a new or existing customer, I want to learn how I can use the official Microsoft Entra icons in architectural diagrams, training materials, or documentation.
++
+# Microsoft Entra architecture icons
+
+Helping our customers design and architect new solutions is core to the Microsoft Entra mission. Architecture diagrams can help communicate design decisions and the relationships between components of a given workload. This article provides information about the official collection of Microsoft Entra icons that you can use in architectural diagrams, training materials, or documentation.
+
+## General guidelines
+
+### Do's
+
+- Use the icon to illustrate how products can work together.
+- In diagrams, we recommend including the product name somewhere close to the icon.
+
+### Don'ts
+
+- Don't crop, flip, or rotate icons.
+- Don't distort or change the icon shape in any way.
+- Don't use Microsoft product icons to represent your product or service.
+- Don't use Microsoft product icons in marketing communications.
+
+## Icon updates
+
+| Month | Change description |
+|-|--|
+| August 2023 | Added a downloadable package that contains the Microsoft Entra architecture icons, branding playbook (which contains guidelines about the Microsoft Security visual identity), and terms of use. |
+
+## Icon terms
+
+Microsoft permits the use of these icons in architectural diagrams, training materials, or documentation. You may copy, distribute, and display the icons only for the permitted use unless granted explicit permission by Microsoft. Microsoft reserves all other rights.
+
+ > [!div class="button"]
+ > [I agree to the above terms. Download icons.](https://download.microsoft.com/download/a/4/2/a4289cad-4eaf-4580-87fd-ce999a601516/Microsoft-Entra-architecture-icons.zip?wt.mc_id=microsoftentraicons_downloadmicrosoftentraicons_content_cnl_csasci)
+
+## More icon sets from Microsoft
+
+- [Azure architecture icons](/azure/architecture/icons)
+- [Microsoft 365 architecture icons and templates](/microsoft-365/solutions/architecture-icons-templates)
+- [Dynamics 365 icons](/dynamics365/get-started/icons)
+- [Microsoft Power Platform icons](/power-platform/guidance/icons)
active-directory Govern Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/govern-service-accounts.md
Last updated 02/09/2023 -+
active-directory Multi Tenant Common Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-common-considerations.md
Last updated 04/19/2023 -+ # Common considerations for multi-tenant user management
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-user-management-scenarios.md
Last updated 04/19/2023 -+
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/recoverability-overview.md
Create a process of predefined communications to make others aware of the issue
Document the state of your tenant and its objects regularly. Then if a hard delete or misconfiguration occurs, you have a roadmap to recovery. The following tools can help you document your current state:
- [Microsoft Graph APIs](/graph/overview) can be used to export the current state of many Azure AD configurations.
-- [Azure AD Exporter](https://github.com/microsoft/azureadexporter) is a tool you can use to export your configuration settings.
+- [Entra Exporter](https://github.com/microsoft/entraexporter) is a tool you can use to export your configuration settings.
- [Microsoft 365 Desired State Configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) is a module of the PowerShell Desired State Configuration framework. You can use it to export configurations for reference and application of the prior state of many settings.
- [Conditional Access APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) can be used to manage your Conditional Access policies as code.
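As one example of snapshotting current state, Conditional Access policies can be exported as JSON. A minimal sketch, assuming the Microsoft Graph PowerShell module:

```powershell
# Assumes the Microsoft.Graph PowerShell module and Policy.Read.All permission.
Connect-MgGraph -Scopes "Policy.Read.All"

# Snapshot all Conditional Access policies to a dated JSON file for recovery reference.
$policies = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
$policies.value | ConvertTo-Json -Depth 10 | Out-File "ca-policies-$(Get-Date -Format yyyy-MM-dd).json"
```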
active-directory Resilient External Processes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilient-external-processes.md
Identity experience framework (IEF) policies allow you to call an external syste
- If the data that is necessary for authentication is relatively static and small, and has no other business reason to be externalized from the directory, then consider having it in the directory.
-- Remove API calls from the pre-authenticated path whenever possible. If you can't, then you must place strict protections for Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in front of your APIs. Attackers can load the sign-in page and try to flood your API with DoS attacks and cripple your application. For example, using CAPTCHA in your sign in, sign up flow can help.
+- Remove API calls from the pre-authenticated path whenever possible. If you can't, then you must place strict protections for Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in front of your APIs. Attackers can load the sign-in page and try to flood your API with DoS attacks and disable your application. For example, using CAPTCHA in your sign in, sign up flow can help.
- Use [API connectors of built-in sign-up user flow](../../active-directory-b2c/api-connectors-overview.md) wherever possible to integrate with web APIs, either after federating with an identity provider during sign-up or before creating the user. Since the user flows are already extensively tested, it's likely that you don't have to perform user flow-level functional, performance, or scale testing. You still need to test your applications for functionality, performance, and scale.
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-managed-identities.md
Last updated 02/07/2023 -+
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-principal.md
Last updated 02/08/2023 -+
active-directory Certificate Based Authentication Federation Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-android.md
description: Learn about the supported scenarios and the requirements for config
+ Last updated 09/30/2022
active-directory Certificate Based Authentication Federation Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-get-started.md
description: Learn how to configure certificate-based authentication with federa
+ Last updated 05/04/2022
- # Get started with certificate-based authentication in Azure Active Directory with federation
active-directory Certificate Based Authentication Federation Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-ios.md
description: Learn about the supported scenarios and the requirements for config
+ Last updated 09/30/2022
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
Previously updated : 06/22/2023 Last updated : 08/16/2023
The following table lists each setting that can be set to Microsoft managed and
| [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
| [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Enabled |
| [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Enabled |
+| [Report suspicious activity](howto-mfa-mfasettings.md#report-suspicious-activity) | Disabled |
As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using SMS and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication.
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
OATH TOTP hardware tokens typically come with a secret key, or seed, pre-program
Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow.
-OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://aka.ms/EntraPreviewsTermsOfUse).
:::image type="content" border="true" source="./media/concept-authentication-methods/oath-tokens.png" alt-text="Screenshot of OATH token management." lightbox="./media/concept-authentication-methods/oath-tokens.png":::
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
-+ # Certificate user IDs
active-directory Concept Mfa Regional Opt In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-regional-opt-in.md
For Voice verification, the following region codes require an opt-in.
| 236 | Central African Republic |
| 237 | Cameroon |
| 238 | Cabo Verde |
-| 239 | Sao Tome and Principe |
+| 239 | São Tomé and Príncipe |
| 240 | Equatorial Guinea |
| 241 | Gabon |
| 242 | Congo |
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
description: Learn about the combined password policy and check for weak passwor
+ Last updated 04/02/2023
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
tags: azuread+
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
-+ # Password policies and account restrictions in Azure Active Directory
active-directory Concepts Azure Multi Factor Authentication Prompts Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md
description: Learn about the recommended configuration for reauthentication prom
+ Previously updated : 03/28/2023 Last updated : 08/15/2023
Azure Active Directory (Azure AD) has multiple settings that determine how often
The Azure AD default configuration for user sign-in frequency is a rolling window of 90 days. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt.
-It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, an incompliant device, or an account disable operation. You can also explicitly [revoke users' sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken).
+It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, an incompliant device, or an account disable operation. You can also explicitly [revoke users' sessions by using Microsoft Graph PowerShell](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession).
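A minimal sketch of that revocation call, assuming the Microsoft Graph PowerShell module (the scope shown is an assumption; see the linked cmdlet reference for exact permissions):

```powershell
# Assumes the Microsoft.Graph PowerShell module; the scope is an assumption.
Connect-MgGraph -Scopes "User.RevokeSessions.All"

# Invalidates all refresh tokens and session cookies for the user.
Revoke-MgUserSignInSession -UserId "user@contoso.com"
```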
This article details recommended configurations and how different settings work and interact with each other.
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
-+ # How to configure Azure AD certificate-based authentication
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
To disable Authenticator Lite in the Azure portal, complete the following steps:
Only users who are enabled for Microsoft Authenticator here can be enabled to use Authenticator Lite for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see the feature. Users who have Microsoft Authenticator installed on the same device as Outlook won't be prompted to register for Authenticator Lite in Outlook. Android users with both a personal and a work profile on their device may be prompted to register if Authenticator is present on a different profile from the Outlook application.
-<img width="1112" alt="Entra portal Authenticator settings" src="https://user-images.githubusercontent.com/108090297/228603771-52c5933c-f95e-4f19-82db-eda2ba640b94.png">
+<img width="1112" alt="Microsoft Entra admin center Authenticator settings" src="https://user-images.githubusercontent.com/108090297/228603771-52c5933c-f95e-4f19-82db-eda2ba640b94.png">
3. On the Configure tab, for **Microsoft Authenticator on companion applications**, change Status to Disabled, and click Save.
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
description: Step-by-step guidance to migrate from MFA Server on-premises to Azu
+ Last updated 01/29/2023
active-directory How To Migrate Mfa Server To Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-mfa-with-federation.md
Title: Migrate to Azure AD MFA with federations
description: Step-by-step guidance to move from MFA Server on-premises to Azure AD MFA with federation + Last updated 05/23/2023
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
description: Enable passwordless sign-in to Azure AD using Microsoft Authenticat
+ Last updated 05/16/2023
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
description: Learn how to enable users to sign in to Azure Active Directory with
+ Last updated 06/01/2023
- # Sign-in to Azure AD with email as an alternate login ID (Preview) > [!NOTE]
-> Sign-in to Azure AD with email as an alternate login ID is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Sign-in to Azure AD with email as an alternate login ID is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://aka.ms/EntraPreviewsTermsOfUse).
Many organizations want to let users sign in to Azure Active Directory (Azure AD) using the same credentials as their on-premises directory environment. With this approach, known as hybrid authentication, users only need to remember one set of credentials.
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
Title: Deployment considerations for Azure AD Multi-Factor Authentication
description: Learn about deployment considerations and strategy for successful implementation of Azure AD Multi-Factor Authentication + Last updated 03/06/2023
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 07/17/2023 Last updated : 08/16/2023 -+
To unblock a user, complete the following steps:
Users who report an MFA prompt as suspicious are set to **High User Risk**. Administrators can use risk-based policies to limit access for these users, or enable self-service password reset (SSPR) for users to remediate problems on their own. If you previously used the **Fraud Alert** automatic blocking feature and don't have an Azure AD P2 license for risk-based policies, you can use risk detection events to identify and disable impacted users and automatically prevent their sign-in. For more information about using risk-based policies, see [Risk-based access policies](../identity-protection/concept-identity-protection-policies.md).
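To find users flagged this way, the risky users API can be queried. A minimal sketch, assuming the Microsoft Graph PowerShell module:

```powershell
# Assumes the Microsoft.Graph PowerShell module; IdentityRiskyUser.Read.All is
# the documented permission for the risky users API.
Connect-MgGraph -Scopes "IdentityRiskyUser.Read.All"

# List users currently at high risk, for example after suspicious-activity reports.
$risky = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers?`$filter=riskLevel eq 'high'"
$risky.value | ForEach-Object { "{0} {1}" -f $_.userPrincipalName, $_.riskState }
```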
-To enable **Report suspicious activity** from the Authentication Methods Settings:
+To enable **Report suspicious activity** from the Authentication methods **Settings**:
1. In the Azure portal, click **Azure Active Directory** > **Security** > **Authentication Methods** > **Settings**.
-1. Set **Report suspicious activity** to **Enabled**.
+1. Set **Report suspicious activity** to **Enabled**. The feature remains disabled if you choose **Microsoft managed**. For more information about Microsoft managed values, see [Protecting authentication methods in Azure Active Directory](concept-authentication-default-enablement.md).
1. Select **All users** or a specific group.
+1. Select a **Reporting code**.
+1. Click **Save**.
+
+>[!NOTE]
+>If you enable **Report suspicious activity** and specify a custom voice reporting value while the tenant still has **Fraud Alert** enabled in parallel with a custom voice reporting number configured, the **Report suspicious activity** value will be used instead of **Fraud Alert**.
### View suspicious activity events
OATH TOTP hardware tokens typically come with a secret key, or seed, pre-program
Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow.
-OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms).
+OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://aka.ms/EntraPreviewsTermsOfUse).
![Screenshot that shows the OATH tokens section.](media/concept-authentication-methods/mfa-server-oath-tokens-azure-ad.png)
The following table lists more numbers for different countries.
| Sri Lanka | +94 117750440 |
| Sweden | +46 701924176 |
| Taiwan | +886 277515260 |
-| Turkey | +90 8505404893 |
+| Türkiye | +90 8505404893 |
| Ukraine | +380 443332393 |
| United Arab Emirates | +971 44015046 |
| Vietnam | +84 2039990161 |
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
If you encounter errors with the NPS extension for Azure AD Multi-Factor Authent
| **REQUEST_FORMAT_ERROR** <br> Radius Request missing mandatory Radius userName\Identifier attribute. Verify that NPS is receiving RADIUS requests | This error usually reflects an installation issue. The NPS extension must be installed in NPS servers that can receive RADIUS requests. NPS servers that are installed as dependencies for services like RDG and RRAS don't receive RADIUS requests. The NPS extension doesn't work when installed over such installations and errors out because it can't read the details from the authentication request. |
| **REQUEST_MISSING_CODE** | Make sure that the password encryption protocol between the NPS and NAS servers supports the secondary authentication method that you're using. **PAP** supports all the authentication methods of Azure AD MFA in the cloud: phone call, one-way text message, mobile app notification, and mobile app verification code. **CHAPV2** and **EAP** support phone call and mobile app notification. |
| **USERNAME_CANONICALIZATION_ERROR** | Verify that the user is present in your on-premises Active Directory instance, and that the NPS Service has permissions to access the directory. If you are using cross-forest trusts, [contact support](#contact-microsoft-support) for further help. |
+| **Challenge requested in Authentication Ext for User** | Organizations using a RADIUS protocol other than PAP will observe user VPN authorization failing with these events appearing in the AuthZOptCh event log of the NPS Extension server. You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to Approve/Deny push notifications. For further help, please check [Number matching using NPS Extension](how-to-mfa-number-match.md#nps-extension). |
### Alternate login ID errors
active-directory Howto Mfa Nps Extension Rdg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md
description: Integrate your Remote Desktop Gateway infrastructure with Azure AD
+ Last updated 01/29/2023
active-directory Howto Mfa Nps Extension Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md
description: Integrate your VPN infrastructure with Azure AD MFA by using the Ne
+ Last updated 01/29/2023
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
-+ # Integrate your existing Network Policy Server (NPS) infrastructure with Azure AD Multi-Factor Authentication
active-directory Howto Mfa Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-reporting.md
-+ # Use the sign-ins report to review Azure AD Multi-Factor Authentication events
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md
-+ # Enable per-user Azure AD Multi-Factor Authentication to secure sign-in events
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Based on your organizational requirements, you can customize the Azure AD smart
To check or modify the smart lockout values for your organization, complete the following steps:
-1. Sign in to the [Entra portal](https://entra.microsoft.com/#home).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
1. Search for and select *Azure Active Directory*, then select **Security** > **Authentication methods** > **Password protection**.
1. Set the **Lockout threshold**, based on how many failed sign-ins are allowed on an account before its first lockout.
active-directory Howto Registration Mfa Sspr Combined Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md
description: Troubleshoot Azure AD Multi-Factor Authentication and self-service
+ Last updated 01/29/2023
active-directory Howto Sspr Authenticationdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md
-+ # Pre-populate user authentication contact information for Azure Active Directory self-service password reset (SSPR)
active-directory V1 Permissions Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-permissions-consent.md
Last updated 09/24/2018 -+
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
The following messaging protocols support legacy authentication:
- Universal Outlook - Used by the Mail and Calendar app for Windows 10.
- Other clients - Other protocols identified as utilizing legacy authentication.
-For more information about these authentication protocols and services, see [Sign-in activity reports in the Azure portal](../reports-monitoring/concept-sign-ins.md#filter-sign-in-activities).
+For more information about these authentication protocols and services, see [Sign-in activity reports](../reports-monitoring/concept-sign-ins.md#filter-sign-in-activities).
### Identify legacy authentication use
Before you can block legacy authentication in your directory, you need to first
#### Sign-in log indicators
-1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Azure Active Directory** > **Sign-in logs**.
1. Add the **Client App** column if it isn't shown by clicking on **Columns** > **Client App**. 1. Select **Add filters** > **Client App** > choose all of the legacy authentication protocols and select **Apply**. 1. If you've activated the [new sign-in activity reports preview](../reports-monitoring/concept-all-sign-ins.md), repeat the above steps also on the **User sign-ins (non-interactive)** tab.
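If you'd rather script this triage than click through the portal, the same filter can be approximated with Microsoft Graph PowerShell. This is a minimal sketch, assuming the Microsoft.Graph.Reports module and AuditLog.Read.All consent; the `clientAppUsed` values shown are examples and may not cover every legacy protocol in your tenant.

```powershell
# A sketch: pull recent sign-ins that used legacy protocols and summarize
# which users and protocols would be affected by a block policy.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$legacySignIns = Get-MgAuditLogSignIn -All -Filter (
    "clientAppUsed eq 'Exchange ActiveSync' or clientAppUsed eq 'IMAP4' " +
    "or clientAppUsed eq 'POP3' or clientAppUsed eq 'Other clients'"
)

# Group by user and protocol to get a quick legacy-authentication inventory.
$legacySignIns |
    Group-Object UserPrincipalName, ClientAppUsed |
    Sort-Object Count -Descending |
    Select-Object Count, Name
```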
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
There are multiple scenarios that organizations can now enable using filter for
## Create a Conditional Access policy
-Filter for devices is an option when creating a Conditional Access policy in the Azure portal or using the Microsoft Graph API.
+Filter for devices is an optional control when creating a Conditional Access policy.
The following steps will help create two Conditional Access policies to support the first scenario under [Common scenarios](#common-scenarios).
Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
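The portal steps above have a Microsoft Graph PowerShell equivalent. The following is a sketch only, assuming the Microsoft.Graph.Identity.SignIns module: it combines Policy 1's grant controls with an illustrative device filter, the display name and filter rule are examples, and the role and application IDs are the well-known identifiers for Global Administrator and Microsoft Azure Management.

```powershell
# A sketch, not the article's exact walkthrough: create the policy in
# report-only mode with a device filter, then enable it once validated.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Require MFA and compliant device for Global Admins"   # example name
    state       = "enabledForReportingButNotEnforced"                    # report-only while testing
    conditions  = @{
        users        = @{ includeRoles = @("62e90394-69f5-4237-9190-012177145e10") }        # Global Administrator
        applications = @{ includeApplications = @("797f4846-ba00-4fd7-ba43-dac1f8f63013") } # Microsoft Azure Management
        devices      = @{
            # Illustrative filter rule: scope the policy to devices tagged as SAW workstations.
            deviceFilter = @{ mode = "include"; rule = 'device.extensionAttribute1 -eq "SAW"' }
        }
    }
    grantControls = @{ operator = "AND"; builtInControls = @("mfa", "compliantDevice") }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```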
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
description: What are cloud apps, actions, and authentication context in an Azur
+ Last updated 06/27/2023
For example, an organization may keep files in SharePoint sites like the lunch m
### Configure authentication contexts
-Authentication contexts are managed in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Authentication context**.
+Authentication contexts are managed under **Azure Active Directory** > **Security** > **Conditional Access** > **Authentication context**.
-![Manage authentication context in the Azure portal](./media/concept-conditional-access-cloud-apps/conditional-access-authentication-context-get-started.png)
+![Manage authentication context](./media/concept-conditional-access-cloud-apps/conditional-access-authentication-context-get-started.png)
-Create new authentication context definitions by selecting **New authentication context** in the Azure portal. Organizations are limited to a total of 25 authentication context definitions. Configure the following attributes:
+Create new authentication context definitions by selecting **New authentication context**. Organizations are limited to a total of 25 authentication context definitions. Configure the following attributes:
- **Display name** is the name that is used to identify the authentication context in Azure AD and across applications that consume authentication contexts. We recommend names that can be used across resources, like "trusted devices", to reduce the number of authentication contexts needed. Having a reduced set limits the number of redirects and provides a better end-to-end user experience.
- **Description** provides more information about how the authentication context is used by Azure AD administrators and those applying authentication contexts to resources.
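Authentication contexts can also be created programmatically. Here's a minimal sketch with Microsoft Graph PowerShell, assuming the Microsoft.Graph.Identity.SignIns module; the display name and description are examples.

```powershell
# A sketch: create an authentication context definition. The id must be one of
# the fixed values c1 through c25, which is where the 25-definition limit comes from.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

New-MgIdentityConditionalAccessAuthenticationContextClassReference -BodyParameter @{
    id          = "c1"
    displayName = "Trusted devices"
    description = "Require a compliant device for resources tagged with this context"
    isAvailable = $true   # make the context selectable by apps and policies
}
```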
active-directory Concept Conditional Access Policy Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policy-common.md
Policies in this category provide new ways to protect against compromise.
-Find these templates in the **[Microsoft Entra admin center](https://entra.microsoft.com)** > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Create new policy from templates**. Select **Show more** to see all policy templates in each category.
+Find these templates in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Create new policy from templates**. Select **Show more** to see all policy templates in each category.
:::image type="content" source="media/concept-conditional-access-policy-common/create-policy-from-template-identity.png" alt-text="Screenshot that shows how to create a Conditional Access policy from a preconfigured template in the Microsoft Entra admin center." lightbox="media/concept-conditional-access-policy-common/create-policy-from-template-identity.png":::
> [!IMPORTANT]
-> Conditional Access template policies will exclude only the user creating the policy from the template. If your organization needs to [exclude other accounts](../roles/security-emergency-access.md), you will be able to modify the policy once they are created. Simply navigate to **Microsoft Entra admin center** > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Policies**, select the policy to open the editor and modify the excluded users and groups to select accounts you want to exclude.
+> Conditional Access template policies will exclude only the user creating the policy from the template. If your organization needs to [exclude other accounts](../roles/security-emergency-access.md), you will be able to modify the policies once they are created. You can find these policies in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Policies**. Select a policy to open the editor and modify the excluded users and groups to select accounts you want to exclude.
By default, each policy is created in [report-only mode](concept-conditional-access-report-only.md). We recommend that organizations test and monitor usage to ensure the intended result before turning on each policy.
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md
For more information, see the article [Configure authentication session manageme
- **Disable** only works when **All cloud apps** is selected, no conditions are selected, and **Disable** is selected under **Session** > **Customize continuous access evaluation** in a Conditional Access policy. You can choose to disable all users or specific users and groups.
## Disable resilience defaults
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
By default the policy provides an option to exclude the current user from the po
![Warning, don't lock yourself out!](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups-lockout-warning.png)
-If you do find yourself locked out, see [What to do if you're locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out-of-the-azure-portal)
+If you do find yourself locked out, see [What to do if you're locked out?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out)
### External partner access
active-directory Concept Continuous Access Evaluation Strict Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-strict-enforcement.md
Repeat steps 2 and 3 with expanding groups of users until Strictly Enforce Locat
Administrators can investigate the Sign-in logs to find cases with **IP address (seen by resource)**.
-1. Sign in to the **Azure portal** as at least a Global Reader.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
1. Browse to **Azure Active Directory** > **Sign-ins**. 1. Find events to review by adding filters and columns to filter out unnecessary information. 1. Add the **IP address (seen by resource)** column and filter out any blank items to narrow the scope. The **IP address (seen by resource)** is blank when that IP seen by Azure AD matches the IP address seen by the resource.
active-directory Concept Continuous Access Evaluation Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-workload.md
Last updated 07/22/2022
-+
When a client's access to a resource is blocked due to CAE being triggered, th
The following steps detail how an admin can verify sign in activity in the sign-in logs:
-1. Sign into the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service Principal Sign-ins**. You can use filters to ease the debugging process. 1. Select an entry to see activity details. The **Continuous access evaluation** field indicates whether a CAE token was issued in a particular sign-in attempt.
The following steps detail how an admin can verify sign in activity in the sign-
- [Register an application with Azure AD and create a service principal](../develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal)
- [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md)
- [Sample application using continuous access evaluation](https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae)
+- [Securing workload identities with Azure AD Identity Protection](../identity-protection/concept-workload-identity-risk.md)
- [What is continuous access evaluation?](../conditional-access/concept-continuous-access-evaluation.md)
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Customers who have configured CAE settings under Security before have to migrate
:::image type="content" source="media/concept-continuous-access-evaluation/migrate-continuous-access-evaluation.png" alt-text="Portal view showing the option to migrate continuous access evaluation to a Conditional Access policy." lightbox="media/concept-continuous-access-evaluation/migrate-continuous-access-evaluation.png":::
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation**. 1. You have the option to **Migrate** your policy. This action is the only one that you have access to at this point. 1. Browse to **Conditional Access** and you find a new policy named **Conditional Access policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it.
Changes made to Conditional Access policies and group membership made by adminis
When Conditional Access policy or group membership changes need to be applied to certain users immediately, you have two options.
- Run the [Revoke-MgUserSignInSession PowerShell command](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession) to revoke all refresh tokens of a specified user.
-- Select "Revoke Session" on the user profile page in the Azure portal to revoke the user's session to ensure that the updated policies are applied immediately.
+- Select "Revoke Session" on the user profile page to revoke the user's session to ensure that the updated policies are applied immediately.
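To illustrate the first option, here's a minimal sketch using the linked cmdlet from the Microsoft.Graph.Users.Actions module; the user principal name is a placeholder.

```powershell
# A sketch: invalidate all refresh tokens and session cookies for one user,
# forcing re-authentication so updated policies apply immediately.
Connect-MgGraph -Scopes "User.RevokeSessions.All"

Revoke-MgUserSignInSession -UserId "adelev@contoso.com"   # placeholder user
```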
### IP address variation and networks with IP address shared or unknown egress IPs
active-directory Concept Filter For Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-filter-for-applications.md
Custom security attributes are security sensitive and can only be managed by del
1. Assign the appropriate role to the users who will manage or report on these attributes at the directory scope.
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+ For detailed steps, see [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md).
## Create custom security attributes
Follow the instructions in the article [Add or deactivate custom security attri
:::image type="content" source="media/concept-filter-for-applications/edit-filter-for-applications.png" alt-text="A screenshot showing a Conditional Access policy with the edit filter window showing an attribute of require MFA." lightbox="media/concept-filter-for-applications/edit-filter-for-applications.png":::
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
Set up a sample application that demonstrates how a job or a Windows service ca
When you don't have a service principal listed in your tenant, it can't be targeted. The Office 365 suite is an example of one such service principal.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Enterprise applications**. 1. Select the service principal you want to apply a custom security attribute to. 1. Under **Manage** > **Custom security attributes (preview)**, select **Add assignment**.
active-directory Concept Token Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md
Users who perform specialized roles like those described in [Privileged access s
The steps that follow help create a Conditional Access policy to require token protection for Exchange Online and SharePoint Online on Windows devices.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
Monitoring Conditional Access enforcement of token protection before and after e
Use the Azure AD sign-in log to verify the outcome of a token protection enforcement policy in report-only mode or in enabled mode.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Sign-in logs**. 1. Select a specific request to determine if the policy is applied or not. 1. Go to the **Conditional Access** or **Report-Only** pane depending on its state and select the name of your policy requiring token protection.
active-directory How To App Protection Policy Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-app-protection-policy-windows.md
The following policy is put in to [Report-only mode](howto-conditional-access-in
The following steps help create a Conditional Access policy requiring an app protection policy when using a Windows device. The app protection policy must also be configured and assigned to your users in Microsoft Intune. For more information about how to create the app protection policy, see the article [Preview: App protection policy settings for Windows](/mem/intune/apps/app-protection-policy-settings-windows).
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory How To Policy Mfa Admin Portals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-policy-mfa-admin-portals.md
Microsoft recommends securing access to any Microsoft admin portals like Microso
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory How To Policy Phish Resistant Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-policy-phish-resistant-admin-mfa.md
Organizations can choose to include or exclude roles as they see fit.
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-apis.md
description: Using the Azure AD Conditional Access APIs and PowerShell to manage
+ Last updated 09/10/2020
active-directory Howto Conditional Access Insights Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md
If you haven't integrated Azure AD logs with Azure Monitor logs, you need to tak
To access the insights and reporting workbook:
-1. Sign in to the **Azure portal**.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Insights and reporting**.
### Get started: Select parameters
You can also investigate the sign-ins of a specific user by searching for sign-i
To configure a Conditional Access policy in report-only mode:
-1. Sign into the **Azure portal** as a Conditional Access Administrator, security administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select an existing policy or create a new policy. 1. Under **Enable policy** set the toggle to **Report-only** mode.
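The same toggle can be scripted. Here's a minimal sketch with Microsoft Graph PowerShell, assuming the Microsoft.Graph.Identity.SignIns module; the policy display name is an example.

```powershell
# A sketch: set an existing Conditional Access policy to report-only mode.
# Valid states are "enabled", "disabled", and
# "enabledForReportingButNotEnforced" (report-only).
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = Get-MgIdentityConditionalAccessPolicy |
    Where-Object { $_.DisplayName -eq "Require MFA for all users" }   # example name

Update-MgIdentityConditionalAccessPolicy `
    -ConditionalAccessPolicyId $policy.Id `
    -State "enabledForReportingButNotEnforced"
```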
To configure a Conditional Access policy in report-only mode:
### Why are queries failing due to a permissions error?
-In order to access the workbook, you need the proper Azure AD permissions and Log Analytics workspace permissions. To test whether you have the proper workspace permissions by running a sample log analytics query:
+In order to access the workbook, you need the proper permissions in Azure AD and Log Analytics. To test whether you have the proper workspace permissions, run a sample Log Analytics query:
-1. Sign in to the **Azure portal**.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Log Analytics**. 1. Type `SigninLogs` into the query box and select **Run**. 1. If the query doesn't return any results, your workspace may not have been configured correctly.
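The equivalent test can be run from PowerShell. This is a minimal sketch, assuming the Az.OperationalInsights module and a placeholder workspace ID.

```powershell
# A sketch: run the same SigninLogs probe against a Log Analytics workspace.
Connect-AzAccount

$result = Invoke-AzOperationalInsightsQuery `
    -WorkspaceId "00000000-0000-0000-0000-000000000000" `
    -Query "SigninLogs | take 10"

# No rows (or an authorization error) suggests missing workspace permissions
# or a workspace that isn't receiving Azure AD sign-in logs.
$result.Results
```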
active-directory Howto Conditional Access Policy Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md
Organizations can choose to include or exclude roles as they see fit.
The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multifactor authentication. Some organizations may be ready to move to stronger authentication methods for their administrators. These organizations may choose to implement a policy like the one described in the article [Require phishing-resistant multifactor authentication for administrators](how-to-policy-phish-resistant-admin-mfa.md).
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy All Users Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md
Organizations that use [Subscription Activation](/windows/deployment/windows-10-
The following steps help create a Conditional Access policy to require all users do multifactor authentication.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Authentication Strength External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-authentication-strength-external.md
The authentication methods that external users can use to satisfy MFA requiremen
Determine if one of the built-in authentication strengths will work for your scenario or if you'll need to create a custom authentication strength.
-1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Authentication methods** > **Authentication strengths**.
1. Review the built-in authentication strengths to see if one of them meets your requirements. 1. If you want to enforce a different set of authentication methods, [create a custom authentication strength](https://aka.ms/b2b-auth-strengths).
Determine if one of the built-in authentication strengths will work for your sce
Use the following steps to create a Conditional Access policy that applies an authentication strength to external users.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md
The following steps will help create a Conditional Access policy to require user
> [!CAUTION]
> Make sure you understand how Conditional Access works before setting up a policy to manage access to Microsoft Azure Management. Make sure you don't create conditions that could block your own access to the portal.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Block Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-access.md
For organizations with a conservative cloud migration approach, the block all policy is an option that can be used.
> [!CAUTION]
-> Misconfiguration of a block policy can lead to organizations being locked out of the Azure portal.
+> Misconfiguration of a block policy can lead to organizations being locked out.
Policies like these can have unintended side effects. Proper testing and validation are vital before enabling. Administrators should utilize tools such as [Conditional Access report-only mode](concept-conditional-access-report-only.md) and [the What If tool in Conditional Access](what-if-tool.md) when making changes.
The following steps will help create Conditional Access policies to block access
The first policy blocks access to all apps except for Microsoft 365 applications if not on a trusted location.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Block Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md
Organizations can choose to deploy this policy using the steps outlined below or
The following steps will help create a Conditional Access policy to block legacy authentication requests. This policy is put into [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact it will have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
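For reference, a policy of this shape can be expressed in Microsoft Graph PowerShell. This is a sketch under assumptions, not the article's exact walkthrough: the display name is an example and the emergency-access group ID is a placeholder you must replace.

```powershell
# A sketch: block legacy authentication tenant-wide, starting in report-only mode.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Block legacy authentication"   # example name
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users          = @{
            includeUsers  = @("All")
            excludeGroups = @("<emergency-access-group-object-id>")   # placeholder
        }
        applications   = @{ includeApplications = @("All") }
        # Legacy protocols surface as these client app types.
        clientAppTypes = @("exchangeActiveSync", "other")
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```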
active-directory Howto Conditional Access Policy Compliant Device Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device-admin.md
Organizations can choose to include or exclude roles as they see fit.
The following steps will help create a Conditional Access policy to require multifactor authentication, devices accessing resources be marked as compliant with your organization's Intune compliance policies, or be hybrid Azure AD joined.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Compliant Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md
Requiring a hybrid Azure AD joined device is dependent on your devices already b
The following steps will help create a Conditional Access policy to require multifactor authentication, devices accessing resources be marked as compliant with your organization's Intune compliance policies, or be hybrid Azure AD joined.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md
With the location condition in Conditional Access, you can control access to you
## Define locations
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Named locations**. 1. Choose the type of location to create. 1. **Countries location** or **IP ranges location**.
More information about the location condition in Conditional Access can be found
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
Organizations can choose to deploy this policy using the steps outlined below or
The following policy applies to the selected users who attempt to register using the combined registration experience. The policy requires users to be in a trusted network location, perform multifactor authentication, or use Temporary Access Pass credentials.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration with TAP**.
Organizations may choose to require other grant controls with or in place of **R
For [guest users](../external-identities/what-is-b2b.md) who need to register for multifactor authentication in your directory, you may choose to block registration from outside of [trusted network locations](concept-conditional-access-conditions.md#locations) using the following guide.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration on Trusted Networks**.
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Organizations can choose to deploy this policy using the steps outlined below or
## Enable with Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Organizations can choose to deploy this policy using the steps outlined below or
## Enable with Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
description: Customize Azure AD authentication session configuration including u
+ Last updated 07/18/2023
To make sure that your policy works as expected, the recommended best practice i
### Policy 1: Sign-in frequency control
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
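A sign-in frequency policy like Policy 1 can also be sketched in Microsoft Graph PowerShell. The scoping below is trimmed to essentials and every name is an example; treat it as a starting point, not a finished policy.

```powershell
# A sketch: apply a one-hour sign-in frequency session control, in report-only mode.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Sign-in frequency: 1 hour"   # example name
    state       = "enabledForReportingButNotEnforced"
    conditions  = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("All") }
    }
    sessionControls = @{
        signInFrequency = @{ isEnabled = $true; type = "hours"; value = 1 }
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```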
To make sure that your policy works as expected, the recommended best practice i
### Policy 2: Persistent browser session
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
To make sure that your policy works as expected, the recommended best practice i
1. Select **Persistent browser session**.
> [!NOTE]
- > Persistent Browser Session configuration in Azure AD Conditional Access overrides the “Stay signed in?” setting in the company branding pane in the Azure portal for the same user if you have configured both policies.
+ > Persistent Browser Session configuration in Azure AD Conditional Access overrides the “Stay signed in?” setting in the company branding pane for the same user if you have configured both policies.
1. Select a value from the dropdown. 1. Save your policy.
### Policy 3: Sign-in frequency control every time risky user
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
Administrators can monitor and troubleshoot sign in events where [continuous acc
Administrators can monitor user sign-ins where continuous access evaluation (CAE) is applied. This information is found in the Azure AD sign-in logs:
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Sign-in logs**. 1. Apply the **Is CAE Token** filter.
The continuous access evaluation insights workbook allows administrators to view
Log Analytics integration must be completed before workbooks are displayed. For more information about how to stream Azure AD sign-in logs to a Log Analytics workspace, see the article [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Workbooks**. 1. Under **Public Templates**, search for **Continuous access evaluation insights**.
Admins can view records filtered by time range and application. Admins can compa
To unblock users, administrators can add specific IP addresses to a trusted named location.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. Here you can create or update trusted IP locations.
> [!NOTE]
active-directory Howto Policy App Enforced Restriction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-app-enforced-restriction.md
Block or limit access to SharePoint, OneDrive, and Exchange content from unmanag
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
The following steps will help create a Conditional Access policy requiring an ap
Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates](concept-conditional-access-policy-common.md#conditional-access-templates).
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
After administrators confirm the settings using [report-only mode](howto-conditi
This policy will block all Exchange ActiveSync clients using basic authentication from connecting to Exchange Online.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Policy Guest Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-guest-mfa.md
Require guest users perform multifactor authentication when accessing your organ
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Policy Persistent Browser Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-persistent-browser-session.md
Protect user access on unmanaged devices by preventing browser sessions from rem
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Howto Policy Unknown Unsupported Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-unknown-unsupported-device.md
Users will be blocked from accessing company resources when the device type is u
## Create a Conditional Access policy
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
The location found using the public IP address a client provides to Azure Active
## Named locations
-Locations exist in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries/regions.
+Locations exist under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries/regions.
> [!VIDEO https://www.youtube.com/embed/P80SffTIThY]
To define a named location by IPv4/IPv6 address ranges, you need to provide:
- One or more IP ranges.
- Optionally **Mark as trusted location**.
-![New IP locations in the Azure portal](./media/location-condition/new-trusted-location.png)
+![New IP locations](./media/location-condition/new-trusted-location.png)
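For scripted deployments, an IP-based named location can be created with Microsoft Graph PowerShell. A minimal sketch follows; "Headquarters" and the CIDR range (a documentation block) are placeholders for your own egress ranges.

```powershell
# A sketch: create a trusted IPv4 named location for Conditional Access.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

New-MgIdentityConditionalAccessNamedLocation -BodyParameter @{
    "@odata.type" = "#microsoft.graph.ipNamedLocation"
    displayName   = "Headquarters"                       # placeholder name
    isTrusted     = $true
    ipRanges      = @(
        @{ "@odata.type" = "#microsoft.graph.iPv4CidrRange"; cidrAddress = "203.0.113.0/24" }
    )
}
```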
Named locations defined by IPv4/IPv6 address ranges are subject to the following limitations:
To define a named location by country/region, you need to provide:
- Add one or more countries/regions.
- Optionally choose to **Include unknown countries/regions**.
-![Country as a location in the Azure portal](./media/location-condition/new-named-location-country-region.png)
+![Country as a location](./media/location-condition/new-named-location-country-region.png)
If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address (starting April 3, 2023) to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries/regions to block traffic from countries/regions where they don't do business.
active-directory Migrate Approved Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/migrate-approved-client-app.md
The following steps make an existing Conditional Access policy require an approv
Organizations can choose to update their policies using the following steps.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select a policy that uses the approved client app grant. 1. Under **Access controls** > **Grant**, select **Grant access**.
The following steps help create a Conditional Access policy requiring an approve
Organizations can choose to deploy this policy using the following steps.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/overview.md
Administrators can create policies from scratch or start from a template policy
Administrators with the [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) role can manage policies in Azure AD.
-Conditional Access is found in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access**.
+Conditional Access is found in the [Microsoft Entra admin center](https://entra.microsoft.com) under **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**.
- The **Overview** page provides a summary of policy state, users, devices, and applications as well as general and security alerts with suggestions.
- The **Coverage** page provides a synopsis of applications with and without Conditional Access policy coverage over the last seven days.
active-directory Plan Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md
Taking into account our learnings in the use of Conditional Access and supportin
**Ensure that every app has at least one Conditional Access policy applied**. From a security perspective it's better to create a policy that encompasses **All cloud apps**, and then exclude applications that you don't want the policy to apply to. This practice ensures you don't need to update Conditional Access policies every time you onboard a new application. > [!TIP]
-> Be very careful in using block and all apps in a single policy. This could lock admins out of the Azure portal, and exclusions cannot be configured for important endpoints such as Microsoft Graph.
+> Be very careful in using block and all apps in a single policy. This could lock admins out, and exclusions cannot be configured for important endpoints such as Microsoft Graph.
### Minimize the number of Conditional Access policies
active-directory Policy Migration Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/policy-migration-mfa.md
# Migrate a classic policy in the Azure portal
-This article shows how to migrate a classic policy that requires **multifactor authentication** for a cloud app. Although it isn't a prerequisite, we recommend that you read [Migrate classic policies in the Azure portal](policy-migration.md) before you start migrating your classic policies.
+This article shows how to migrate a classic policy that requires **multifactor authentication** for a cloud app. Although it isn't a prerequisite, we recommend that you read [Migrate classic policies](policy-migration.md) before you start migrating your classic policies.
![Classic policy details requiring MFA for Salesforce app](./media/policy-migration/33.png)
The migration process consists of the following steps:
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
-1. Navigate to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **Classic policies**.
The migration process consists of the following steps:
1. In the list of classic policies, select the policy you wish to migrate. Document the configuration settings so that you can re-create with a new Conditional Access policy.
-For examples of common policies and their configuration in the Azure portal, see the article [Common Conditional Access policies](concept-conditional-access-policy-common.md).
+For examples of common policies and their configuration, see the article [Common Conditional Access policies](concept-conditional-access-policy-common.md).
## Disable the classic policy
active-directory Require Tou https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/require-tou.md
In this quickstart, you'll configure a Conditional Access policy in Azure Active
To complete the scenario in this quickstart, you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Azure AD Premium P1 or P2 - Azure AD Conditional Access is an Azure AD Premium capability. You can sign up for a trial in the Azure portal.
+- Azure AD Premium P1 or P2 - Azure AD Conditional Access is an Azure AD Premium capability.
+- A test account to sign in with - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users.md#add-a-new-user). ## Sign-in without terms of use
This section provides you with the steps to create a sample ToU. When you create
1. In Microsoft Word, create a new document. 1. Type **My terms of use**, and then save the document on your computer as **mytou.pdf**.
-1. Sign in to the [Azure portal](https://portal.azure.com) as a Conditional Access Administrator, Security Administrator, or a Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
- :::image type="content" source="media/require-tou/terms-of-use-azure-ad-conditional-access.png" alt-text="Screenshot of terms of use shown in the Azure portal highlighting the new terms button." lightbox="media/require-tou/terms-of-use-azure-ad-conditional-access.png":::
+ :::image type="content" source="media/require-tou/terms-of-use-azure-ad-conditional-access.png" alt-text="Screenshot of terms of use highlighting the new terms button." lightbox="media/require-tou/terms-of-use-azure-ad-conditional-access.png":::
1. In the menu on the top, select **New terms**.
- :::image type="content" source="media/require-tou/new-terms-of-use-creation.png" alt-text="Screenshot that shows creating a new terms of use policy in the Azure portal." lightbox="media/require-tou/new-terms-of-use-creation.png":::
+ :::image type="content" source="media/require-tou/new-terms-of-use-creation.png" alt-text="Screenshot that shows creating a new terms of use policy." lightbox="media/require-tou/new-terms-of-use-creation.png":::
1. In the **Name** textbox, type **My TOU**. 1. Upload your terms of use PDF file.
active-directory Resilience Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md
You can configure Conditional Access resilience defaults from the Azure portal,
### Azure portal
-1. Navigate to the **Azure portal** > **Security** > **Conditional Access**
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**
1. Create a new policy or select an existing policy 1. Open the Session control settings 1. Select Disable resilience defaults to disable the setting for this policy. Sign-ins in scope of the policy will be blocked during an Azure AD outage
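For automation, the portal toggle corresponds to the `disableResilienceDefaults` session control on the Microsoft Graph conditionalAccessPolicy resource. A minimal sketch, with `{policyId}` as a placeholder for the policy's object ID:

```http
PATCH https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/{policyId}
Content-type: application/json

{
    "sessionControls": {
        "disableResilienceDefaults": true
    }
}
```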
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Azure AD terms of use policies use the PDF format to present content. The PDF fi
Once you've completed your terms of use policy document, use the following procedure to add it.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator or Security Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select **New terms**. ![New term of use pane to specify your terms of use settings](./media/terms-of-use/new-tou.png)
-1. In the **Name** box, enter a name for the terms of use policy used in the Azure portal.
+1. In the **Name** box, enter a name for the terms of use policy.
1. For **Terms of use document**, browse to your finalized terms of use policy PDF and select it. 1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user sees is based on their browser preferences. 1. In the **Display name** box, enter a title that users see when they sign in.
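Terms of use agreements are also exposed through Microsoft Graph under `identityGovernance`. A sketch to list the agreements in your tenant (the delegated *Agreement.Read.All* permission is the assumed requirement):

```http
GET https://graph.microsoft.com/v1.0/identityGovernance/termsOfUse/agreements
```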
Once you've completed your terms of use policy document, use the following proce
The Terms of use blade shows a count of the users who have accepted and declined. These counts and who accepted/declined are stored for the life of the terms of use policy.
-1. Sign in to Azure and navigate to **Terms of use** at [https://aka.ms/catou](https://aka.ms/catou).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**.
![Terms of use blade listing the number of users who have accepted and declined](./media/terms-of-use/view-tou.png)
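Acceptance records can also be read programmatically. A hedged sketch, where `{agreementId}` is a placeholder for the agreement's object ID:

```http
GET https://graph.microsoft.com/v1.0/identityGovernance/termsOfUse/agreements/{agreementId}/acceptances
```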
If you want to view more activity, Azure AD terms of use policies include audit
To get started with Azure AD audit logs, use the following procedure:
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select a terms of use policy. 1. Select **View audit logs**.
Users can review and see the terms of use policies that they've accepted by usin
You can edit some details of terms of use policies, but you can't modify an existing document. The following procedure describes how to edit the details.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit terms**.
You can edit some details of terms of use policies, but you can't modify an exis
## Update the version or pdf of an existing terms of use
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit terms**.
You can edit some details of terms of use policies, but you can't modify an exis
## View previous versions of a ToU
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy for which you want to view a version history. 1. Select **Languages and version history**
You can edit some details of terms of use policies, but you can't modify an exis
## See who has accepted each version
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. To see who has currently accepted the ToU, select the number under the **Accepted** column for the ToU you want. 1. By default, the next page will show you the current state of each user's acceptance to the ToU
You can edit some details of terms of use policies, but you can't modify an exis
The following procedure describes how to add a ToU language.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit Terms**
If a user is using a browser that isn't supported, they're asked to use a differen
You can delete old terms of use policies using the following procedure.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to remove. 1. Select **Delete terms**.
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
Organizations should avoid the following configurations:
**For all users, all cloud apps:** - **Block access** - This configuration blocks your entire organization.-- **Require device to be marked as compliant** - For users that haven't enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you're an administrator without an enrolled device, this policy blocks you from getting back into the Azure portal to change the policy.
+- **Require device to be marked as compliant** - For users that haven't enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you're an administrator without an enrolled device, this policy blocks you from getting back in to change the policy.
- **Require Hybrid Azure AD domain joined device** - This policy also has the potential to block access for all users in your organization if they don't have a hybrid Azure AD joined device. - **Require app protection policy** - This policy also has the potential to block access for all users in your organization if you don't have an Intune app protection policy. If you're an administrator without a client application that has an Intune app protection policy, this policy blocks you from getting back into portals such as Intune and Azure.
More information can be found about the problem by clicking **More Details** in
To find out which Conditional Access policy or policies applied and why, do the following:
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Global Reader.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Sign-ins**. 1. Find the event for the sign-in to review. Add or remove filters and columns to filter out unnecessary information. 1. Add filters to narrow the scope:
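The same sign-in events are available from Microsoft Graph; each returned event carries an `appliedConditionalAccessPolicies` collection showing which policies were evaluated and their result. A minimal sketch:

```http
GET https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=10
```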
To determine the service dependency, check the sign-ins log for the application
:::image type="content" source="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png" alt-text="Screenshot that shows an example sign-in log showing an Application calling a Resource. This scenario is also known as a service dependency." lightbox="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png":::
-## What to do if you're locked out of the Azure portal?
+## What to do if you're locked out?
-If you're locked out of the Azure portal due to an incorrect setting in a Conditional Access policy:
+If you're locked out due to an incorrect setting in a Conditional Access policy:
-- Check is there are other administrators in your organization that aren't blocked yet. An administrator with access to the Azure portal can disable the policy that is impacting your sign-in.
+- Check if there are other administrators in your organization who aren't blocked yet. An administrator with access can disable the policy that is impacting your sign-in.
- If none of the administrators in your organization can update the policy, submit a support request. Microsoft support can review and, upon confirmation, update the Conditional Access policies that are preventing access.

## Next steps

- [Use the What If tool to troubleshoot Conditional Access policies](what-if-tool.md)
-- [Sign-in activity reports in the Azure portal](../reports-monitoring/concept-sign-ins.md)
+- [Sign-in activity reports](../reports-monitoring/concept-sign-ins.md)
- [Troubleshooting Conditional Access using the What If tool](troubleshoot-conditional-access-what-if.md)
active-directory Troubleshoot Policy Changes Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-policy-changes-audit-log.md
Find these options in the **Azure portal** > **Azure Active Directory**, **Diagn
## Use the audit log
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Audit logs**. 1. Select the **Date** range you want to query. 1. From the **Service** filter, select **Conditional Access** and select the **Apply** button.
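The same query can be run against Microsoft Graph. A hedged sketch; the `loggedByService` filter value mirrors the portal's **Service** filter name and is an assumption here:

```http
GET https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$filter=loggedByService eq 'Conditional Access'
```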
active-directory What If Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/what-if-tool.md
When the evaluation has finished, the tool generates a report of the affected po
## Running the tool
-You can find the **What If** tool in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **What If**.
+You can find the **What If** tool under **Azure Active Directory** > **Security** > **Conditional Access** > **What If**.
Before you can run the What If tool, you must provide the conditions you want to evaluate.
Before you can run the What If tool, you must provide the conditions you want to
The only required selection is a user or workload identity. All other conditions are optional. For a definition of these conditions, see the article [Building a Conditional Access policy](concept-conditional-access-policies.md). ## Evaluation
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
# Conditional Access for workload identities
-Conditional Access policies have historically applied only to users when they access apps and services like SharePoint online or the Azure portal. We're now extending support for Conditional Access policies to be applied to service principals owned by the organization. We call this capability Conditional Access for workload identities.
+Conditional Access policies have historically applied only to users when they access apps and services like SharePoint Online. We're now extending support for Conditional Access policies to be applied to service principals owned by the organization. We call this capability Conditional Access for workload identities.
A [workload identity](../workload-identities/workload-identities-overview.md) is an identity that allows an application or service principal access to resources, sometimes in the context of a user. These workload identities differ from traditional user accounts as they:
Conditional Access for workload identities enables blocking service principals f
Create a location based Conditional Access policy that applies to service principals.
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
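If you prefer to script this, a minimal Microsoft Graph sketch of a location-based policy for service principals follows. The display name, report-only state, and named-location ID are placeholder assumptions, not values from this article, and required fields may vary by tenant configuration:

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-type: application/json

{
    "displayName": "Block service principals from untrusted locations (sketch)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "clientApplications": {
            "includeServicePrincipals": ["ServicePrincipalsInMyTenant"]
        },
        "applications": {
            "includeApplications": ["All"]
        },
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": ["{trustedNamedLocationId}"]
        }
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["block"]
    }
}
```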
Create a risk-based Conditional Access policy that applies to service principals
:::image type="content" source="media/workload-identity/conditional-access-workload-identity-risk-policy.png" alt-text="Creating a Conditional Access policy with a workload identity and risk as a condition." lightbox="media/workload-identity/conditional-access-workload-identity-risk-policy.png":::
-1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
If you wish to roll back this feature, you can delete or disable any created pol
The sign-in logs are used to review how policy is enforced for service principals or the expected effects of policy when using report-only mode.
-1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service principal sign-ins**.
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Monitoring & health** > **Sign-in logs** > **Service principal sign-ins**.
1. Select a log entry and choose the **Conditional Access** tab to view evaluation information. Failure reason when Service Principal is blocked by Conditional Access: "Access has been blocked due to Conditional Access policies."
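Service principal sign-in events can also be queried from the Microsoft Graph beta endpoint; the `signInEventTypes` filter shown is an assumption based on the documented pattern for isolating these events:

```http
GET https://graph.microsoft.com/beta/auditLogs/signIns?$filter=signInEventTypes/any(t: t eq 'servicePrincipal')
```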
To view results of a risk-based policy, refer to the **Report-only** tab of even
You can get the object ID of the service principal from Azure AD Enterprise Applications. The Object ID in Azure AD App registrations can't be used. This identifier is the Object ID of the app registration, not of the service principal.
-1. Browse to the **Azure portal** > **Azure Active Directory** > **Enterprise Applications**, find the application you registered.
+1. Browse to **Microsoft Entra ID (Azure AD)** > **Applications** > **Enterprise Applications**, find the application you registered.
1. From the **Overview** tab, copy the **Object ID** of the application. This identifier is unique to the service principal and is used by Conditional Access policy to find the calling app. ### Microsoft Graph
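A sketch of the equivalent lookup, returning the service principal (and its **id**) for a given application ID; `{appId}` is a placeholder:

```http
GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=appId eq '{appId}'
```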
active-directory Api Find An Api How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/api-find-an-api-how-to.md
- Title: Find an API for a custom-developed app
-description: How to configure the permissions you need to access a particular API in your custom developed Azure AD application
-------- Previously updated : 09/27/2021----
-# How to find a specific API needed for a custom-developed application
-
-Access to APIs requires configuration of access scopes and roles. If you want to expose your resource application web APIs to client applications, configure access scopes and roles for the API. If you want a client application to access a web API, configure permissions to access the API in the app registration.
-
-## Configuring a resource application to expose web APIs
-
-When you expose your web API, the API is displayed in the **Select an API** list when adding permissions to an app registration. To add access scopes, follow the steps outlined in [Configure an application to expose web APIs](quickstart-configure-app-expose-web-apis.md).
-
-## Configuring a client application to access web APIs
-
-When you add permissions to your app registration, you can **add API access** to exposed web APIs. To access web APIs, follow the steps outlined in [Configure a client application to access web APIs](quickstart-configure-app-access-web-apis.md).
-
-## Next steps
--- [Understanding the Azure Active Directory application manifest](./reference-app-manifest.md)
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
Last updated 05/22/2023 -+
active-directory Authentication Flows App Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-flows-app-scenarios.md
Title: Microsoft identity platform authentication flows & app scenarios
+ Title: Microsoft identity platform app types and authentication flows
description: Learn about application scenarios for the Microsoft identity platform, including authenticating identities, acquiring tokens, and calling protected APIs. Previously updated : 05/05/2022 Last updated : 08/11/2023
-#Customer intent: As an app developer, I want to learn about authentication flows and application scenarios so I can create applications protected by the Microsoft identity platform.
+# Customer intent: As an app developer, I want to learn about authentication flows and application scenarios so I can create applications protected by the Microsoft identity platform.
-# Authentication flows and application scenarios
+# Microsoft identity platform app types and authentication flows
The Microsoft identity platform supports authentication for different kinds of modern application architectures. All of the architectures are based on the industry-standard protocols [OAuth 2.0 and OpenID Connect](./v2-protocols.md). By using the [authentication libraries for the Microsoft identity platform](reference-v2-libraries.md), applications authenticate identities and acquire tokens to access protected APIs.
This article describes authentication flows and the application scenarios that t
## Application categories
-Tokens can be acquired from several types of applications, including:
+[Security tokens](./security-tokens.md) can be acquired from several types of applications, including:
- Web apps - Mobile apps
The following sections describe the categories of applications.
Authentication scenarios involve two activities: -- **Acquiring security tokens for a protected web API**: We recommend that you use the [Microsoft Authentication Library (MSAL)](reference-v2-libraries.md), developed and supported by Microsoft.
+- **Acquiring security tokens for a protected web API**: We recommend that you use the [Microsoft Authentication Library (MSAL)](msal-overview.md), developed and supported by Microsoft.
- **Protecting a web API or a web app**: One challenge of protecting these resources is validating the security token. On some platforms, Microsoft offers [middleware libraries](reference-v2-libraries.md). ### With users or without users
The available authentication flows differ depending on the sign-in audience. Som
For more information, see [Supported account types](v2-supported-account-types.md#account-type-support-in-authentication-flows).
-## Application scenarios
+## Application types
The Microsoft identity platform supports authentication for these app architectures:
For a desktop app to call a web API that signs in users, use the interactive tok
There's another possibility for Windows-hosted applications on computers joined either to a Windows domain or to Azure Active Directory (Azure AD). These applications can silently acquire a token by using [integrated Windows authentication](https://aka.ms/msal-net-iwa).
-Applications running on a device without a browser can still call an API on behalf of a user. To authenticate, the user must sign in on another device that has a web browser. This scenario requires that you use the [device code flow](https://aka.ms/msal-net-device-code-flow).
+Applications running on a device without a browser can still call an API on behalf of a user. To authenticate, the user must sign in on another device that has a web browser. This scenario requires that you use the [device code flow](v2-oauth2-device-code.md).
![Device code flow](media/scenarios/device-code-flow-app.svg)
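At the protocol level, the flow begins with a request to the device authorization endpoint. A minimal sketch, with placeholder tenant, client ID, and scope values:

```http
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/devicecode
Content-Type: application/x-www-form-urlencoded

client_id={client_id}&scope=user.read
```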
Similar to a desktop app, a mobile app calls the interactive token-acquisition m
MSAL iOS and MSAL Android use the system web browser by default. However, you can direct them to use the embedded web view instead. There are specificities that depend on the mobile platform: Universal Windows Platform (UWP), iOS, or Android.
-Some scenarios, like those that involve Conditional Access related to a device ID or a device enrollment, require a broker to be installed on the device. Examples of brokers are Microsoft Company Portal on Android and Microsoft Authenticator on Android and iOS. MSAL can now interact with brokers. For more information about brokers, see [Leveraging brokers on Android and iOS](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/leveraging-brokers-on-Android-and-iOS).
+Some scenarios, like those that involve Conditional Access related to a device ID or a device enrollment, require a broker to be installed on the device. Examples of brokers are Microsoft Company Portal on Android and Microsoft Authenticator on Android and iOS. MSAL can now interact with brokers. For more information about brokers, see [Leveraging brokers on Android and iOS](msal-net-use-brokers-with-xamarin-apps.md).
For more information, see [Mobile app that calls web APIs](scenario-mobile-overview.md).
active-directory Authentication Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-protocols.md
- Title: Microsoft identity platform authentication protocols
-description: An overview of the authentication protocols supported by the Microsoft identity platform
-------- Previously updated : 09/27/2021------
-# Microsoft identity platform authentication protocols
-
-The Microsoft identity platform supports several of the most widely used authentication and authorization protocols. The topics in this section describe the supported protocols and their implementation in the Microsoft identity platform. They include a review of supported claim types, an introduction to the use of federation metadata, detailed OAuth 2.0 and SAML 2.0 protocol reference documentation, and a troubleshooting section.
-
-## Authentication protocols articles and reference
-
-* [Important Information About Signing Key Rollover in Microsoft identity platform](./signing-key-rollover.md) - Learn about the Microsoft identity platform's signing key rollover cadence, changes you can make to update the key automatically, and discussion for how to update the most common application scenarios.
-* [Supported Token and Claim Types](id-tokens.md) - Learn about the claims in the tokens that the Microsoft identity platform issues.
-* [OAuth 2.0 in Microsoft identity platform](v2-oauth2-auth-code-flow.md) - Learn about the implementation of OAuth 2.0 in Microsoft identity platform.
-* [OpenID Connect 1.0](v2-protocols-oidc.md) - Learn how to use OAuth 2.0, an authorization protocol, for authentication.
-* [Service to Service Calls with Client Credentials](v2-oauth2-client-creds-grant-flow.md) - Learn how to use OAuth 2.0 client credentials grant flow for service to service calls.
-* [Service to Service Calls with On-Behalf-Of Flow](v2-oauth2-on-behalf-of-flow.md) - Learn how to use OAuth 2.0 On-Behalf-Of flow for service to service calls.
-* [SAML Protocol Reference](./saml-protocol-reference.md) - Learn about the Single Sign-On and Single Sign-out SAML profiles of Microsoft identity platform.
-
-## See also
-
-* [Microsoft identity platform overview](v2-overview.md)
-* [Active Directory Code Samples](sample-v2-code.md)
active-directory Consent Framework Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework-links.md
- Title: How application consent works
-description: Learn more about how the Azure AD consent framework works to see how you can use it when developing applications on Azure AD
--------- Previously updated : 09/27/2021----
-# How application consent works
-
-This article is to help you learn more about how the Azure AD consent framework works so you can develop applications more effectively.
-
-## Recommended documents
--- Get a general understanding of [how consent allows a resource owner to govern an application's access to resources](./developer-glossary.md#consent).-- Get a step-by-step overview of [how the Azure AD consent framework implements consent](./quickstart-register-app.md).-- For more depth, learn [how a multi-tenant application can use the consent framework](./howto-convert-app-to-be-multi-tenant.md) to implement "user" and "admin" consent, supporting more advanced multi-tier application patterns.-- For more depth, learn [how consent is supported at the OAuth 2.0 protocol layer during the authorization code grant flow.](v2-oauth2-auth-code-flow.md#request-an-authorization-code)-
-## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
Previously updated : 05/23/2023 Last updated : 08/16/2023
# Configure a custom claim provider token issuance event (preview)
-This article describes how to configure and setup a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token.
+This article describes how to configure and set up a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token.
This how-to guide demonstrates the token issuance start event with a REST API running in Azure Functions and a sample OpenID Connect application. Before you start, take a look at the following video, which demonstrates how to configure Azure AD custom claims provider with Function App:
In this step, you configure a custom authentication extension, which will be use
# [Microsoft Graph](#tab/microsoft-graph)
-Create an Application Registration to authenticate your custom authentication extension to your Azure Function.
+Register an application to authenticate your custom authentication extension to your Azure Function.
-1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/applications`
-1. Select **Request Body** and paste the following JSON:
+1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. The account must have the privileges to create and manage an application registration in the tenant.
+2. Run the following request.
- ```json
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/v1.0/applications
+ Content-type: application/json
+
{
- "displayName": "authenticationeventsAPI"
+ "displayName": "authenticationeventsAPI"
} ```
-1. Select **Run Query** to submit the request.
-
-1. Copy the **Application ID** value (*appId*) from the response. You need this value later, which is referred to as the `{authenticationeventsAPI_AppId}`. Also get the object ID of the app (*ID*), which is referred to as `{authenticationeventsAPI_ObjectId}` from the response.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/csharp/v1/tutorial-application-basics-create-app-csharp-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/go/v1/tutorial-application-basics-create-app-go-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/jav)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/javascript/v1/tutorial-application-basics-create-app-javascript-snippets.md)]
+
+ # [PHP](#tab/php)
+ Snippet not available.
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/powershell/v1/tutorial-application-basics-create-app-powershell-snippets.md)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/python/v1/tutorial-application-basics-create-app-python-snippets.md)]
+
+
-Create a service principal in the tenant for the authenticationeventsAPI app registration:
+3. From the response, record the value of **id** and **appId** of the newly created app registration. These values will be referenced in this article as `{authenticationeventsAPI_ObjectId}` and `{authenticationeventsAPI_AppId}` respectively.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals`
-1. Select **Request Body** and paste the following JSON:
+Create a service principal in the tenant for the authenticationeventsAPI app registration.
- ```json
- {
- "appId": "{authenticationeventsAPI_AppId}"
- }
- ```
+Still in Graph Explorer, run the following request. Replace `{authenticationeventsAPI_AppId}` with the value of **appId** that you recorded from the previous step.
-1. Select **Run Query** to submit the request.
+```http
+POST https://graph.microsoft.com/v1.0/servicePrincipals
+Content-type: application/json
+
+{
+ "appId": "{authenticationeventsAPI_AppId}"
+}
+```
### Set the App ID URI, access token version, and required resource access

Update the newly created application to set the application ID URI value, the access token version, and the required resource access.
-1. Set the HTTP method to **PATCH**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId}`
-1. Select **Request Body** and paste the following JSON:
+In Graph Explorer, run the following request.
+ - Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier.
+ - Set the `{authenticationeventsAPI_AppId}` value with the **appId** that you recorded earlier.
+ - An example value is `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as you'll use it later in this article in place of `{functionApp_IdentifierUri}`.
- Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier.
-
- Set the `{authenticationeventsAPI_AppId}` value with the App ID generated from the app registration created in the previous step.
-
- An example value would be `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as it is used in following steps and is referenced as `{functionApp_IdentifierUri}`.
-
- ```json
+```http
+PATCH https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId}
+Content-type: application/json
+
+{
+"identifierUris": [
+ "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}"
+],
+"api": {
+ "requestedAccessTokenVersion": 2,
+ "acceptMappedClaims": null,
+ "knownClientApplications": [],
+ "oauth2PermissionScopes": [],
+ "preAuthorizedApplications": []
+},
+"requiredResourceAccess": [
{
- "identifierUris": [
- "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}"
- ],
- "api": {
- "requestedAccessTokenVersion": 2,
- "acceptMappedClaims": null,
- "knownClientApplications": [],
- "oauth2PermissionScopes": [],
- "preAuthorizedApplications": []
- },
- "requiredResourceAccess": [
+ "resourceAppId": "00000003-0000-0000-c000-000000000000",
+ "resourceAccess": [
{
- "resourceAppId": "00000003-0000-0000-c000-000000000000",
- "resourceAccess": [
- {
- "id": "214e810f-fda8-4fd7-a475-29461495eb00",
- "type": "Role"
- }
- ]
+ "id": "214e810f-fda8-4fd7-a475-29461495eb00",
+ "type": "Role"
} ] }
- ```
-
-1. Select **Run Query** to submit the request.
+]
+}
+```
### Register a custom authentication extension
-Next, you register the custom authentication extension. You register the custom authentication extension by associating it with the App Registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`.
+Next, you register the custom authentication extension. You register the custom authentication extension by associating it with the app registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/beta/identity/customAuthenticationExtensions`
-1. Select **Request Body** and paste the following JSON:
+1. In Graph Explorer, run the following request. Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step.
+ - You'll need the *CustomAuthenticationExtension.ReadWrite.All* delegated permission.
- Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step.
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/beta/identity/customAuthenticationExtensions
+ Content-type: application/json
- ```json
{ "@odata.type": "#microsoft.graph.onTokenIssuanceStartCustomExtension", "displayName": "onTokenIssuanceStartCustomExtension",
Next, you register the custom authentication extension. You register the custom
] } ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
-1. Select **Run Query** to submit the request.
+
-Record the ID value of the created custom claims provider object. The ID is needed in a later step and is referred to as the `{customExtensionObjectId}`.
+1. Record the **id** value of the created custom claims provider object. You'll use the value later in this tutorial in place of `{customExtensionObjectId}`.
### 2.2 Grant admin consent
-After your custom authentication extension is created, you'll be taken to the **Overview** tab of the new custom authentication extension.
+After your custom authentication extension is created, open the **Overview** tab of the new custom authentication extension.
From the **Overview** page, select the **Grant permission** button to give admin consent to the registered app, which allows the custom authentication extension to authenticate to your API. The custom authentication extension uses `client_credentials` to authenticate to the Azure Function App using the `Receive custom authentication extension HTTP requests` permission.
The following screenshot shows how to register the *My Test application*.
### 3.1 Get the application ID
-In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps.
+In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps. In Microsoft Graph, it's referenced by the **appId** property.
:::image type="content" border="false"source="media/custom-extension-get-started/get-the-test-application-id.png" alt-text="Screenshot that shows how to copy the application ID.":::
Next, assign the attributes from the custom claims provider, which should be iss
# [Microsoft Graph](#tab/microsoft-graph)
-First create an event listener to trigger a custom authentication extension using the token issuance start event:
-
-1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/beta/identity/authenticationEventListeners`
-1. Select **Request Body** and paste the following JSON:
+First create an event listener to trigger a custom authentication extension for the *My Test application* using the token issuance start event.
- Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier.
+1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in.
+1. Run the following request. Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier.
+ - You'll need the *EventListener.ReadWrite.All* delegated permission.
- ```json
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/beta/identity/authenticationEventListeners
+ Content-type: application/json
+
{ "@odata.type": "#microsoft.graph.onTokenIssuanceStartListener", "conditions": {
First create an event listener to trigger a custom authentication extension usin
} ```
-1. Select **Run Query** to submit the request.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+
+
-Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider:
+Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/policies/claimsmappingpolicies`
-1. Select **Request Body** and paste the following JSON:
+1. Still in Graph Explorer, run the following request. You'll need the *Policy.ReadWrite.ApplicationConfiguration* delegated permission.
++
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies
+ Content-type: application/json
- ```json
{ "definition": [ "{\"ClaimsMappingPolicy\":{\"Version\":1,\"IncludeBasicClaimSet\":\"true\",\"ClaimsSchema\":[{\"Source\":\"CustomClaimsProvider\",\"ID\":\"DateOfBirth\",\"JwtClaimType\":\"dob\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CustomRoles\",\"JwtClaimType\":\"my_roles\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CorrelationId\",\"JwtClaimType\":\"correlationId\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"ApiVersion\",\"JwtClaimType\":\"apiVersion \"},{\"Value\":\"tokenaug_V2\",\"JwtClaimType\":\"policy_version\"}]}}"
Next, create the claims mapping policy, which describes which claims can be issu
"isOrganizationDefault": false } ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-claimsmappingpolicies-csharp-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-claimsmappingpolicies-go-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-claimsmappingpolicies-javascript-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-claimsmappingpolicies-php-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-claimsmappingpolicies-powershell-snippets.md)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-claimsmappingpolicies-python-snippets.md)]
+
+
-1. Record the `ID` generated in the response, later it's referred to as `{claims_mapping_policy_ID}`.
-1. Select **Run Query** to submit the request.
+2. Record the **id** generated in the response; later it's referred to as `{claims_mapping_policy_ID}`.
-Get the `servicePrincipal` objectId:
+Get the service principal object ID:
-1. Set the HTTP method to **GET**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}')/claimsMappingPolicies/$ref`. Replace `{App_to_enrich_ID}` with *My Test Application* App ID.
-1. Record the `id` value, later it's referred to as `{test_App_Service_Principal_ObjectId}`.
+1. Run the following request in Graph Explorer. Replace `{App_to_enrich_ID}` with the **appId** of *My Test Application*.
-Assign the claims mapping policy to the `servicePrincipal` of *My Test Application*:
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}')
+ ```
+
+Record the value of **id**.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref`
-1. Select **Request Body** and paste the following JSON:
+Assign the claims mapping policy to the service principal of *My Test Application*.
+
+1. Run the following request in Graph Explorer. You'll need the *Policy.ReadWrite.ApplicationConfiguration* and *Application.ReadWrite.All* delegated permissions.
+
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref
+ Content-type: application/json
- ```json
{ "@odata.id": "https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies/{claims_mapping_policy_ID}" } ```
-1. Select **Run Query** to submit the request.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-serviceprincipal-csharp-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-serviceprincipal-go-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-serviceprincipal-javascript-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-serviceprincipal-php-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-serviceprincipal-powershell-snippets.md)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-serviceprincipal-python-snippets.md)]
+
+
active-directory Delegated And App Perms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-and-app-perms.md
- Title: Differences between delegated and app permissions
-description: Learn about delegated and application permissions, how they are used by clients and exposed by resources for applications you are developing with Azure AD
--------- Previously updated : 11/10/2022----
-# How to recognize differences between delegated and application permissions
-
-## Recommended documents
--- Learn more about how client applications use [delegated and application permission requests](developer-glossary.md#permissions) to access resources.-- Learn about [delegated and application permissions](permissions-consent-overview.md).-- See step-by-step instructions on how to [configure a client application's permission requests](quickstart-configure-app-access-web-apis.md)-- For more depth, learn how resource applications expose [scopes](developer-glossary.md#scopes) and [application roles](developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal. -
-## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/how-applications-are-added.md
Last updated 10/26/2022 -+
active-directory Howto Create Self Signed Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md
To customize the start and expiry date and other properties of the certificate,
Use the certificate you create using this method to authenticate from an application running from your machine. For example, authenticate from Windows PowerShell.
-In an elevated PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate.
+In a PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate.
```powershell $certname = "{certificateName}" ## Replace {certificateName}
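# The next command is a hedged sketch, not quoted from this excerpt: the standard
# New-SelfSignedCertificate pattern for this scenario. Parameter choices (RSA 2048,
# SHA256, exportable signing key in the current user's store) are illustrative.
$cert = New-SelfSignedCertificate -Subject "CN=$certname" -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256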
active-directory Identity Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-videos.md
___
<!-- IMAGES -->
-[auth-fund-01-img]: ./media/identity-videos/aad-auth-fund-01.jpg
-[auth-fund-02-img]: ./media/identity-videos/aad-auth-fund-02.jpg
-[auth-fund-03-img]: ./media/identity-videos/aad-auth-fund-03.jpg
-[auth-fund-04-img]: ./media/identity-videos/aad-auth-fund-04.jpg
-[auth-fund-05-img]: ./media/identity-videos/aad-auth-fund-05.jpg
-[auth-fund-06-img]: ./media/identity-videos/aad-auth-fund-06.jpg
+[auth-fund-01-img]: ./media/identity-videos/auth-fund-01.jpg
+[auth-fund-02-img]: ./media/identity-videos/auth-fund-02.jpg
+[auth-fund-03-img]: ./media/identity-videos/auth-fund-03.jpg
+[auth-fund-04-img]: ./media/identity-videos/auth-fund-04.jpg
+[auth-fund-05-img]: ./media/identity-videos/auth-fund-05.jpg
+[auth-fund-06-img]: ./media/identity-videos/auth-fund-06.jpg
<!-- VIDEOS --> [auth-fund-01-vid]: https://www.youtube.com/watch?v=fbSVgC8nGz4&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=1
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
Title: Mark an app as publisher verified
-description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Microsoft Partner Network (MPN) account that has completed the verification process and has associated this MPN account with that application registration.
+description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Cloud Partner Program (CPP) account that has completed the verification process and has associated this CPP account with that application registration.
Previously updated : 03/16/2023 Last updated : 08/17/2023
# Mark your app as publisher verified
-When an app registration has a verified publisher, it means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Microsoft Partner Network (MPN) account and has associated this MPN account with their app registration. This article describes how to complete the [publisher verification](publisher-verification-overview.md) process.
+When an app registration has a verified publisher, it means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Cloud Partner Program (CPP) account and has associated this CPP account with their app registration. This article describes how to complete the [publisher verification](publisher-verification-overview.md) process.
## Quickstart
-If you are already enrolled in the Microsoft Partner Network (MPN) and have met the [pre-requisites](publisher-verification-overview.md#requirements), you can get started right away:
+If you are already enrolled in the [Cloud Partner Program (CPP)](/partner-center/intro-to-cloud-partner-program-membership) and have met the [pre-requisites](publisher-verification-overview.md#requirements), you can get started right away:
1. Sign in to the [App Registration portal](https://aka.ms/PublisherVerificationPreview) using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) 1. Choose an app and click **Branding & properties**.
-1. Click **Add MPN ID to verify publisher** and review the listed requirements.
+1. Click **Add Partner One ID to verify publisher** and review the listed requirements.
-1. Enter your MPN ID and click **Verify and save**.
+1. Enter your Partner One ID and click **Verify and save**.
For more details on specific benefits, requirements, and frequently asked questions, see the [overview](publisher-verification-overview.md). ## Mark your app as publisher verified Make sure you meet the [pre-requisites](publisher-verification-overview.md#requirements), then follow these steps to mark your app(s) as Publisher Verified.
-1. Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account authorized to make changes to the app you want to mark as Publisher Verified and on the MPN Account in Partner Center.
+1. Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account authorized to make changes to the app you want to mark as Publisher Verified and on the CPP Account in Partner Center.
- The Azure AD user must have one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator.
- - The user in Partner Center must have the following [roles](/partner-center/permissions-overview): MPN Admin, Accounts Admin, or a Global Administrator (a shared role mastered in Azure AD).
+ - The user in Partner Center must have the following [roles](/partner-center/permissions-overview): CPP Admin, Accounts Admin, or a Global Administrator (a shared role mastered in Azure AD).
1. Navigate to the **App registrations** blade:
Make sure you meet the [pre-requisites](publisher-verification-overview.md#requi
1. Ensure the app's [publisher domain](howto-configure-publisher-domain.md) is set.
-1. Ensure that either the publisher domain or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) on the tenant matches the domain of the email address used during the verification process for your MPN account.
+1. Ensure that either the publisher domain or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) on the tenant matches the domain of the email address used during the verification process for your CPP account.
-1. Click **Add MPN ID to verify publisher** near the bottom of the page.
+1. Click **Add Partner One ID to verify publisher** near the bottom of the page.
-1. Enter the **MPN ID** for:
+1. Enter the **Partner One ID** for:
- - A valid Microsoft Partner Network account that has completed the verification process.
+ - A valid Cloud Partner Program account that has completed the verification process.
- The Partner global account (PGA) for your organization.
active-directory Msal Client Application Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-application-configuration.md
Previously updated : 07/15/2022 Last updated : 08/11/2023
The authority you specify in your code needs to be consistent with the **Support
The authority can be: - An Azure AD cloud authority.-- An Azure AD B2C authority. See [B2C specifics](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/AAD-B2C-specifics).-- An Active Directory Federation Services (AD FS) authority. See [AD FS support](https://aka.ms/msal-net-adfs-support).
+- An Azure AD B2C authority. See [B2C specifics](msal-net-b2c-considerations.md).
+- An Active Directory Federation Services (AD FS) authority. See [AD FS support](msal-net-adfs-support.md).
Azure AD cloud authorities have two parts:
You can override the redirect URI by using the `RedirectUri` property (for examp
- `RedirectUriOnAndroid` = "msauth-5a434691-ccb2-4fd1-b97b-b64bcfbc03fc://com.microsoft.identity.client.sample"; - `RedirectUriOnIos` = $"msauth.{Bundle.ID}://auth";
-For more iOS details, see [Migrate iOS applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET](msal-net-migration-ios-broker.md) and [Leveraging the broker on iOS](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Leveraging-the-broker-on-iOS).
+For more iOS details, see [Migrate iOS applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET](msal-net-migration-ios-broker.md) and [Leveraging the broker on iOS](msal-net-use-brokers-with-xamarin-apps.md).
For more Android details, see [Brokered auth in Android](msal-android-single-sign-on.md). ### Redirect URI for confidential client apps
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
Depending on the permissions they require, some applications might require an ad
Preauthorization allows a resource application owner to grant permissions without requiring users to see a consent prompt for the same set of permissions that have been preauthorized. This way, an application that has been preauthorized won't ask users to consent to permissions. Resource owners can preauthorize client apps in the Azure portal or by using PowerShell and APIs, like Microsoft Graph.
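As a hedged sketch of the Microsoft Graph route mentioned above, the following patches a resource application's `api.preAuthorizedApplications` collection; the object ID, client app ID, scope GUID, and `$accessToken` are all placeholders, not values from this article:

```powershell
# Sketch: preauthorize a client app for one delegated scope on a resource app.
$resourceObjectId = "00000000-0000-0000-0000-000000000000"   # resource app's object ID
$body = @{
    api = @{
        preAuthorizedApplications = @(
            @{
                appId                  = "11111111-1111-1111-1111-111111111111"    # client app ID
                delegatedPermissionIds = @("22222222-2222-2222-2222-222222222222") # scope ID
            }
        )
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/applications/$resourceObjectId" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" -Body $body
```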
-## Next steps
+## See also
- [Delegated access scenario](delegated-access-primer.md) - [User and admin consent overview](../manage-apps/user-admin-consent-overview.md) - [OpenID connect scopes](scopes-oidc.md)
+- [Making your application multi-tenant](./howto-convert-app-to-be-multi-tenant.md)
+- [AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Perms For Given Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/perms-for-given-api.md
- Title: Select permissions for a given API
-description: Learn about how permissions requests work for client and resource applications for applications you are developing
--------- Previously updated : 11/10/2022----
-# How to select permissions for a given API
-
-## Recommended documents
--- Learn more about how client applications use [delegated and application permission requests](./developer-glossary.md#permissions) to access resources.-- Learn about [scopes and permissions in the Microsoft identity platform](scopes-oidc.md)-- See step-by-step instructions on how to [configure a client application's permission requests](./quickstart-configure-app-access-web-apis.md)-- For more depth, learn how resource applications expose [scopes](./developer-glossary.md#scopes) and [application roles](./developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal.-
-## Next steps
-
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Previously updated : 08/11/2023 Last updated : 08/17/2023
Publisher verification gives app users and organization admins information about the authenticity of the developer's organization, who publishes an app that integrates with the Microsoft identity platform.
-When an app has a verified publisher, this means that the organization that publishes the app has been verified as authentic by Microsoft. Verifying an app includes using a Microsoft Cloud Partner Program (MCPP), formerly known as Microsoft Partner Network (MPN), account that's been [verified](/partner-center/verification-responses) and associating the verified PartnerID with an app registration.
+When an app has a verified publisher, this means that the organization that publishes the app has been verified as authentic by Microsoft. Verifying an app includes using a Microsoft Cloud Partner Program (CPP), formerly known as Microsoft Partner Network (MPN), account that's been [verified](/partner-center/verification-responses) and associating the verified PartnerID with an app registration.
When the publisher of an app has been verified, a blue *verified* badge appears in the Azure Active Directory (Azure AD) consent prompt for the app and on other webpages:
Publisher verification for an app has the following benefits:
App developers must meet a few requirements to complete the publisher verification process. Many Microsoft partners will have already satisfied these requirements. -- The developer must have an MPN ID for a valid [Microsoft Cloud Partner Program](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The MPN account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization.
+- The developer must have a Partner One ID for a valid [Microsoft Cloud Partner Program](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The CPP account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization.
> [!NOTE]
- > The MPN account you use for publisher verification can't be your partner location MPN ID. Currently, location MPN IDs aren't supported for the publisher verification process.
+ > The CPP account you use for publisher verification can't be your partner location Partner One ID. Currently, location Partner One IDs aren't supported for the publisher verification process.
- The app that's to be publisher verified must be registered by using an Azure AD work or school account. Apps that are registered by using a Microsoft account can't be publisher verified. -- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the MPN PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).
+- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the CPP PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).
- The app must be registered in an Azure AD tenant and have a [publisher domain](howto-configure-publisher-domain.md) set. The feature is not supported in Azure AD B2C tenant. -- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**NOTE**__: the app's publisher domain can't be *.onmicrosoft.com to be publisher verified)
+- The domain of the email address that's used during CPP account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**Note:** the app's publisher domain can't be *.onmicrosoft.com to be publisher verified.)
-- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center.
+- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the CPP account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center.
- In Azure AD, this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator.
- - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or Global Administrator (a shared role that's mastered in Azure AD).
+ - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): CPP Partner Admin, Account Admin, or Global Administrator (a shared role that's mastered in Azure AD).
- The user who initiates verification must sign in by using [Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md).
active-directory Quickstart Configure App Access Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md
By specifying a web API's scopes in your client app's registration, the client a
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
+Access to APIs requires configuration of access scopes and roles. If you want to expose your resource application's web APIs to client applications, configure access scopes and roles for the API. If you want a client application to access a web API, configure permissions to access the API in the app registration.
+ In the first scenario, you grant a client app access to your own web API, both of which you should have registered as part of the prerequisites. If you don't yet have both a client app and a web API registered, complete the steps in the two [Prerequisites](#prerequisites) articles. This diagram shows how the two app registrations relate to one another. In this section, you add permissions to the client app's registration.
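The portal steps follow in the source article; as a rough, hedged equivalent, a client app's permission requests live in its `requiredResourceAccess` list, which can also be patched through Microsoft Graph. All GUIDs and `$accessToken` below are placeholders:

```powershell
# Sketch: request a delegated scope exposed by your web API from the client app.
$clientObjectId = "00000000-0000-0000-0000-000000000000"     # client app's object ID
$body = @{
    requiredResourceAccess = @(
        @{
            resourceAppId  = "11111111-1111-1111-1111-111111111111"  # web API's app (client) ID
            resourceAccess = @(
                @{ id = "22222222-2222-2222-2222-222222222222"; type = "Scope" } # scope ID
            )
        }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/applications/$clientObjectId" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" -Body $body
```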
active-directory Quickstart Configure App Expose Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md
In this quickstart, you'll register a web API with the Microsoft identity platfo
## Register the web API
+Access to APIs requires configuration of access scopes and roles. If you want to expose your resource application's web APIs to client applications, configure access scopes and roles for the API. If you want a client application to access a web API, configure permissions to access the API in the app registration.
+ To provide scoped access to the resources in your web API, you first need to register the API with the Microsoft identity platform. Perform the steps in the **Register an application** section of [Quickstart: Register an app with the Microsoft identity platform](quickstart-register-app.md).
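For orientation, a delegated scope defined on the API ends up in the app's `api.oauth2PermissionScopes` collection. A hedged Microsoft Graph sketch follows; the scope name `Employees.Read.All` is illustrative, and the object ID and `$accessToken` are placeholders (note that this PATCH replaces the whole collection):

```powershell
# Sketch: define one admin-consent delegated scope on the web API's registration.
$apiObjectId = "00000000-0000-0000-0000-000000000000"   # web API app's object ID
$body = @{
    api = @{
        oauth2PermissionScopes = @(
            @{
                id                      = [guid]::NewGuid().ToString()
                value                   = "Employees.Read.All"
                type                    = "Admin"   # admin consent required
                isEnabled               = $true
                adminConsentDisplayName = "Read employee records"
                adminConsentDescription = "Allows the app to read employee records."
            }
        )
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/applications/$apiObjectId" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" -Body $body
```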
active-directory Registration Config How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-how-to.md
- Title: Get the endpoints for an Azure AD app registration
-description: How to find the authentication endpoints for a custom application you're developing or registering with Azure AD.
--------- Previously updated : 11/09/2022----
-# How to discover endpoints
-
-You can find the authentication endpoints for your application in the [Azure portal](https://portal.azure.com).
-
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. Select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**, and then select **Endpoints** in the top menu.
-
- The **Endpoints** page is displayed, showing the authentication endpoints for your tenant.
-
- Use the endpoint that matches the authentication protocol you're using in conjunction with the **Application (client) ID** to craft the authentication request specific to your application.
-
-**National clouds** (for example Azure AD China, Germany, and US Government) have their own app registration portal and Azure AD authentication endpoints. Learn more in the [National clouds overview](authentication-national-cloud.md).
-
-## Next steps
-
-For more information about endpoints in the different Azure environments, see the [National clouds overview](authentication-national-cloud.md).
active-directory Registration Config Specific Application Property How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-specific-application-property-how-to.md
- Title: Azure portal registration fields for custom-developed apps
-description: Guidance for registering a custom developed application with Azure AD
--------- Previously updated : 09/27/2021----
-# Azure portal registration fields for custom-developed apps
-
-This article gives you a brief description of all the available fields in the application registration form in the [Azure portal](https://portal.azure.com).
-
-## Register a new application
--- To register a new application, sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.--- From the left navigation pane, click **Azure Active Directory.**--- Choose **App registrations** and click **Add**.--- This open up the application registration form.-
-## Fields in the application registration form
-
-| Field | Description |
-|||
-| Name | The name of the application. It should have a minimum of four characters. |
-| Supported account types| Select which accounts you would like your application to support: accounts in this organizational directory only, accounts in any organizational directory, or accounts in any organizational directory and personal Microsoft accounts. |
-| Redirect URI (optional) | Select the type of app you're building, **Web** or **Public client (mobile & desktop)**, and then enter the redirect URI (or reply URL) for your application. For web applications, provide the base URL of your app. For example, http://localhost:31544 might be the URL for a web app running on your local machine. Users would use this URL to sign in to a web client application. For public client applications, provide the URI used by Azure AD to return token responses. Enter a value specific to your application, such as myapp://auth. To see specific examples for web applications or native applications, check out our [quickstarts](./index.yml).|
-
-Once you have filled the above fields, the application is registered in the Azure portal, and you are redirected to the application overview page. The settings pages in the left pane under **Manage** have more fields for you to customize your application. The tables below describe all the fields. You would only see a subset of these fields, depending on whether you created a web application or a public client application.
-
-### Overview
-
-| Field | Description |
-|--|--|
-| Application ID | When you register an application, Azure AD assigns your application an Application ID. The application ID can be used to uniquely identify your application in authentication requests to Azure AD, as well as to access resources like the Graph API. |
-| App ID URI | This should be a unique URI, usually of the form **https://&lt;tenant\_name&gt;/&lt;application\_name&gt;.** This is used during the authorization grant flow, as a unique identifier to specify the resource that the token should be issued for. It also becomes the 'aud' claim in the issued access token. |
-
-### Branding
-
-| Field | Description |
-|--|--|
-| Upload new logo | You can use this to upload a logo for your application. The logo must be in .bmp, .jpg or .png format, and the file size should be less than 100 KB. The dimensions for the image should be 215x215 pixels, with central image dimensions of 94x94 pixels.|
-| Home page URL | This is the sign-on URL specified during application registration.|
-
-### Authentication
-
-| Field | Description |
-|--|--|
-| Front-channel logout URL | This is the single sign-out logout URL. Azure AD sends a logout request to this URL when the user clears their session with Azure AD using any other registered application.|
-| Supported account types | This switch specifies whether the application can be used by multiple tenants. Typically, this means that external organizations can use your application by registering it in their tenant and granting access to their organization's data.|
-| Redirect URLs | The redirect, or reply, URLs are the endpoints where Azure AD returns any tokens that your application requests. For native applications, this is where the user is sent after successful authorization. Azure AD checks that the redirect URI your application supplies in the OAuth 2.0 request matches one of the registered values in the portal.|
-
-### Certificates and secrets
-
-| Field | Description |
-|--|--|
-| Client secrets | You can create client secrets, or keys, to programmatically access web APIs secured by Azure AD without any user interaction. From the **New client secret** page, enter a key description and the expiration date and save to generate the key. Make sure to save it somewhere secure, as you won't be able to access it later. |
-
-## Next steps
-
-[Managing Applications with Azure Active Directory](../manage-apps/what-is-application-management.md)
active-directory Registration Config Sso How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-sso-how-to.md
- Title: Configure application single sign-on
-description: How to configure single sign-on for a custom application you are developing and registering with Azure AD.
--------- Previously updated : 07/15/2019----
-# How to configure single sign-on for an application
-
-Enabling federated single sign-on (SSO) in your app is automatically enabled when federating through Azure AD for OpenID Connect, SAML 2.0, or WS-Fed. If your end users are having to sign in despite already having an existing session with Azure AD, it's likely your app may be misconfigured.
-
-* If you're using Microsoft Authentication Library (MSAL), make sure you have **PromptBehavior** set to **Auto** rather than **Always**.
-
-* If you're building a mobile app, you may need additional configurations to enable brokered or non-brokered SSO.
-
-For Android, see [Enabling Cross App SSO in Android](msal-android-single-sign-on.md).
-
-For iOS, see [Enabling Cross App SSO in iOS](single-sign-on-macos-ios.md).
-
-## Next steps
-
-[Azure AD SSO](../manage-apps/what-is-single-sign-on.md)<br>
-
-[Enabling Cross App SSO in Android](msal-android-single-sign-on.md)<br>
-
-[Enabling Cross App SSO in iOS](single-sign-on-macos-ios.md)<br>
-
-[Integrating Apps to AzureAD](./quickstart-register-app.md)<br>
-
-[Permissions and consent in the Microsoft identity platform](./permissions-consent-overview.md)<br>
-
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Scenario Web App Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md
Previously updated : 05/06/2022 Last updated : 08/11/2023
These advanced steps are covered in chapter 3 of the [3-WebApp-multi-APIs](https
The code for ASP.NET is similar to the code shown for ASP.NET Core: -- A controller action, protected by an [Authorize] attribute, extracts the tenant ID and user ID of the `ClaimsPrincipal` member of the controller. (ASP.NET uses `HttpContext.User`.)
-*Microsoft.Identity.Web* adds extension methods to the Controller that provide convenience services to call Microsoft Graph or a downstream web API, or to get an authorization header, or even a token. The methods used to call an API directly are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token.
+- A controller action, protected by an `[Authorize]` attribute, extracts the tenant ID and user ID of the `ClaimsPrincipal` member of the controller (ASP.NET uses `HttpContext.User`). This ensures that only authenticated users can use the app.
+**Microsoft.Identity.Web** adds extension methods to the Controller that provide convenience services to call Microsoft Graph or a downstream web API, or to get an authorization header, or even a token. The methods used to call an API directly are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token.
-If, however, you do want to manually acquire a token or build an authorization header, the following code shows how to use *Microsoft.Identity.Web* to do so in a controller. It calls an API (Microsoft Graph) using the REST API instead of the Microsoft Graph SDK.
+If, however, you do want to manually acquire a token or build an authorization header, the following code shows how to use Microsoft.Identity.Web to do so in a controller. It calls an API (Microsoft Graph) using the REST API instead of the Microsoft Graph SDK.
To get an authorization header, you get an `IAuthorizationHeaderProvider` service from the controller using an extension method `GetAuthorizationHeaderProvider`. To get an authorization header to call an API on behalf of the user, use `CreateAuthorizationHeaderForUserAsync`. To get an authorization header to call a downstream API on behalf of the application itself, in a daemon scenario, use `CreateAuthorizationHeaderForAppAsync`.
-The controller methods are protected by an `[Authorize]` attribute that ensures only authenticated users can use the web app.
-- The following snippet shows the action of the `HomeController`, which gets an authorization header to call Microsoft Graph as a REST API: - ```csharp [Authorize] public class HomeController : Controller
public class HomeController : Controller
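{
    // Sketch of how this action continues, based on the Microsoft.Identity.Web
    // helpers described above (GetAuthorizationHeaderProvider and
    // CreateAuthorizationHeaderForUserAsync); the scope, Graph URL, and view
    // wiring are illustrative assumptions, not taken from this article.
    public async Task<IActionResult> Profile()
    {
        // Get the header provider via the Microsoft.Identity.Web extension method.
        IAuthorizationHeaderProvider headerProvider = this.GetAuthorizationHeaderProvider();

        // Build an Authorization header for the signed-in user.
        string authHeader = await headerProvider.CreateAuthorizationHeaderForUserAsync(
            new[] { "user.read" });

        // Call Microsoft Graph as a plain REST API with that header.
        using var httpClient = new HttpClient();
        httpClient.DefaultRequestHeaders.Add("Authorization", authHeader);
        string me = await httpClient.GetStringAsync("https://graph.microsoft.com/v1.0/me");

        ViewData["Me"] = me;
        return View();
    }
}
```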
# [Java](#tab/java)
-In the Java sample, the code that calls an API is in the getUsersFromGraph method in [AuthPageController.java#L62](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/AuthPageController.java#L62).
+In the Java sample, the code that calls an API is in the `getUsersFromGraph` method in [AuthPageController.java#L62](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/AuthPageController.java#L62).
The method attempts to call `getAuthResultBySilentFlow`. If the user needs to consent to more scopes, the code processes the `MsalInteractionRequiredException` object to challenge the user.
public ModelAndView getUserFromGraph(HttpServletRequest httpRequest, HttpServlet
# [Node.js](#tab/nodejs)
-In the Node.js sample, the code that acquires a token is in the *acquireToken* method of the **AuthProvider** class.
+In the Node.js sample, the code that acquires a token is in the `acquireToken` method of the `AuthProvider` class.
:::code language="js" source="~/ms-identity-node/App/auth/AuthProvider.js" range="79-121":::
This access token is then used to handle requests to the `/profile` endpoint:
# [Python](#tab/python)
-In the Python sample, the code that calls the API is in `app.py`.
+In the Python sample, the code that calls the API is in *app.py*.
The code attempts to get a token from the token cache. If it can't get a token, it redirects the user to the sign-in route. Otherwise, it can proceed to call the API.
Move on to the next article in this scenario,
Move on to the next article in this scenario, [Call a web API](scenario-web-app-call-api-call-api.md?tabs=python). -+
active-directory Setup Multi Tenant App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/setup-multi-tenant-app.md
- Title: Configure a new multi-tenant application
-description: Learn how to configure an application as multi-tenant, and how multi-tenant applications work
--------- Previously updated : 11/10/2022----
-# How to configure a new multi-tenant application
-
-Here is a list of recommended topics to learn more about multi-tenant applications:
--- Get a general understanding of [what it means to be a multi-tenant application](./developer-glossary.md#multi-tenant-application)-- Learn about [tenancy in Azure Active Directory](single-and-multi-tenant-apps.md)-- Get a general understanding of [how to configure an application to be multi-tenant](./howto-convert-app-to-be-multi-tenant.md)-- Get a step-by-step overview of [how the Azure AD consent framework is used to implement consent](./quickstart-register-app.md), which is required for multi-tenant applications-- For more depth, learn [how a multi-tenant application is configured and coded end-to-end](./howto-convert-app-to-be-multi-tenant.md), including how to register, use the "common" endpoint, implement "user" and "admin" consent, how to implement more advanced multi-tier scenarios-
-## Next steps
-[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html)
active-directory Single Sign On Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-saml-protocol.md
Title: Azure single sign-on SAML protocol
+ Title: Single sign-on SAML protocol
description: This article describes the single sign-on (SSO) SAML protocol in Azure Active Directory documentationcenter: .net
Previously updated : 08/31/2022 Last updated : 08/11/2023
To request a user authentication, cloud services send an `AuthnRequest` element
| Parameter | Type | Description | | | | |
-| ID | Required | Azure AD uses this attribute to populate the `InResponseTo` attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "ID" to the string representation of a GUID. For example, `id6c1c178c166d486687be4aaf5e482730` is a valid ID. |
-| Version | Required | This parameter should be set to **2.0**. |
-| IssueInstant | Required | This is a DateTime string with a UTC value and [round-trip format ("o")](/dotnet/standard/base-types/standard-date-and-time-format-strings). Azure AD expects a DateTime value of this type, but doesn't evaluate or use the value. |
-| AssertionConsumerServiceURL | Optional | If provided, this parameter must match the `RedirectUri` of the cloud service in Azure AD. |
-| ForceAuthn | Optional | This is a boolean value. If true, it means that the user will be forced to re-authenticate, even if they have a valid session with Azure AD. |
-| IsPassive | Optional | This is a boolean value that specifies whether Azure AD should authenticate the user silently, without user interaction, using the session cookie if one exists. If this is true, Azure AD will attempt to authenticate the user using the session cookie. |
-
-All other `AuthnRequest` attributes, such as Consent, Destination, AssertionConsumerServiceIndex, AttributeConsumerServiceIndex, and ProviderName are **ignored**.
+| `ID` | Required | Azure AD uses this attribute to populate the `InResponseTo` attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "ID" to the string representation of a GUID. For example, `id6c1c178c166d486687be4aaf5e482730` is a valid ID. |
+| `Version` | Required | This parameter should be set to `2.0`. |
+| `IssueInstant` | Required | This is a DateTime string with a UTC value and [round-trip format ("o")](/dotnet/standard/base-types/standard-date-and-time-format-strings). Azure AD expects a DateTime value of this type, but doesn't evaluate or use the value. |
+| `AssertionConsumerServiceURL` | Optional | If provided, this parameter must match the `RedirectUri` of the cloud service in Azure AD. |
+| `ForceAuthn` | Optional | This is a boolean value. If true, it means that the user will be forced to re-authenticate, even if they have a valid session with Azure AD. |
+| `IsPassive` | Optional | This is a boolean value that specifies whether Azure AD should authenticate the user silently, without user interaction, using the session cookie if one exists. If this is true, Azure AD will attempt to authenticate the user using the session cookie. |
+
+All other `AuthnRequest` attributes, such as `Consent`, `Destination`, `AssertionConsumerServiceIndex`, `AttributeConsumerServiceIndex`, and `ProviderName` are **ignored**.
Azure AD also ignores the `Conditions` element in `AuthnRequest`.
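To make the table concrete, a minimal `AuthnRequest` might look like the following sketch; the ID value is the example from the table above, while the issuer URI and assertion consumer service URL are placeholders:

```xml
<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    ID="id6c1c178c166d486687be4aaf5e482730"
    Version="2.0"
    IssueInstant="2023-08-11T03:28:54.1839884Z"
    AssertionConsumerServiceURL="https://contoso.com/identity/saml/acs">
  <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://contoso.com/identity</Issuer>
</samlp:AuthnRequest>
```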
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
Previously updated : 08/11/2023 Last updated : 08/17/2023
If you're unable to complete the process or are experiencing unexpected behavior
## Common Issues Below are some common issues that may occur during the process. -- **I don't know my Microsoft Partner Network ID (MPN ID) or I don't know who the primary contact for the account is.**
- 1. Navigate to the [MPN enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new).
+- **I don't know my Cloud Partner Program ID (Partner One ID) or I don't know who the primary contact for the account is.**
+ 1. Navigate to the [Cloud Partner Program enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new).
2. Sign in with a user account in the org's primary Azure AD tenant.
- 3. If an MPN account already exists, this is recognized and you are added to the account.
- 4. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the MPN ID and primary account contact will be listed.
+ 3. If a Cloud Partner Program account already exists, it is recognized and you are added to the account.
+ 4. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the Partner One ID and primary account contact will be listed.
- **I don't know who my Azure AD Global Administrator (also known as company admin or tenant admin) is. How do I find them? What about the Application Administrator or Cloud Application Administrator?** 1. Sign in to the [Azure portal](https://portal.azure.com) using a user account in your organization's primary tenant.
Below are some common issues that may occur during the process.
3. Select the desired admin role. 4. The list of users assigned that role will be displayed. -- **I don't know who the admin(s) for my MPN account are**
- Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and filter the user list to see what users are in various admin roles.
+- **I don't know who the admin(s) for my CPP account are**
+ Go to the [CPP User Management page](https://partner.microsoft.com/pcv/users) and filter the user list to see what users are in various admin roles.
-- **I am getting an error saying that my MPN ID is invalid or that I do not have access to it.**
+- **I am getting an error saying that my Partner One ID is invalid or that I do not have access to it.**
Follow the [remediation guidance](#mpnaccountnotfoundornoaccess). - **When I sign in to the Azure portal, I do not see any apps registered. Why?**
Response
204 No Content ``` > [!NOTE]
-> *verifiedPublisherID* is your MPN ID.
+> *verifiedPublisherID* is your Partner One ID.
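As a hedged sketch of the Microsoft Graph call that produces the 204 response shown above; the app object ID, Partner One ID, and `$accessToken` are placeholders:

```powershell
# Sketch: set the verified publisher on an app registration via Microsoft Graph.
$appObjectId = "00000000-0000-0000-0000-000000000000"
$body = @{ verifiedPublisherId = "1234567" } | ConvertTo-Json   # your Partner One ID

Invoke-RestMethod -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/applications/$appObjectId/setVerifiedPublisher" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" -Body $body
```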
### Unset Verified Publisher
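The corresponding unset action takes no request body; a sketch with the same placeholder values:

```powershell
# Sketch: remove the verified publisher from an app registration via Microsoft Graph.
Invoke-RestMethod -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/applications/$appObjectId/unsetVerifiedPublisher" `
    -Headers @{ Authorization = "Bearer $accessToken" }
```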
The following is a list of the potential error codes you may receive, either whe
### MPNAccountNotFoundOrNoAccess
-The MPN ID you provided (`MPNID`) doesn't exist, or you don't have access to it. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) doesn't exist, or you don't have access to it. Provide a valid Partner One ID and try again.
-Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Partner Center- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. Can also be caused by the tenant the app is registered in not being added to the MPN account, or an invalid MPN ID.
+Most commonly caused by the signed-in user not being a member of the proper role for the CPP account in Partner Center; see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and [common issues](#common-issues) for more information. This can also be caused by the tenant the app is registered in not being added to the CPP account, or by an invalid Partner One ID.
**Remediation Steps** 1. Go to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) and verify that:
- - The MPN ID is correct.
+ - The Partner One ID is correct.
   - There are no errors or "pending actions" shown, and the verification status under Legal business profile and Partner info both say "authorized" or "success".
-2. Go to the [MPN tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you're signing with a user account from is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). All Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account.
-3. Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Administrator, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions for [creating user accounts and setting permissions](/partner-center/create-user-accounts-and-set-permissions).
+2. Go to the [CPP tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant where the app is registered, and from which you're signing in, is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). All Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account.
+3. Go to the [CPP User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Administrator, CPP Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions for [creating user accounts and setting permissions](/partner-center/create-user-accounts-and-set-permissions).
### MPNGlobalAccountNotFound
-The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again.
-Most commonly caused when an MPN ID is provided which corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. See [Partner Center account structure](/partner-center/account-structure) for more details.
+Most commonly caused when a Partner One ID is provided that corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. See [Partner Center account structure](/partner-center/account-structure) for more details.
**Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
Most commonly caused when an MPN ID is provided which corresponds to a Partner L
### MPNAccountInvalid
-The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again.
-Most commonly caused by the wrong MPN ID being provided.
+Most commonly caused by the wrong Partner One ID being provided.
**Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
Most commonly caused by the wrong MPN ID being provided.
### MPNAccountNotVetted
-The MPN ID (`MPNID`) you provided hasn't completed the vetting process. Complete this process in Partner Center and try again.
+The Partner One ID (`MPNID`) you provided hasn't completed the vetting process. Complete this process in Partner Center and try again.
-Most commonly caused by when the MPN account hasn't completed the [verification](/partner-center/verification-responses) process.
+Most commonly caused when the CPP account hasn't completed the [verification](/partner-center/verification-responses) process.
**Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) and verify that there are no errors or **pending actions** shown, and that the verification status under Legal business profile and Partner info both say **authorized** or **success**.
Most commonly caused by when the MPN account hasn't completed the [verification]
### NoPublisherIdOnAssociatedMPNAccount
-The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again.
-Most commonly caused by the wrong MPN ID being provided.
+Most commonly caused by the wrong Partner One ID being provided.
**Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
Most commonly caused by the wrong MPN ID being provided.
### MPNIdDoesNotMatchAssociatedMPNAccount
-The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
+The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again.
-Most commonly caused by the wrong MPN ID being provided.
+Most commonly caused by the wrong Partner One ID being provided.
**Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
See [requirements](publisher-verification-overview.md) for a list of allowed dom
You aren't authorized to set the verified publisher property on application (`AppId`).
-Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Azure AD- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information.
+Most commonly caused by the signed-in user not being a member of the proper role for the CPP account in Azure AD; see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and [common issues](#common-issues) for more information.
**Remediation Steps** 1. Sign in to the [Azure AD Portal](https://aad.portal.azure.com) using a user account in your organization's primary tenant.
Most commonly caused by the signed-in user not being a member of the proper role
### MPNIdWasNotProvided
-The MPN ID wasn't provided in the request body or the request content type wasn't "application/json".
+The Partner One ID wasn't provided in the request body or the request content type wasn't "application/json".
-Most commonly caused when the verification is being performed via Graph API, and the MPN ID wasnΓÇÖt provided in the request.
+Most commonly caused when the verification is being performed via Graph API, and the Partner One ID wasn't provided in the request.
**Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab
If you've reviewed all of the previous information and are still receiving an er
- ObjectId of target application - AppId of target application - TenantId where app is registered-- MPN ID
+- Partner One ID
- REST request being made - Error code and message being returned
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
# Application types for the Microsoft identity platform
-The Microsoft identity platform supports authentication for various modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](./v2-protocols.md). This article describes the types of apps that you can build by using Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-scenarios).
+The Microsoft identity platform supports authentication for various modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](./v2-protocols.md). This article describes the types of apps that you can build by using the Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-types).
## The basics
active-directory V2 Oauth Ropc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth-ropc.md
Title: Sign in with resource owner password credentials grant
+ Title: Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials
description: Support browser-less authentication flows using the resource owner password credential (ROPC) grant.
Previously updated : 08/26/2022 Last updated : 08/11/2023
The Microsoft identity platform supports the [OAuth 2.0 Resource Owner Password
> [!WARNING] > Microsoft recommends you do _not_ use the ROPC flow. In most scenarios, more secure alternatives are available and recommended. This flow requires a very high degree of trust in the application, and carries risks that are not present in other flows. You should only use this flow when other more secure flows aren't viable. - > [!IMPORTANT] > > * The Microsoft identity platform only supports the ROPC grant within Azure AD tenants, not personal accounts. This means that you must use a tenant-specific endpoint (`https://login.microsoftonline.com/{TenantId_or_Name}`) or the `organizations` endpoint.
active-directory V2 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
Title: OAuth 2.0 implicit grant flow - The Microsoft identity platform
+ Title: Microsoft identity platform and OAuth 2.0 implicit grant flow
description: Secure single-page apps using Microsoft identity platform implicit flow.
Previously updated : 08/18/2022 Last updated : 08/11/2023
-# Microsoft identity platform and implicit grant flow
+# Microsoft identity platform and OAuth 2.0 implicit grant flow
The Microsoft identity platform supports the OAuth 2.0 implicit grant flow as described in the [OAuth 2.0 Specification](https://tools.ietf.org/html/rfc6749#section-4.2). The defining characteristic of the implicit grant is that tokens (ID tokens or access tokens) are returned directly from the /authorize endpoint instead of the /token endpoint. This is often used as part of the [authorization code flow](v2-oauth2-auth-code-flow.md), in what is called the "hybrid flow" - retrieving the ID token on the /authorize request along with an authorization code.
The following diagram shows what the entire implicit sign-in flow looks like and
To initially sign the user into your app, you can send an [OpenID Connect](v2-protocols-oidc.md) authentication request and get an `id_token` from the Microsoft identity platform. > [!IMPORTANT]
-> To successfully request an ID token and/or an access token, the app registration in the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page must have the corresponding implicit grant flow enabled, by selecting **ID tokens** and **access tokens** in the **Implicit grant and hybrid flows** section. If it's not enabled, an `unsupported_response` error will be returned: `The provided value for the input parameter 'response_type' is not allowed for this client. Expected value is 'code'`
+> To successfully request an ID token and/or an access token, the app registration in the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page must have the corresponding implicit grant flow enabled, by selecting **ID tokens** and **access tokens** in the **Implicit grant and hybrid flows** section. If it's not enabled, an `unsupported_response` error will be returned:
+>
+> `The provided value for the input parameter 'response_type' is not allowed for this client. Expected value is 'code'`
``` // Line breaks for legibility only
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| | | | | `tenant` | required |The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](./v2-protocols.md#endpoints).Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.| | `client_id` | required | The Application (client) ID that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page assigned to your app. |
-| `response_type` | required |Must include `id_token` for OpenID Connect sign-in. It may also include the response_type `token`. Using `token` here will allow your app to receive an access token immediately from the authorize endpoint without having to make a second request to the authorize endpoint. If you use the `token` response_type, the `scope` parameter must contain a scope indicating which resource to issue the token for (for example, user.read on Microsoft Graph). It can also contain `code` in place of `token` to provide an authorization code, for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). This id_token+code response is sometimes called the hybrid flow. |
-| `redirect_uri` | recommended |The redirect_uri of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect_uris you registered in the portal, except it must be URL-encoded. |
-| `scope` | required |A space-separated list of [scopes](./permissions-consent-overview.md). For OpenID Connect (id_tokens), it must include the scope `openid`, which translates to the "Sign you in" permission in the consent UI. Optionally you may also want to include the `email` and `profile` scopes for gaining access to additional user data. You may also include other scopes in this request for requesting consent to various resources, if an access token is requested. |
+| `response_type` | required | Must include `id_token` for OpenID Connect sign-in. It may also include the `token` response type. Using `token` here will allow your app to receive an access token immediately from the authorize endpoint without having to make a second request to the token endpoint. If you use the `token` response_type, the `scope` parameter must contain a scope indicating which resource to issue the token for (for example, `user.read` on Microsoft Graph). It can also contain `code` in place of `token` to provide an authorization code, for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). This `id_token`+`code` response is sometimes called the hybrid flow. |
+| `redirect_uri` | recommended |The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except it must be URL-encoded. |
+| `scope` | required |A space-separated list of [scopes](./permissions-consent-overview.md). For OpenID Connect (`id_tokens`), it must include the scope `openid`, which translates to the "Sign you in" permission in the consent UI. Optionally you may also want to include the `email` and `profile` scopes for gaining access to additional user data. You may also include other scopes in this request for requesting consent to various resources, if an access token is requested. |
| `response_mode` | optional |Specifies the method that should be used to send the resulting token back to your app. Defaults to query for just an access token, but fragment if the request includes an id_token. | | `state` | recommended |A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state is also used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-| `nonce` | required |A value included in the request, generated by the app, that will be included in the resulting id_token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. Only required when an id_token is requested. |
-| `prompt` | optional |Indicates the type of user interaction that is required. The only valid values at this time are 'login', 'none', 'select_account', and 'consent'. `prompt=login` will force the user to enter their credentials on that request, negating single-sign on. `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform will return an error. `prompt=select_account` sends the user to an account picker where all of the accounts remembered in the session will appear. `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. |
+| `nonce` | required |A value included in the request, generated by the app, that will be included in the resulting ID token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. Only required when an id_token is requested. |
+| `prompt` | optional |Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `select_account`, and `consent`. `prompt=login` will force the user to enter their credentials on that request, negating single-sign on. `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via SSO, the Microsoft identity platform will return an error. `prompt=select_account` sends the user to an account picker where all of the accounts remembered in the session will appear. `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. |
| `login_hint` | optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](./optional-claims.md) from an earlier sign-in. | | `domain_hint` | optional |If included, it will skip the email-based discovery process that user goes through on the sign-in page, leading to a slightly more streamlined user experience. This parameter is commonly used for Line of Business apps that operate in a single tenant, where they'll provide a domain name within a given tenant, forwarding the user to the federation provider for that tenant. This hint prevents guests from signing into this application, and limits the use of cloud credentials like FIDO. |
code=0.AgAAktYV-sfpYESnQynylW_UKZmH-C9y_G1A
| Parameter | Description |
| --- | --- |
| `code` | Included if `response_type` includes `code`. It's an authorization code suitable for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). |
| `access_token` | Included if `response_type` includes `token`. The access token that the app requested. The access token shouldn't be decoded or otherwise inspected; it should be treated as an opaque string. |
-| `token_type` |Included if `response_type` includes `token`. Will always be `Bearer`. |
+| `token_type` |Included if `response_type` includes `token`. This will always be `Bearer`. |
| `expires_in` | Included if `response_type` includes `token`. Indicates the number of seconds the token is valid, for caching purposes. |
| `scope` | Included if `response_type` includes `token`. Indicates the scope(s) for which the access token will be valid. May not include all the requested scopes if they weren't applicable to the user, for example, Azure AD-only scopes requested when logging in using a personal account. |
-| `id_token` | A signed JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about id_tokens, see the [`id_token reference`](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested and `response_type` included `id_tokens`. |
+| `id_token` | A signed JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about ID tokens, see the [`id_token` reference](id-tokens.md). <br> **Note:** Only provided if the `openid` scope was requested and `response_type` included `id_token`. |
| `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |

[!INCLUDE [remind-not-to-validate-access-tokens](includes/remind-not-to-validate-access-tokens.md)]
For details on the query parameters in the URL, see [send the sign in request](#
> [!TIP] > Try copy & pasting the request below into a browser tab! (Don't forget to replace the `login_hint` values with the correct value for your user) >
->`https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=token&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&scope=https%3A%2F%2Fgraph.microsoft.com%2Fuser.read&response_mode=fragment&state=12345&nonce=678910&prompt=none&login_hint={your-username}`
+> ```
+> https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=token&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&scope=https%3A%2F%2Fgraph.microsoft.com%2Fuser.read&response_mode=fragment&state=12345&nonce=678910&prompt=none&login_hint={your-username}
+> ```
> > Note that this will work even in browsers without third-party cookie support, since you're entering this directly into the browser's address bar as opposed to opening it within an iframe.
access_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q..
| Parameter | Description |
| --- | --- |
| `access_token` | Included if `response_type` includes `token`. The access token that the app requested, in this case for the Microsoft Graph. The access token shouldn't be decoded or otherwise inspected; it should be treated as an opaque string. |
-| `token_type` | Will always be `Bearer`. |
+| `token_type` | This will always be `Bearer`. |
| `expires_in` | Indicates the number of seconds the token is valid, for caching purposes. |
-| `scope` | Indicates the scope(s) for which the access_token will be valid. May not include all of the scopes requested, if they weren't applicable to the user (in the case of Azure AD-only scopes being requested when a personal account is used to log in). |
+| `scope` | Indicates the scope(s) for which the access token will be valid. May not include all of the scopes requested, if they weren't applicable to the user (in the case of Azure AD-only scopes being requested when a personal account is used to log in). |
| `id_token` | A signed JSON Web Token (JWT). Included if `response_type` includes `id_token`. The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about ID tokens, see the [`id_token` reference](id-tokens.md). <br> **Note:** Only provided if the `openid` scope was requested. |
| `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
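Because these parameters come back in the URL fragment when `response_mode=fragment`, a single-page app parses and checks them client-side. The sketch below is a hypothetical outline of that handling (the `auth_state` key matches the earlier sketch); in practice MSAL.js performs this parsing and validation for you:

```javascript
// Minimal sketch: read the implicit-flow response from the URL fragment.
const params = new URLSearchParams(window.location.hash.substring(1));

const accessToken = params.get('access_token'); // treat as an opaque string
const returnedState = params.get('state');

// Reject the response if `state` didn't round-trip unchanged.
if (returnedState !== sessionStorage.getItem('auth_state')) {
  throw new Error('State mismatch: possible cross-site request forgery.');
}

// Cache the token no longer than the advertised lifetime.
const expiresInSeconds = Number(params.get('expires_in'));
console.log(`Access token present: ${Boolean(accessToken)}, valid ~${expiresInSeconds}s`);
```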
If you receive this error in the iframe request, the user must interactively sig
## Refreshing tokens
-The implicit grant does not provide refresh tokens. Both `id_token`s and `access_token`s will expire after a short period of time, so your app must be prepared to refresh these tokens periodically. To refresh either type of token, you can perform the same hidden iframe request from above using the `prompt=none` parameter to control the identity platform's behavior. If you want to receive a new `id_token`, be sure to use `id_token` in the `response_type` and `scope=openid`, as well as a `nonce` parameter.
+The implicit grant does not provide refresh tokens. Both ID tokens and access tokens will expire after a short period of time, so your app must be prepared to refresh these tokens periodically. To refresh either type of token, you can perform the same hidden iframe request from above using the `prompt=none` parameter to control the identity platform's behavior. If you want to receive a new ID token, be sure to use `id_token` in the `response_type` and `scope=openid`, as well as a `nonce` parameter.
In browsers that do not support third-party cookies, this results in an error indicating that no user is signed in.
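To make the hidden-iframe renewal concrete, here's a rough sketch reusing the placeholder client ID and redirect URI from this article's example request. MSAL.js implements this pattern end to end, including reading the fragment back from the frame, so treat this as an illustration rather than a recipe:

```javascript
// Minimal sketch: silently renew a token via a hidden iframe with prompt=none.
function renewTokenSilently() {
  const frame = document.createElement('iframe');
  frame.style.display = 'none';

  const authorizeUrl = new URL('https://login.microsoftonline.com/common/oauth2/v2.0/authorize');
  authorizeUrl.search = new URLSearchParams({
    client_id: '6731de76-14a6-49ae-97bc-6eba6914391e', // placeholder from the article
    response_type: 'token',
    redirect_uri: 'http://localhost/myapp/',
    scope: 'https://graph.microsoft.com/user.read',
    response_mode: 'fragment',
    prompt: 'none',        // fail with an error instead of showing UI
    state: randomValue(),  // from the earlier sketch
    nonce: randomValue(),
  }).toString();

  frame.src = authorizeUrl.toString();
  document.body.appendChild(frame); // the app's redirect page reads the fragment
}
```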
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
Previously updated : 10/27/2022 Last updated : 08/16/2023
When you connect a Windows device with Azure AD using an Azure AD join, Azure AD
- The Azure AD joined device local administrator role - The user performing the Azure AD join
-By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to the global administrators, you can also enable users that have been *only* assigned the device administrator role to manage a device.
+By adding Azure AD roles to the local administrators group, you can update the users who can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to users with the Global Administrator role, you can also enable users who have been assigned *only* the Azure AD Joined Device Local Administrator role to manage a device.
-## Manage the global administrators role
+## Manage the Global Administrator role
-To view and update the membership of the Global Administrator role, see:
+To view and update the membership of the [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) role, see:
- [View all members of an administrator role in Azure Active Directory](../roles/manage-roles-portal.md) - [Assign a user to administrator roles in Azure Active Directory](../fundamentals/how-subscriptions-associated-directory.md)
-## Manage the device administrator role
+## Manage the Azure AD Joined Device Local Administrator role
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-In the Azure portal, you can manage the device administrator role from **Device settings**.
+You can manage the [Azure AD Joined Device Local Administrator](/azure/active-directory/roles/permissions-reference#azure-ad-joined-device-local-administrator) role from **Device settings**.
-1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
1. Browse to **Azure Active Directory** > **Devices** > **Device settings**. 1. Select **Manage Additional local administrators on all Azure AD joined devices**. 1. Select **Add assignments** then choose the other administrators you want to add and select **Add**.
-To modify the device administrator role, configure **Additional local administrators on all Azure AD joined devices**.
+To modify the Azure AD Joined Device Local Administrator role, configure **Additional local administrators on all Azure AD joined devices**.
> [!NOTE] > This option requires Azure AD Premium licenses.
-Device administrators are assigned to all Azure AD joined devices. You can't scope device administrators to a specific set of devices. Updating the device administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed into, the privilege elevation takes place when *both* the below actions happen:
+Azure AD Joined Device Local Administrators are assigned to all Azure AD joined devices. You can't scope this role to a specific set of devices. Updating the Azure AD Joined Device Local Administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed in, the privilege elevation takes place when *both* of the following actions happen:
- Up to 4 hours have passed for Azure AD to issue a new Primary Refresh Token with the appropriate privileges.
- The user signs out and signs back in, not lock/unlock, to refresh their profile.
-Users won't be listed in the local administrator group, the permissions are received through the Primary Refresh Token.
+Users aren't directly listed in the local administrator group; the permissions are received through the Primary Refresh Token.
> [!NOTE]
> The above actions are not applicable to users who have not signed in to the relevant device previously. In this case, the administrator privileges are applied immediately after their first sign-in to the device.

## Manage administrator privileges using Azure AD groups (preview)
-Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you the granularity to configure distinct administrators for different groups of devices.
+Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you with the granularity to configure distinct administrators for different groups of devices.
Organizations can use Intune to manage these policies using [Custom OMA-URI Settings](/mem/intune/configuration/custom-settings-windows-10) or [Account protection policy](/mem/intune/protect/endpoint-security-account-protection-policy). A few considerations for using this policy: -- Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID is defined by the property `securityIdentifier` in the API response.
+- Adding Azure AD groups through the policy requires the group's SID, which can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID equates to the property `securityIdentifier` in the API response (see the sketch after this list).
- Administrator privileges using this policy are evaluated only for the following well-known groups on a Windows 10 or newer device: Administrators, Users, Guests, Power Users, Remote Desktop Users, and Remote Management Users.
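As an illustration of the first consideration, the following sketch fetches a group's `securityIdentifier` with a plain Microsoft Graph call. The function shape and token handling are this example's own assumptions; acquiring the access token is out of scope here:

```javascript
// Hypothetical sketch: look up the SID of an Azure AD group for the
// LocalUsersAndGroups policy. groupId and accessToken are placeholders.
async function getGroupSid(groupId, accessToken) {
  const response = await fetch(
    `https://graph.microsoft.com/v1.0/groups/${groupId}?$select=securityIdentifier`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!response.ok) {
    throw new Error(`Graph request failed: ${response.status}`);
  }
  const group = await response.json();
  return group.securityIdentifier; // the SID value referenced in the policy
}
```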
By default, Azure AD adds the user performing the Azure AD join to the administr
- [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) - Windows Autopilot provides you with an option to prevent the primary user performing the join from becoming a local administrator by [creating an Autopilot profile](/intune/enrollment-autopilot#create-an-autopilot-deployment-profile).-- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join that is performed in the context of a bulk enrollment happens in the context of an auto-created user. Users signing in after a device has been joined aren't added to the administrators group.
+- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join that is performed in the context of a bulk enrollment happens in the context of an autocreated user. Users signing in after a device has been joined aren't added to the administrators group.
## Manually elevate a user on a device
Additionally, you can also add users using the command prompt:
## Considerations -- You can only assign role based groups to the device administrator role.-- Device administrators are assigned to all Azure AD Joined devices. They can't be scoped to a specific set of devices.
+- You can only assign role-based groups to the Azure AD Joined Device Local Administrator role.
+- The Azure AD Joined Device Local Administrator role is assigned to all Azure AD Joined devices. This role can't be scoped to a specific set of devices.
- Local administrator rights on Windows devices aren't applicable to [Azure AD B2B guest users](../external-identities/what-is-b2b.md).-- When you remove users from the device administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. This revocation, similar to the privilege elevation, could take upto 4 hours.
+- When you remove users from the Azure AD Joined Device Local Administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. This revocation, similar to the privilege elevation, could take up to 4 hours.
## Next steps -- To get an overview of how to manage device in the Azure portal, see [managing devices using the Azure portal](manage-device-identities.md).
+- To get an overview of how to manage devices, see [managing devices using the Azure portal](manage-device-identities.md).
- To learn more about device-based Conditional Access, see [Conditional Access: Require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md).
active-directory Device Join Out Of Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-join-out-of-box.md
Your device may restart several times as part of the setup process. Your device
:::image type="content" source="media/device-join-out-of-box/windows-11-first-run-experience-device-sign-in-info.png" alt-text="Screenshot of Windows 11 out-of-box experience showing the sign-in experience."::: 1. Continue to follow the prompts to set up your device. 1. Azure AD checks if an enrollment in mobile device management is required and starts the process.
 - 1. Windows registers the device in the organization's directory in Azure AD and enrolls it in mobile device management, if applicable.
 + 1. Windows registers the device in the organization's directory and enrolls it in mobile device management, if applicable.
1. If you sign in with a managed user account, Windows takes you to the desktop through the automatic sign-in process. Federated users are directed to the Windows sign-in screen to enter their credentials. :::image type="content" source="media/device-join-out-of-box/windows-11-first-run-experience-complete-automatic-sign-in-desktop.png" alt-text="Screenshot of Windows 11 at the desktop after first run experience Azure AD joined.":::
To verify whether a device is joined to your Azure AD, review the **Access work
## Next steps -- For more information about managing devices in the Azure portal, see [managing devices using the Azure portal](manage-device-identities.md).
+- For more information about managing devices, see [managing devices using the Azure portal](manage-device-identities.md).
- [What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune) - [Overview of Windows Autopilot](/mem/autopilot/windows-autopilot) - [Passwordless authentication options for Azure Active Directory](../authentication/concept-authentication-passwordless.md)
active-directory Enterprise State Roaming Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-enable.md
Enterprise State Roaming provides users with a unified experience across their W
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
1. Browse to **Azure Active Directory** > **Devices** > **Enterprise State Roaming**. 1. Select **Users may sync settings and app data across devices**. For more information, see [how to configure device settings](./manage-device-identities.md).
The country/region value is set as part of the Azure AD directory creation proce
Follow these steps to view a per-user device sync status report.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator).
1. Browse to **Azure Active Directory** > **Users** > **All users**. 1. Select the user, and then select **Devices**. 1. Select **View devices syncing settings and app data** to show sync status.
active-directory Enterprise State Roaming Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-troubleshooting.md
Enterprise State Roaming requires the device to be registered with Azure AD. Alt
**Potential issue**: **WamDefaultSet** and **AzureAdJoined** both have "NO" in the field value, the device was domain-joined and registered with Azure AD, and the device doesn't sync. If it's showing this, the device may need to wait for policy to be applied or the authentication for the device failed when connecting to Azure AD. The user may have to wait a few hours for the policy to be applied. Other troubleshooting steps may include retrying autoregistration by signing out and back in, or launching the task in Task Scheduler. In some cases, running "*dsregcmd.exe /leave*" in an elevated command prompt window, rebooting, and trying registration again may help with this issue.
-**Potential issue**: The field for **SettingsUrl** is empty and the device doesn't sync. The user may have last logged in to the device before Enterprise State Roaming was enabled in the Azure portal. Restart the device and have the user login. Optionally, in the portal, try having the IT Admin navigate to **Azure Active Directory** > **Devices** > **Enterprise State Roaming** disable and re-enable **Users may sync settings and app data across devices**. Once re-enabled, restart the device and have the user login. If this doesn't resolve the issue, **SettingsUrl** may be empty if there's a bad device certificate. In this case, running "*dsregcmd.exe /leave*" in an elevated command prompt window, rebooting, and trying registration again may help with this issue.
+**Potential issue**: The field for **SettingsUrl** is empty and the device doesn't sync. The user may have last logged in to the device before Enterprise State Roaming was enabled. Restart the device and have the user log in. Optionally, in the portal, try having the IT admin navigate to **Azure Active Directory** > **Devices** > **Enterprise State Roaming**, disable and re-enable **Users may sync settings and app data across devices**. Once re-enabled, restart the device and have the user log in. If this doesn't resolve the issue, **SettingsUrl** may be empty if there's a bad device certificate. In this case, running "*dsregcmd.exe /leave*" in an elevated command prompt window, rebooting, and trying registration again may help with this issue.
## Enterprise State Roaming and multifactor authentication
active-directory How To Hybrid Join Verify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/how-to-hybrid-join-verify.md
description: Verify configurations for hybrid Azure AD joined devices
+ Last updated 02/27/2023
For downlevel devices, see the article [Troubleshooting hybrid Azure Active Dire
## Using the Azure portal
-1. Go to the devices page using a [direct link](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices).
-2. Information on how to locate a device can be found in [How to manage device identities using the Azure portal](./manage-device-identities.md).
-3. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if it failed to register and Azure AD Connect is configured to sync the devices. Wait for Azure AD Connect to complete a sync cycle.
-4. If the **Registered** column contains a **date/time**, then hybrid Azure AD join has completed.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
+1. Browse to **Azure Active Directory** > **Devices** > **All devices**.
+1. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if it failed to register and Azure AD Connect is configured to sync the devices. Wait for Azure AD Connect to complete a sync cycle.
+1. If the **Registered** column contains a **date/time**, then hybrid Azure AD join has completed.
## Using PowerShell
active-directory Howto Manage Local Admin Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-manage-local-admin-passwords.md
Other than the built-in Azure AD roles of Cloud Device Administrator, Intune Adm
To enable Windows LAPS with Azure AD, you must take actions in Azure AD and the devices you wish to manage. We recommend organizations [manage Windows LAPS using Microsoft Intune](/mem/intune/protect/windows-laps-policy). However, if your devices are Azure AD joined but you're not using Microsoft Intune or Microsoft Intune isn't supported (like for Windows Server 2019/2022), you can still deploy Windows LAPS for Azure AD manually. For more information, see the article [Configure Windows LAPS policy settings](/windows-server/identity/laps/laps-management-policy-settings).
-1. Sign in to the **Azure portal** as a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
+1. Sign in to the **Azure portal** as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator).
1. Browse to **Azure Active Directory** > **Devices** > **Device settings**.
1. Select **Yes** for the Enable Local Administrator Password Solution (LAPS) setting and select **Save**. You may also use the Microsoft Graph API [Update deviceRegistrationPolicy](/graph/api/deviceregistrationpolicy-update?view=graph-rest-beta&preserve-view=true); a hedged sketch follows these steps.
1. Configure a client-side policy and set the **BackUpDirectory** to be Azure AD.
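For the Graph-based alternative mentioned above, here's a rough sketch. The HTTP method and body shape are assumptions about the beta `deviceRegistrationPolicy` resource; confirm them against the linked reference before relying on this:

```javascript
// Hypothetical sketch: enable the LAPS setting via the beta Graph policy API.
async function enableLapsPolicy(accessToken) {
  const url = 'https://graph.microsoft.com/beta/policies/deviceRegistrationPolicy';
  const headers = {
    Authorization: `Bearer ${accessToken}`,
    'Content-Type': 'application/json',
  };

  // Read the current policy so the write can send the complete object back.
  const current = await (await fetch(url, { headers })).json();
  current.localAdminPassword = { isEnabled: true }; // assumed LAPS toggle property

  const response = await fetch(url, {
    method: 'PUT', // assumption: the update takes a full-object PUT
    headers,
    body: JSON.stringify(current),
  });
  if (!response.ok) throw new Error(`Policy update failed: ${response.status}`);
}
```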
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
To configure role assignments for your Azure AD-enabled Linux VMs:
| Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
| Assign access to | User, group, service principal, or managed identity |
- ![Screenshot that shows the page for adding a role assignment in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot that shows the page for adding a role assignment.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
After a few moments, the security principal is assigned the role at the selected scope.
The application that appears in the Conditional Access policy is called *Azure L
If the Azure Linux VM Sign-In application is missing from Conditional Access, make sure the application isn't in the tenant:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
1. Browse to **Azure Active Directory** > **Enterprise applications**. 1. Remove the filters to see all applications, and search for **Virtual Machine**. If you don't see Microsoft Azure Linux Virtual Machine Sign-In as a result, the service principal is missing from the tenant.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
--+ # Log in to a Windows virtual machine in Azure by using Azure AD including passwordless
To configure role assignments for your Azure AD-enabled Windows Server 2019 Data
| Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
| Assign access to | User, group, service principal, or managed identity |
- ![Screenshot that shows the page for adding a role assignment in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot that shows the page for adding a role assignment.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
### Azure Cloud Shell
Exit code -2145648607 translates to `DSREG_AUTOJOIN_DISC_FAILED`. The extension
- `curl https://pas.windows.net/ -D -` > [!NOTE]
- > Replace `<TenantID>` with the Azure AD tenant ID that's associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name or select **Azure Active Directory** > **Properties** > **Directory ID** in the Azure portal.
+ > Replace `<TenantID>` with the Azure AD tenant ID that's associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name or select **Azure Active Directory** > **Properties** > **Directory ID**.
> > Attempts to connect to `enterpriseregistration.windows.net` might return 404 Not Found, which is expected behavior. Attempts to connect to `pas.windows.net` might prompt for PIN credentials or might return 404 Not Found. (You don't need to enter the PIN.) Either one is sufficient to verify that the URL is reachable.
Share your feedback about this feature or report problems with using it on the [
If the Azure Windows VM Sign-In application is missing from Conditional Access, make sure that the application is in the tenant:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator).
1. Browse to **Azure Active Directory** > **Enterprise applications**. 1. Remove the filters to see all applications, and search for **VM**. If you don't see **Azure Windows VM Sign-In** as a result, the service principal is missing from the tenant.
active-directory Hybrid Join Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-manual.md
description: Learn how to manually configure hybrid Azure Active Directory join
+ Last updated 07/05/2022
The following script helps you with the creation of the issuance transform rules
#### Remarks * This script appends the rules to the existing rules. Don't run the script twice, because the set of rules would be added twice. Make sure that no corresponding rules exist for these claims (under the corresponding conditions) before running the script again.
-* If you have multiple verified domain names (as shown in the Azure portal or via the **Get-MsolDomain** cmdlet), set the value of **$multipleVerifiedDomainNames** in the script to **$true**. Also make sure that you remove any existing **issuerid** claim that might have been created by Azure AD Connect or via other means. Here's an example for this rule:
+* If you have multiple verified domain names, set the value of **$multipleVerifiedDomainNames** in the script to **$true**. Also make sure that you remove any existing **issuerid** claim that might have been created by Azure AD Connect or via other means. Here's an example for this rule:
``` c:[Type == "http://schemas.xmlsoap.org/claims/UPN"]
active-directory Manage Device Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-device-identities.md
Azure Active Directory (Azure AD) provides a central place to manage device identities and monitor related event information.
-[![Screenshot that shows the devices overview in the Azure portal.](./media/manage-device-identities/devices-azure-portal.png)](./media/manage-device-identities/devices-azure-portal.png#lightbox)
+[![Screenshot that shows the devices overview.](./media/manage-device-identities/devices-azure-portal.png)](./media/manage-device-identities/devices-azure-portal.png#lightbox)
You can access the devices overview by completing these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
1. Go to **Azure Active Directory** > **Devices**. In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring.
From there, you can go to **All devices** to:
- Review device-related audit logs. - Download devices.
-[![Screenshot that shows the All devices view in the Azure portal.](./media/manage-device-identities/all-devices-azure-portal.png)](./media/manage-device-identities/all-devices-azure-portal.png#lightbox)
+[![Screenshot that shows the All devices view.](./media/manage-device-identities/all-devices-azure-portal.png)](./media/manage-device-identities/all-devices-azure-portal.png#lightbox)
> [!TIP] > - Hybrid Azure AD joined Windows 10 or newer devices don't have an owner. If you're looking for a device by owner and don't find it, search by the device ID.
In this preview, you have the ability to infinitely scroll, reorder columns, and
To enable the preview in the **All devices** view:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
2. Go to **Azure Active Directory** > **Devices** > **All devices**. 3. Select the **Preview features** button. 4. Turn on the toggle that says **Enhanced devices list experience**. Select **Apply**.
The exported list includes these device identity attributes:
If you want to manage device identities by using the Azure portal, the devices need to be either [registered or joined](overview.md) to Azure AD. As an administrator, you can control the process of registering and joining devices by configuring the following device settings.
-You must be assigned one of the following roles to view device settings in the Azure portal:
+You must be assigned one of the following roles to view device settings:
- Global Administrator - Global Reader
You must be assigned one of the following roles to view device settings in the A
- Windows 365 Administrator - Directory Reviewer
-You must be assigned one of the following roles to manage device settings in the Azure portal:
+You must be assigned one of the following roles to manage device settings:
- Global Administrator - Cloud Device Administrator
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md
description: Learn how to remove stale devices from your database of registered
+ Last updated 09/27/2022
-#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can I can cleanup my device registration data.
-
+#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can clean up my device registration data.
# How To: Manage stale devices in Azure AD
If the delta between the existing value of the activity timestamp and the curren
You have two options to retrieve the value of the activity timestamp: -- The **Activity** column on the [devices page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices) in the Azure portal
+- The **Activity** column on the [devices page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices).
- :::image type="content" source="./media/manage-stale-devices/01.png" alt-text="Screenshot of a page in the Azure portal listing the name, owner, and other information on devices. One column lists the activity time stamp." border="false":::
+ :::image type="content" source="./media/manage-stale-devices/01.png" alt-text="Screenshot listing the name, owner, and other information of devices. One column lists the activity time stamp." border="false":::
-- The [Get-AzureADDevice](/powershell/module/azuread/Get-AzureADDevice) cmdlet
+- The [Get-AzureADDevice](/powershell/module/azuread/Get-AzureADDevice) cmdlet (see the Graph-based sketch after the screenshot below).
:::image type="content" source="./media/manage-stale-devices/02.png" alt-text="Screenshot showing command-line output. One line is highlighted and lists a time stamp for the ApproximateLastLogonTimeStamp value." border="false":::
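A third option is querying Microsoft Graph directly for `approximateLastSignInDateTime`, sketched below. The `$count` parameter and `ConsistencyLevel: eventual` header are assumptions based on Graph's advanced-query requirements for this filter:

```javascript
// Hypothetical sketch: list devices whose activity timestamp is older than a cutoff.
async function listStaleDevices(accessToken, cutoffIso) {
  const url = new URL('https://graph.microsoft.com/v1.0/devices');
  url.searchParams.set('$filter', `approximateLastSignInDateTime le ${cutoffIso}`);
  url.searchParams.set('$select', 'displayName,approximateLastSignInDateTime');
  url.searchParams.set('$count', 'true');

  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      ConsistencyLevel: 'eventual', // assumed requirement for this filter
    },
  });
  if (!response.ok) throw new Error(`Graph request failed: ${response.status}`);
  return (await response.json()).value;
}

// Example: devices inactive for roughly the last 90 days.
// listStaleDevices(token, new Date(Date.now() - 90 * 864e5).toISOString());
```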
Any authentication where a device is being used to authenticate to Azure AD are
Devices managed with Intune can be retired or wiped, for more information see the article [Remove devices by using wipe, retire, or manually unenrolling the device](/mem/intune/remote-actions/devices-wipe).
-To get an overview of how to manage device in the Azure portal, see [managing devices using the Azure portal](manage-device-identities.md)
+To get an overview of how to manage devices, see [managing devices using the Azure portal](manage-device-identities.md)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/overview.md
Getting devices into Azure AD can be done in a self-service manner or a control
- Learn more about [Azure AD registered devices](concept-device-registration.md) - Learn more about [Azure AD joined devices](concept-directory-join.md) - Learn more about [hybrid Azure AD joined devices](concept-hybrid-join.md)-- To get an overview of how to manage device identities in the Azure portal, see [Managing device identities using the Azure portal](manage-device-identities.md).
+- To get an overview of how to manage device identities, see [Managing device identities using the Azure portal](manage-device-identities.md).
- To learn more about device-based Conditional Access, see [Configure Azure Active Directory device-based Conditional Access policies](../conditional-access/concept-conditional-access-grant.md).
active-directory Troubleshoot Device Windows Joined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-device-windows-joined.md
If you have a Windows 11 or Windows 10 device that isn't working with Azure Active Directory (Azure AD) correctly, start your troubleshooting here.
-1. Sign in to the **Azure portal**.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader).
1. Browse to **Azure Active Directory** > **Devices** > **Diagnose and solve problems**. 1. Select **Troubleshoot** under the **Windows 10+ related issue** troubleshooter. :::image type="content" source="media/troubleshoot-device-windows-joined/devices-troubleshoot-windows.png" alt-text="A screenshot showing the Windows troubleshooter located in the diagnose and solve pane of the Azure portal." lightbox="media/troubleshoot-device-windows-joined/devices-troubleshoot-windows.png":::
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Use Event Viewer to look for the log entries that are logged by the Azure AD Clo
| Error code | Reason | Resolution |
| --- | --- | --- |
-| **AADSTS50155: Device authentication failed** | <li>Azure AD is unable to authenticate the device to issue a PRT.<li>Confirm that the device hasn't been deleted or disabled in the Azure portal. For more information about this issue, see [Azure Active Directory device management FAQ](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices). | Follow the instructions for this issue in [Azure Active Directory device management FAQ](faq.yml#i-disabled-or-deleted-my-device-in-the-azure-portal-or-by-using-windows-powershell--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do) to re-register the device based on the device join type. |
+| **AADSTS50155: Device authentication failed** | <li>Azure AD is unable to authenticate the device to issue a PRT.<li>Confirm that the device hasn't been deleted or disabled. For more information about this issue, see [Azure Active Directory device management FAQ](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices). | Follow the instructions for this issue in [Azure Active Directory device management FAQ](faq.yml#i-disabled-or-deleted-my-device--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do) to re-register the device based on the device join type. |
| **AADSTS50034: The user account `Account` does not exist in the `tenant id` directory** | Azure AD is unable to find the user account in the tenant. | <li>Ensure that the user is typing the correct UPN.<li>Ensure that the on-premises user account is being synced with Azure AD.<li>Event 1144 (Azure AD analytics logs) will contain the UPN provided. | | **AADSTS50126: Error validating credentials due to invalid username or password.** | <li>The username and password entered by the user in the Windows LoginUI are incorrect.<li>If the tenant has password hash sync enabled, the device is hybrid-joined, and the user just changed the password, it's likely that the new password hasn't synced with Azure AD. | To acquire a fresh PRT with the new credentials, wait for the Azure AD password sync to finish. | | | |
active-directory Troubleshoot Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-primary-refresh-token.md
You can find a full list and description of server error codes in [Azure AD auth
- Azure AD can't authenticate the device to issue a PRT. -- The device might have been deleted or disabled in the Azure portal. (For more information, see [Why do my users see an error message saying "Your organization has deleted the device" or "Your organization has disabled the device" on their Windows 10/11 devices?](./faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices))
+- The device might have been deleted or disabled. (For more information, see [Why do my users see an error message saying "Your organization has deleted the device" or "Your organization has disabled the device" on their Windows 10/11 devices?](./faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices))
##### Solution
-Re-register the device based on the device join type. For instructions, see [I disabled or deleted my device in the Azure portal or by using Windows PowerShell. But the local state on the device says it's still registered. What should I do?](./faq.yml#i-disabled-or-deleted-my-device-in-the-azure-portal-or-by-using-windows-powershell--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do).
+Re-register the device based on the device join type. For instructions, see [I disabled or deleted my device. But the local state on the device says it's still registered. What should I do?](./faq.yml#i-disabled-or-deleted-my-device--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do).
</details> <details>
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
Last updated 10/03/2022 -+
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md
Last updated 03/02/2022 -+
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
Last updated 06/23/2022 -+
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Last updated 06/23/2022 --+
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
Last updated 06/28/2023 -+
active-directory Groups Change Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md
Last updated 06/23/2022 -+
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md
Last updated 06/24/2022 -+
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md
Last updated 06/24/2022 -+
active-directory Groups Restore Deleted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md
Last updated 06/24/2022 -+ # Restore a deleted Microsoft 365 group in Azure Active Directory
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Last updated 06/12/2023 -+
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Last updated 06/24/2022 -+ # Azure Active Directory cmdlets for configuring group settings
active-directory Groups Settings V2 Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md
Last updated 06/24/2022 -+ # Azure Active Directory version 2 cmdlets for group management
active-directory Licensing Group Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md
Last updated 01/09/2023 -+
active-directory Licensing Ps Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-ps-examples.md
+ Last updated 12/02/2020
active-directory Linkedin Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-integration.md
Last updated 06/24/2022 -+
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
-+
active-directory Users Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-custom-security-attributes.md
-+
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
-+
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md
Last updated 06/24/2022-+
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
description: Learn how to enforce multi-factor authentication policies for Azure
+ Last updated 04/17/2023
active-directory Bulk Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md
Last updated 07/31/2023
--
-# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
+
+# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
# Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users
active-directory Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md
Last updated 04/06/2023
-+ # Customer intent: As a tenant administrator, I want to bulk-invite external users to an organization from email addresses that I've stored in a .csv file.
active-directory How To Customize Languages Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-languages-customers.md
The following languages are supported in the customer tenant:
- Spanish (Spain) - Swedish (Sweden) - Thai (Thailand)
- - Turkish (Turkey)
+ - Turkish (Türkiye)
- Ukrainian (Ukraine) 6. Customize the elements on the **Basics**, **Layout**, **Header**, **Footer**, **Sign-in form**, and **Text** tabs. For detailed instructions, see [Customize the branding and end-user experience](how-to-customize-branding-customers.md).
active-directory How To Facebook Federation Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-facebook-federation-customers.md
Last updated 06/20/2023 --+ #Customer intent: As a dev, devops, or it admin, I want to
At this point, the Facebook identity provider has been set up in your customer t
## Next steps - [Add Google as an identity provider](how-to-google-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
active-directory How To Google Federation Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-google-federation-customers.md
Last updated 05/24/2023 --+ #Customer intent: As a dev, devops, or it admin, I want to
At this point, the Google identity provider has been set up in your Azure AD, bu
## Next steps - [Add Facebook as an identity provider](how-to-facebook-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
active-directory How To Single Page App Vanillajs Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-sign-in-sign-out.md
- Title: Tutorial - Add sign-in and sign-out to a Vanilla JavaScript single-page app (SPA) for a customer tenant
-description: Learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant.
-------- Previously updated : 05/25/2023
-#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
--
-# Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant
-
-In the [previous article](how-to-single-page-app-vanillajs-configure-authentication.md), you edited the popup and redirection files that handle the sign-in page response. This tutorial demonstrates how to build a responsive user interface (UI) that contains a **Sign-In** and **Sign-Out** button and run the project to test the sign-in and sign-out functionality.
-
-In this tutorial:
-
-> [!div class="checklist"]
-> * Add code to the *index.html* file to create the user interface
-> * Add code to the *signout.html* file to create the sign-out page
-> * Sign in and sign out of the application
-
-## Prerequisites
-
-* Completion of the prerequisites and steps in [Create components for authentication and authorization](how-to-single-page-app-vanillajs-configure-authentication.md).
-
-## Add code to the *index.html* file
-
-The main page of the SPA, *index.html*, is the first page that is loaded when the application is started. It's also the page that is loaded when the user selects the **Sign-Out** button.
-
-1. Open *public/index.html* and add the following code snippet:
-
- ```html
- <!DOCTYPE html>
- <html lang="en">
-
- <head>
- <meta charset="UTF-8">
- <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no">
- <title>Microsoft identity platform</title>
- <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon">
- <link rel="stylesheet" href="./styles.css">
-
- <!-- adding Bootstrap 5 for UI components -->
- <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/css/bootstrap.min.css" rel="stylesheet"
- integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi" crossorigin="anonymous">
-
- <!-- msal.min.js can be used in the place of msal-browser.js -->
- <script src="/msal-browser.min.js"></script>
- </head>
-
- <body>
- <nav class="navbar navbar-expand-sm navbar-dark bg-primary navbarStyle">
- <a class="navbar-brand" href="/">Microsoft identity platform</a>
- <div class="navbar-collapse justify-content-end">
- <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button>
- <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button>
- </div>
- </nav>
- <br>
- <h5 id="title-div" class="card-header text-center">Vanilla JavaScript single-page application secured with MSAL.js
- </h5>
- <h5 id="welcome-div" class="card-header text-center d-none"></h5>
- <br>
- <div class="table-responsive-ms" id="table">
- <table id="table-div" class="table table-striped d-none">
- <thead id="table-head-div">
- <tr>
- <th>Claim Type</th>
- <th>Value</th>
- <th>Description</th>
- </tr>
- </thead>
- <tbody id="table-body-div">
- </tbody>
- </table>
- </div>
- <!-- importing bootstrap.js and supporting js libraries -->
- <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"
- integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous">
- </script>
- <script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js"
- integrity="sha384-oBqDVmMz9ATKxIep9tiCxS/Z9fNfEXiDAYTujMAeBAsjFuCZSmKbSSUnQlmh/jp3"
- crossorigin="anonymous"></script>
- <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/js/bootstrap.bundle.min.js"
- integrity="sha384-OERcA2EqjJCMA+/3y+gxIOqMEjwtxJY7qPCqsdltbNJuaOe923+mo//f6V8Qbsw3"
- crossorigin="anonymous"></script>
-
- <!-- importing app scripts (load order is important) -->
- <script type="text/javascript" src="./authConfig.js"></script>
- <script type="text/javascript" src="./ui.js"></script>
- <script type="text/javascript" src="./claimUtils.js"></script>
- <!-- <script type="text/javascript" src="./authRedirect.js"></script> -->
- <!-- uncomment the above line and comment the line below if you would like to use the redirect flow -->
- <script type="text/javascript" src="./authPopup.js"></script>
- </body>
-
- </html>
- ```
-
-1. Save the file.
-
-## Add code to the *signout.html* file
-
-1. Open *public/signout.html* and add the following code snippet:
-
- ```html
- <!DOCTYPE html>
- <html lang="en">
- <head>
- <meta charset="UTF-8">
- <meta name="viewport" content="width=device-width, initial-scale=1.0">
- <title>Azure AD | Vanilla JavaScript SPA</title>
- <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon">
-
- <!-- adding Bootstrap 4 for UI components -->
- <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
- </head>
- <body>
- <div class="jumbotron" style="margin: 10%">
- <h1>Goodbye!</h1>
- <p>You have signed out and your cache has been cleared.</p>
- <a class="btn btn-primary" href="/" role="button">Take me back</a>
- </div>
- </body>
- </html>
- ```
-
-1. Save the file.
-
-## Add code to the *ui.js* file
-
-When authorization has been configured, the user interface can be created to allow users to sign in and sign out when the project is run. To build the user interface (UI) for the application, [Bootstrap](https://getbootstrap.com/) is used to create a responsive UI that contains a **Sign-In** and **Sign-Out** button.
-
-1. Open *public/ui.js* and add the following code snippet:
-
- ```javascript
- // Select DOM elements to work with
- const signInButton = document.getElementById('signIn');
- const signOutButton = document.getElementById('signOut');
- const titleDiv = document.getElementById('title-div');
- const welcomeDiv = document.getElementById('welcome-div');
- const tableDiv = document.getElementById('table-div');
- const tableBody = document.getElementById('table-body-div');
-
- function welcomeUser(username) {
- signInButton.classList.add('d-none');
- signOutButton.classList.remove('d-none');
- titleDiv.classList.add('d-none');
- welcomeDiv.classList.remove('d-none');
- welcomeDiv.innerHTML = `Welcome ${username}!`;
- };
-
- function updateTable(account) {
- tableDiv.classList.remove('d-none');
-
- const tokenClaims = createClaimsTable(account.idTokenClaims);
-
- Object.keys(tokenClaims).forEach((key) => {
- let row = tableBody.insertRow(0);
- let cell1 = row.insertCell(0);
- let cell2 = row.insertCell(1);
- let cell3 = row.insertCell(2);
- cell1.innerHTML = tokenClaims[key][0];
- cell2.innerHTML = tokenClaims[key][1];
- cell3.innerHTML = tokenClaims[key][2];
- });
- };
- ```
-
-1. Save the file.
-
-## Add code to the *styles.css* file
-
-1. Open *public/styles.css* and add the following code snippet:
-
- ```css
- .navbarStyle {
- padding: .5rem 1rem !important;
- }
-
- .table-responsive-ms {
- max-height: 39rem !important;
- padding-left: 10%;
- padding-right: 10%;
- }
- ```
-
-1. Save the file.
-
-## Run your project and sign in
-
-Now that all the required code snippets have been added, the application can be called and tested in a web browser.
-
-1. Open a new terminal and run the following command to start your express web server.
- ```powershell
- npm start
- ```
-1. Open a new private browser window, and go to the application URL, `http://localhost:3000/`.
-1. Select **No account? Create one**, which starts the sign-up flow.
-1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application.
-1. Enter the one-time passcode sent to your email address, then enter a new password and the requested account details to complete the sign-up flow.
-
- 1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**.
-
-1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data.
-
- :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png":::
-
-## Sign out of the application
-
-1. To sign out of the application, select **Sign out** in the navigation bar.
-1. A window appears asking which account to sign out of.
-1. Upon successful sign out, a final window appears advising you to close all browser windows.
-
-## Next steps
--- [Enable self-service password reset](./how-to-enable-password-reset-customers.md)
active-directory Sample Single Page App Vanillajs Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-single-page-app-vanillajs-sign-in.md
Title: Sign in users in a sample vanilla JavaScript single-page application
-description: Learn how to configure a sample JavaSCript single-page application (SPA) to sign in and sign out users.
+description: Learn how to configure a sample JavaScript single-page application (SPA) to sign in and sign out users.
Previously updated : 06/23/2023 Last updated : 08/17/2023 #Customer intent: As a dev, devops, I want to learn about how to configure a sample vanilla JS SPA to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
If you choose to download the `.zip` file, extract the sample app file to a fold
``` 1. Open a web browser and navigate to `http://localhost:3000/`.
-1. Select **No account? Create one**, which starts the sign-up flow.
-1. In the **Create account** window, enter the email address registered to your customer tenant, which starts the sign-up flow as a user for your application.
-1. After entering a one-time passcode from the customer tenant, enter a new password and more account details, this sign-up flow is completed.
-1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**.
+1. Sign in with an account registered to the customer tenant.
+1. Once signed in, the display name is shown next to the **Sign out** button, as shown in the following screenshot.
1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data. :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png":::
active-directory Samples Ciam All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/samples-ciam-all.md
Previously updated : 07/17/2023 Last updated : 08/17/2023
These samples and how-to guides demonstrate how to integrate a single-page appli
> [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample guide | Build and integrate guide | > | - | -- | - |
-> | JavaScript, Vanilla | &#8226; [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | &#8226; [Sign in users](how-to-single-page-app-vanillajs-prepare-tenant.md) |
+> | JavaScript, Vanilla | &#8226; [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | &#8226; [Sign in users](tutorial-single-page-app-vanillajs-prepare-tenant.md) |
> | JavaScript, Angular | &#8226; [Sign in users](./sample-single-page-app-angular-sign-in.md) | | > | JavaScript, React | &#8226; [Sign in users](./sample-single-page-app-react-sign-in.md) | &#8226; [Sign in users](./tutorial-single-page-app-react-sign-in-prepare-tenant.md) |
These samples and how-to guides demonstrate how to write a daemon application th
> [!div class="mx-tdCol2BreakAll"] > | App type | Code sample guide | Build and integrate guide | > | - | -- | - |
-> | Single-page application | &#8226; [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | &#8226; [Sign in users](how-to-single-page-app-vanillajs-prepare-tenant.md) |
+> | Single-page application | &#8226; [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | &#8226; [Sign in users](tutorial-single-page-app-vanillajs-prepare-tenant.md) |
### JavaScript, Angular
active-directory Tutorial Single Page App Vanillajs Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-configure-authentication.md
+
+ Title: Tutorial - Handle authentication flows in a Vanilla JavaScript single-page app
+description: Learn how to configure authentication for a Vanilla JavaScript single-page app (SPA) with your Azure Active Directory (AD) for customers tenant.
+++++++++ Last updated : 08/17/2023
+#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
++
+# Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app
+
+In the [previous article](./tutorial-single-page-app-vanillajs-prepare-app.md), you created a Vanilla JavaScript (JS) single-page application (SPA) and a server to host it. This tutorial demonstrates how to configure the application to authenticate and authorize users to access protected resources.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Configure the settings for the application
+> * Add code to *authRedirect.js* to handle the authentication flow
+> * Add code to *authPopup.js* to handle the authentication flow
+
+## Prerequisites
+
+* Completion of the prerequisites and steps in [Prepare a single-page application for authentication](tutorial-single-page-app-vanillajs-prepare-app.md).
+
+## Edit the authentication configuration file
+
+The application uses the [authorization code flow with PKCE](../../develop/v2-oauth2-auth-code-flow.md) to authenticate users, which is the flow that MSAL.js 2.x implements for single-page applications. It's a browser-based flow that doesn't require a back-end server or a client secret. The flow redirects the user to the sign-in page, where the user signs in and consents to the permissions that the application requests. The purpose of *authConfig.js* is to configure the authentication flow.
+
+1. Open *public/authConfig.js* and add the following code snippet:
+
+ ```javascript
+ /**
+ * Configuration object to be passed to MSAL instance on creation.
+ * For a full list of MSAL.js configuration parameters, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
+ */
+ const msalConfig = {
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply.
+ authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // Replace "Enter_the_Tenant_Subdomain_Here" with your tenant subdomain
+ redirectUri: '/', // You must register this URI on Azure Portal/App Registration. Defaults to window.location.href e.g. http://localhost:3000/
+ navigateToLoginRequestUrl: true, // If "true", will navigate back to the original request location before processing the auth code response.
+ },
+ cache: {
+ cacheLocation: 'sessionStorage', // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO.
+ storeAuthStateInCookie: false, // set this to true if you have to support IE
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback: (level, message, containsPii) => {
+ if (containsPii) {
+ return;
+ }
+ switch (level) {
+ case msal.LogLevel.Error:
+ console.error(message);
+ return;
+ case msal.LogLevel.Info:
+ console.info(message);
+ return;
+ case msal.LogLevel.Verbose:
+ console.debug(message);
+ return;
+ case msal.LogLevel.Warning:
+ console.warn(message);
+ return;
+ }
+ },
+ },
+ },
+ };
+
+ /**
+ * An optional silentRequest object can be used to achieve silent SSO
+ * between applications by providing a "login_hint" property.
+ */
+
+ // const silentRequest = {
+ // scopes: ["openid", "profile"],
+ // loginHint: "example@domain.net"
+ // };
+
+    /**
+     * Scopes you add here will be prompted for user consent during sign-in.
+     * By default, MSAL.js adds the OIDC scopes (openid, profile) to every login request.
+     */
+    const loginRequest = {
+        scopes: [],
+    };
+
+    // exporting config object for jest
+ if (typeof exports !== 'undefined') {
+ module.exports = {
+ msalConfig: msalConfig,
+ loginRequest: loginRequest,
+ };
+ }
+ ```
+
+1. Replace the following values with the values from the Azure portal:
+ - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center.
+ - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details).
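+
+    For example, with a hypothetical application ID and the tenant subdomain `contoso` (both placeholders; your values will differ), the configured `auth` section would look similar to this sketch:
+
+    ```javascript
+    const msalConfig = {
+        auth: {
+            clientId: '11112222-bbbb-3333-cccc-4444dddd5555', // hypothetical placeholder GUID
+            authority: 'https://contoso.ciamlogin.com/', // hypothetical tenant subdomain
+            redirectUri: '/',
+            navigateToLoginRequestUrl: true,
+        },
+        // ...cache and system settings stay as shown above
+    };
+    ```
+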
+2. Save the file.
+
+## Add code to the *authRedirect.js* file
+
+A redirection file is required to handle the response from the sign-in page. It registers a promise handler that processes the tokens and account information returned from the redirect, and it handles any errors that occur during the authentication process.
+
+1. Open *public/authRedirect.js* and add the following code snippet:
+
+ ```javascript
+ // Create the main myMSALObj instance
+ // configuration parameters are located at authConfig.js
+ const myMSALObj = new msal.PublicClientApplication(msalConfig);
+
+ let username = "";
+
+ /**
+ * A promise handler needs to be registered for handling the
+ * response returned from redirect flow. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/initialization.md#redirect-apis
+ */
+ myMSALObj.handleRedirectPromise()
+ .then(handleResponse)
+ .catch((error) => {
+ console.error(error);
+ });
+
+ function selectAccount() {
+
+ /**
+ * See here for more info on account retrieval:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
+ */
+
+ const currentAccounts = myMSALObj.getAllAccounts();
+
+        if (!currentAccounts || currentAccounts.length < 1) {
+ return;
+ } else if (currentAccounts.length > 1) {
+ // Add your account choosing logic here
+ console.warn("Multiple accounts detected.");
+        } else if (currentAccounts.length === 1) {
+            username = currentAccounts[0].username;
+            welcomeUser(currentAccounts[0].username);
+            updateTable(currentAccounts[0]);
+ }
+ }
+
+ function handleResponse(response) {
+
+ /**
+ * To see the full list of response object properties, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response
+ */
+
+        if (response !== null) {
+            username = response.account.username;
+            welcomeUser(response.account.username);
+            updateTable(response.account);
+ } else {
+ selectAccount();
+ }
+ }
+
+ function signIn() {
+
+ /**
+ * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
+ */
+
+ myMSALObj.loginRedirect(loginRequest);
+ }
+
+ function signOut() {
+
+ /**
+ * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
+ */
+
+ // Choose which account to logout from by passing a username.
+ const logoutRequest = {
+ account: myMSALObj.getAccountByUsername(username),
+            postLogoutRedirectUri: '/signout', // remove this line if you would like to navigate to the index page after logout.
+
+ };
+
+ myMSALObj.logoutRedirect(logoutRequest);
+ }
+ ```
+
+1. Save the file.
+
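+After a user signs in, MSAL caches the signed-in account. A common follow-on step, shown here only as a sketch and not part of this tutorial, is to acquire a token silently for the cached account before calling an API, falling back to interaction when the silent call fails. The `getIdToken` helper below is hypothetical:
+
+```javascript
+// Hypothetical helper; assumes myMSALObj and username from authRedirect.js above.
+async function getIdToken() {
+    const account = myMSALObj.getAccountByUsername(username);
+    const tokenRequest = { scopes: ['openid', 'profile'], account: account };
+
+    try {
+        // Returns a cached token when possible, or silently renews it.
+        const result = await myMSALObj.acquireTokenSilent(tokenRequest);
+        return result.idToken;
+    } catch (error) {
+        // Fall back to an interactive redirect when silent acquisition fails.
+        myMSALObj.acquireTokenRedirect(tokenRequest);
+    }
+}
+```
+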
+## Add code to the *authPopup.js* file
+
+The application uses *authPopup.js* to handle the authentication flow when the user signs in using a pop-up window. The pop-up window is an alternative to the full-page redirect: the user signs in inside the pop-up while the main page keeps its state.
+
+1. Open *public/authPopup.js* and add the following code snippet:
+
+ ```javascript
+ // Create the main myMSALObj instance
+ // configuration parameters are located at authConfig.js
+ const myMSALObj = new msal.PublicClientApplication(msalConfig);
+
+ let username = "";
+
+ function selectAccount () {
+
+ /**
+ * See here for more info on account retrieval:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
+ */
+
+ const currentAccounts = myMSALObj.getAllAccounts();
+
+ if (!currentAccounts || currentAccounts.length < 1) {
+ return;
+ } else if (currentAccounts.length > 1) {
+ // Add your account choosing logic here
+ console.warn("Multiple accounts detected.");
+ } else if (currentAccounts.length === 1) {
+ username = currentAccounts[0].username
+ welcomeUser(currentAccounts[0].username);
+ updateTable(currentAccounts[0]);
+ }
+ }
+
+ function handleResponse(response) {
+
+ /**
+ * To see the full list of response object properties, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response
+ */
+
+ if (response !== null) {
+ username = response.account.username
+ welcomeUser(username);
+ updateTable(response.account);
+ } else {
+ selectAccount();
+ }
+ }
+
+ function signIn() {
+
+ /**
+ * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
+ */
+
+ myMSALObj.loginPopup(loginRequest)
+ .then(handleResponse)
+ .catch(error => {
+ console.error(error);
+ });
+ }
+
+ function signOut() {
+
+ /**
+ * You can pass a custom request object below. This will override the initial configuration. For more information, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request
+ */
+
+ // Choose which account to logout from by passing a username.
+ const logoutRequest = {
+ account: myMSALObj.getAccountByUsername(username),
+ mainWindowRedirectUri: '/signout'
+ };
+
+ myMSALObj.logoutPopup(logoutRequest);
+ }
+
+ selectAccount();
+ ```
+
+1. Save the file.
+
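+Both sign-in variants pass the `loginRequest` object defined in *authConfig.js*. As a sketch, and not part of the sample, you could override that object per call to request an extra scope at sign-in time; the `User.Read` Microsoft Graph scope below is only an example:
+
+```javascript
+// Hypothetical per-call override: request a Microsoft Graph scope in addition
+// to the default OIDC scopes that MSAL.js adds to every login request.
+myMSALObj.loginPopup({ scopes: ['User.Read'] })
+    .then(handleResponse)
+    .catch((error) => {
+        console.error(error);
+    });
+```
+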
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Sign in and sign out of the Vanilla JS SPA](./tutorial-single-page-app-vanillajs-sign-in-sign-out.md)
active-directory Tutorial Single Page App Vanillajs Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-prepare-app.md
+
+ Title: Tutorial - Prepare a Vanilla JavaScript single-page app (SPA) for authentication in a customer tenant
+description: Learn how to prepare a Vanilla JavaScript single-page app (SPA) for authentication and authorization with your Azure Active Directory (AD) for customers tenant.
+++++++++ Last updated : 08/17/2023
+#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure AD for customers tenant.
++
+# Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant
+
+In the [previous article](tutorial-single-page-app-vanillajs-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This article shows you how to create a Vanilla JavaScript (JS) single-page app (SPA) and configure it to sign in and sign out users with your customer tenant.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Create a Vanilla JavaScript project in Visual Studio Code
+> * Install required packages
+> * Add code to *server.js* to create a server
+
+## Prerequisites
+
+* Completion of the prerequisites and steps in [Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md).
+* Although any integrated development environment (IDE) that supports Vanilla JS applications can be used, **Visual Studio Code** is recommended for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
+* [Node.js](https://nodejs.org/en/download/).
+
+## Create a new Vanilla JS project and install dependencies
+
+1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project.
+1. Open a new terminal by selecting **Terminal** > **New Terminal**.
+1. Run the following command to create a new Vanilla JS project:
+
+ ```powershell
+ npm init -y
+ ```
+1. Create additional folders and files to achieve the following project structure:
+
+ ```
+    └── public
+        └── authConfig.js
+        └── authPopup.js
+        └── authRedirect.js
+        └── claimUtils.js
+        └── index.html
+        └── signout.html
+        └── styles.css
+        └── ui.js
+    └── server.js
+ ```
+
+## Install app dependencies
+
+1. In the **Terminal**, run the following command to install the required dependencies for the project:
+
+ ```powershell
+ npm install express morgan @azure/msal-browser
+ ```
+
+## Edit the *server.js* file
+
+**Express** is a web application framework for **Node.js**, used here to create a server that hosts the application. **Morgan** is middleware that logs HTTP requests to the console. The server file loads these dependencies and contains the routes for the application. Authentication and authorization are handled by the [Microsoft Authentication Library for JavaScript (MSAL.js)](/javascript/api/overview/).
+
+1. Add the following code snippet to the *server.js* file:
+
+ ```javascript
+ const express = require('express');
+ const morgan = require('morgan');
+ const path = require('path');
+
+ const DEFAULT_PORT = process.env.PORT || 3000;
+
+ // initialize express.
+ const app = express();
+
+ // Configure morgan module to log all requests.
+ app.use(morgan('dev'));
+
+ // serve public assets.
+ app.use(express.static('public'));
+
+ // serve msal-browser module
+ app.use(express.static(path.join(__dirname, "node_modules/@azure/msal-browser/lib")));
+
+ // set up a route for signout.html
+ app.get('/signout', (req, res) => {
+ res.sendFile(path.join(__dirname + '/public/signout.html'));
+ });
+
+ // set up a route for redirect.html
+ app.get('/redirect', (req, res) => {
+ res.sendFile(path.join(__dirname + '/public/redirect.html'));
+ });
+
+    // set up a route for index.html
+    app.get('/', (req, res) => {
+        res.sendFile(path.join(__dirname + '/public/index.html'));
+ });
+
+ app.listen(DEFAULT_PORT, () => {
+ console.log(`Sample app listening on port ${DEFAULT_PORT}!`);
+ });
+
+ ```
+
+In this code, the **app** variable is initialized with the **express** module, and **express** serves the public assets. The **msal-browser** library is served as a static asset from *node_modules* and is used to initiate the authentication flow.
+
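+As a quick check, a sketch and not part of the tutorial, you can start the server with `node server.js` and confirm that it serves the static assets. The file name *smoke-test.mjs* below is hypothetical, and it requires Node.js 18 or later for the global `fetch`:
+
+```javascript
+// smoke-test.mjs (hypothetical): verify the Express server responds.
+const home = await fetch('http://localhost:3000/');
+console.log(home.status); // expect 200 when index.html is served
+
+const msal = await fetch('http://localhost:3000/msal-browser.min.js');
+console.log(msal.status); // expect 200 when msal-browser is served from node_modules
+```
+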
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure SPA for authentication](tutorial-single-page-app-vanillajs-configure-authentication.md)
active-directory Tutorial Single Page App Vanillajs Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-prepare-tenant.md
+
+ Title: Tutorial - Prepare your customer tenant to authenticate users in a Vanilla JavaScript single-page application
+description: Learn how to configure your Azure Active Directory (AD) for customers tenant for authentication with a Vanilla JavaScript single-page app (SPA).
+++++++++ Last updated : 08/17/2023
+#Customer intent: As a developer, I want to learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
++
+# Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app
+
+This tutorial series demonstrates how to build a Vanilla JavaScript single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) (MSAL.js) to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Register a SPA in the Microsoft Entra admin center, and record its identifiers
+> * Define the platform and URLs
+> * Grant permissions to the SPA to access the Microsoft Graph API
+> * Create a sign-in and sign-out user flow in the Microsoft Entra admin center
+> * Associate your SPA with the user flow
+
+## Prerequisites
+
+- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- This Azure account must have permissions to manage applications. Any of the following Azure AD roles include the required permissions:
+
+ * Application administrator
+ * Application developer
+ * Cloud application administrator
+
+- An Azure AD for customers tenant. If you haven't already, [create one now](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). You can use an existing customer tenant if you have one.
+
+## Register the SPA and record identifiers
++
+## Add a platform redirect URL
++
+## Grant API permissions
++
+## Create a user flow
++
+## Associate the SPA with the user flow
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Prepare your Vanilla JS SPA](tutorial-single-page-app-vanillajs-prepare-app.md)
active-directory Tutorial Single Page App Vanillajs Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-sign-in-sign-out.md
+
+ Title: Tutorial - Add sign-in and sign-out to a Vanilla JavaScript single-page app (SPA) for a customer tenant
+description: Learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant.
++++++++ Last updated : 08/02/2023
+#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
++
+# Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant
+
+In the [previous article](tutorial-single-page-app-vanillajs-configure-authentication.md), you edited the popup and redirection files that handle the sign-in page response. This tutorial demonstrates how to build a responsive user interface (UI) that contains a **Sign-In** and **Sign-Out** button and run the project to test the sign-in and sign-out functionality.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Add code to the *index.html* file to create the user interface
+> * Add code to the *signout.html* file to create the sign-out page
+> * Sign in and sign out of the application
+
+## Prerequisites
+
+* Completion of the prerequisites and steps in [Create components for authentication and authorization](tutorial-single-page-app-vanillajs-configure-authentication.md).
+
+## Add code to the *index.html* file
+
+The main page of the SPA, *index.html*, is the first page that is loaded when the application is started. It's also the page that is loaded when the user selects the **Sign-Out** button.
+
+1. Open *public/index.html* and add the following code snippet:
+
+ ```html
+ <!DOCTYPE html>
+ <html lang="en">
+
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no">
+ <title>Microsoft identity platform</title>
+ <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon">
+ <link rel="stylesheet" href="./styles.css">
+
+ <!-- adding Bootstrap 5 for UI components -->
+ <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/css/bootstrap.min.css" rel="stylesheet"
+ integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi" crossorigin="anonymous">
+
+ <!-- msal.min.js can be used in the place of msal-browser.js -->
+ <script src="/msal-browser.min.js"></script>
+ </head>
+
+ <body>
+ <nav class="navbar navbar-expand-sm navbar-dark bg-primary navbarStyle">
+ <a class="navbar-brand" href="/">Microsoft identity platform</a>
+ <div class="navbar-collapse justify-content-end">
+ <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button>
+ <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button>
+ </div>
+ </nav>
+ <br>
+ <h5 id="title-div" class="card-header text-center">Vanilla JavaScript single-page application secured with MSAL.js
+ </h5>
+ <h5 id="welcome-div" class="card-header text-center d-none"></h5>
+ <br>
+ <div class="table-responsive-ms" id="table">
+ <table id="table-div" class="table table-striped d-none">
+ <thead id="table-head-div">
+ <tr>
+ <th>Claim Type</th>
+ <th>Value</th>
+ <th>Description</th>
+ </tr>
+ </thead>
+ <tbody id="table-body-div">
+ </tbody>
+ </table>
+ </div>
+ <!-- importing bootstrap.js and supporting js libraries -->
+ <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"
+ integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous">
+ </script>
+ <script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js"
+ integrity="sha384-oBqDVmMz9ATKxIep9tiCxS/Z9fNfEXiDAYTujMAeBAsjFuCZSmKbSSUnQlmh/jp3"
+ crossorigin="anonymous"></script>
+ <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/js/bootstrap.bundle.min.js"
+ integrity="sha384-OERcA2EqjJCMA+/3y+gxIOqMEjwtxJY7qPCqsdltbNJuaOe923+mo//f6V8Qbsw3"
+ crossorigin="anonymous"></script>
+
+ <!-- importing app scripts (load order is important) -->
+ <script type="text/javascript" src="./authConfig.js"></script>
+ <script type="text/javascript" src="./ui.js"></script>
+ <script type="text/javascript" src="./claimUtils.js"></script>
+ <!-- <script type="text/javascript" src="./authRedirect.js"></script> -->
+ <!-- uncomment the above line and comment the line below if you would like to use the redirect flow -->
+ <script type="text/javascript" src="./authPopup.js"></script>
+ </body>
+
+ </html>
+ ```
+
+1. Save the file.
+
+## Add code to the *claimUtils.js* file
+
+1. Open *public/claimUtils.js* and add the following code snippet:
+
+ ```javascript
+ /**
+ * Populate claims table with appropriate description
+ * @param {Object} claims ID token claims
+ * @returns claimsObject
+ */
+ const createClaimsTable = (claims) => {
+ let claimsObj = {};
+ let index = 0;
+
+ Object.keys(claims).forEach((key) => {
+ if (typeof claims[key] !== 'string' && typeof claims[key] !== 'number') return;
+ switch (key) {
+ case 'aud':
+ populateClaim(
+ key,
+ claims[key],
+ "Identifies the intended recipient of the token. In ID tokens, the audience is your app's Application ID, assigned to your app in the Azure portal.",
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'iss':
+ populateClaim(
+ key,
+ claims[key],
+ 'Identifies the issuer, or authorization server that constructs and returns the token. It also identifies the Azure AD tenant for which the user was authenticated. If the token was issued by the v2.0 endpoint, the URI will end in /v2.0. The GUID that indicates that the user is a consumer user from a Microsoft account is 9188040d-6c67-4c5b-b112-36a304b66dad.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'iat':
+ populateClaim(
+ key,
+ changeDateFormat(claims[key]),
+ 'Issued At indicates when the authentication for this token occurred.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'nbf':
+ populateClaim(
+ key,
+ changeDateFormat(claims[key]),
+ 'The nbf (not before) claim identifies the time (as UNIX timestamp) before which the JWT must not be accepted for processing.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'exp':
+ populateClaim(
+ key,
+ changeDateFormat(claims[key]),
+ "The exp (expiration time) claim identifies the expiration time (as UNIX timestamp) on or after which the JWT must not be accepted for processing. It's important to note that in certain circumstances, a resource may reject the token before this time. For example, if a change in authentication is required or a token revocation has been detected.",
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'name':
+ populateClaim(
+ key,
+ claims[key],
+                    "The name claim provides a human-readable value that identifies the subject of the token, such as the user of an application. The value isn't guaranteed to be unique, it can be changed, and it's designed to be used only for display purposes.",
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'preferred_username':
+ populateClaim(
+ key,
+ claims[key],
+ 'The primary username that represents the user. It could be an email address, phone number, or a generic username without a specified format. Its value is mutable and might change over time. Since it is mutable, this value must not be used to make authorization decisions. It can be used for username hints, however, and in human-readable UI as a username. The profile scope is required in order to receive this claim.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'nonce':
+ populateClaim(
+ key,
+ claims[key],
+ 'The nonce matches the parameter included in the original /authorize request to the IDP. If it does not match, your application should reject the token.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'oid':
+ populateClaim(
+ key,
+ claims[key],
+                    'The oid (user object ID) is the only claim that should be used to uniquely identify a user in an Azure AD tenant. The token might contain other claims that seem like unique identifiers, but they are not and should not be used as such.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'tid':
+ populateClaim(
+ key,
+ claims[key],
+ 'The tenant ID. You will use this claim to ensure that only users from the current Azure AD tenant can access this app.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'upn':
+ populateClaim(
+ key,
+ claims[key],
+                    'The user principal name (UPN) might be unique amongst the active set of users in a tenant, but it tends to get reassigned to new employees as employees leave the organization and others take their place, or it might change to reflect a personal change like marriage.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'email':
+ populateClaim(
+ key,
+ claims[key],
+                    'Email might be unique amongst the active set of users in a tenant, but it tends to get reassigned to new employees as employees leave the organization and others take their place.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'acct':
+ populateClaim(
+ key,
+ claims[key],
+                    'Available as an optional claim, it lets you know what type of user (homed, guest) is signing in. For example, for an individual’s access to their data you might not care for this claim, but you would use it along with the tenant ID (tid) to control access to, say, a company-wide dashboard to just employees (homed users) and not contractors (guest users).',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'sid':
+ populateClaim(key, claims[key], 'Session ID, used for per-session user sign-out.', index, claimsObj);
+ index++;
+ break;
+ case 'sub':
+ populateClaim(
+ key,
+ claims[key],
+ 'The sub claim is a pairwise identifier - it is unique to a particular application ID. If a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'ver':
+ populateClaim(
+ key,
+ claims[key],
+ 'Version of the token issued by the Microsoft identity platform',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'auth_time':
+ populateClaim(
+ key,
+ claims[key],
+ 'The time at which a user last entered credentials, represented in epoch time. There is no discrimination between that authentication being a fresh sign-in, a single sign-on (SSO) session, or another sign-in type.',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'at_hash':
+ populateClaim(
+ key,
+ claims[key],
+ 'An access token hash included in an ID token only when the token is issued together with an OAuth 2.0 access token. An access token hash can be used to validate the authenticity of an access token',
+ index,
+ claimsObj
+ );
+ index++;
+ break;
+ case 'uti':
+ case 'rh':
+ index++;
+ break;
+ default:
+ populateClaim(key, claims[key], '', index, claimsObj);
+ index++;
+ }
+ });
+
+ return claimsObj;
+ };
+
+ /**
+ * Populates claim, description, and value into an claimsObject
+ * @param {string} claim
+ * @param {string} value
+ * @param {string} description
+ * @param {number} index
+ * @param {Object} claimsObject
+ */
+ const populateClaim = (claim, value, description, index, claimsObject) => {
+ let claimsArray = [];
+ claimsArray[0] = claim;
+ claimsArray[1] = value;
+ claimsArray[2] = description;
+ claimsObject[index] = claimsArray;
+ };
+
+ /**
+ * Transforms Unix timestamp to date and returns a string value of that date
+ * @param {string} date Unix timestamp
+ * @returns
+ */
+ const changeDateFormat = (date) => {
+ let dateObj = new Date(date * 1000);
+ return `${date} - [${dateObj.toString()}]`;
+ };
+ ```
+
+1. Save the file.
+
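+The `createClaimsTable` helper returns an object whose numeric keys map to `[claim, value, description]` arrays, which *ui.js* later renders as table rows. A minimal sketch of that shape, with hypothetical claim values:
+
+```javascript
+// Hypothetical illustration of the createClaimsTable output shape.
+const claimsObj = createClaimsTable({ ver: '2.0', sid: 'abc123' });
+// claimsObj now resembles:
+// {
+//   0: ['ver', '2.0', 'Version of the token issued by the Microsoft identity platform'],
+//   1: ['sid', 'abc123', 'Session ID, used for per-session user sign-out.'],
+// }
+```
+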
+## Add code to the *signout.html* file
+
+1. Open *public/signout.html* and add the following code snippet:
+
+ ```html
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>Azure AD | Vanilla JavaScript SPA</title>
+ <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon">
+
+ <!-- adding Bootstrap 4 for UI components -->
+    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
+ </head>
+ <body>
+ <div class="jumbotron" style="margin: 10%">
+ <h1>Goodbye!</h1>
+ <p>You have signed out and your cache has been cleared.</p>
+ <a class="btn btn-primary" href="/" role="button">Take me back</a>
+ </div>
+ </body>
+ </html>
+ ```
+
+1. Save the file.
+
+## Add code to the *ui.js* file
+
+When authorization has been configured, the user interface can be created to allow users to sign in and sign out when the project is run. To build the user interface (UI) for the application, [Bootstrap](https://getbootstrap.com/) is used to create a responsive UI that contains a **Sign-In** and **Sign-Out** button.
+
+1. Open *public/ui.js* and add the following code snippet:
+
+ ```javascript
+ // Select DOM elements to work with
+ const signInButton = document.getElementById('signIn');
+ const signOutButton = document.getElementById('signOut');
+ const titleDiv = document.getElementById('title-div');
+ const welcomeDiv = document.getElementById('welcome-div');
+ const tableDiv = document.getElementById('table-div');
+ const tableBody = document.getElementById('table-body-div');
+
+ function welcomeUser(username) {
+ signInButton.classList.add('d-none');
+ signOutButton.classList.remove('d-none');
+ titleDiv.classList.add('d-none');
+ welcomeDiv.classList.remove('d-none');
+ welcomeDiv.innerHTML = `Welcome ${username}!`;
+ };
+
+ function updateTable(account) {
+ tableDiv.classList.remove('d-none');
+
+ const tokenClaims = createClaimsTable(account.idTokenClaims);
+
+ Object.keys(tokenClaims).forEach((key) => {
+ let row = tableBody.insertRow(0);
+ let cell1 = row.insertCell(0);
+ let cell2 = row.insertCell(1);
+ let cell3 = row.insertCell(2);
+ cell1.innerHTML = tokenClaims[key][0];
+ cell2.innerHTML = tokenClaims[key][1];
+ cell3.innerHTML = tokenClaims[key][2];
+ });
+ };
+ ```
+
+1. Save the file.
+
+## Add code to the *styles.css* file
+
+1. Open *public/styles.css* and add the following code snippet:
+
+ ```css
+ .navbarStyle {
+ padding: .5rem 1rem !important;
+ }
+
+ .table-responsive-ms {
+ max-height: 39rem !important;
+ padding-left: 10%;
+ padding-right: 10%;
+ }
+ ```
+
+1. Save the file.
+
+## Run your project and sign in
+
+Now that all the required code snippets have been added, the application can be called and tested in a web browser.
+
+1. Open a new terminal and run the following command to start your Express web server.
+ ```powershell
+ npm start
+ ```
+1. Open a new private browser window, and go to the application URL, `http://localhost:3000/`.
+1. Select **No account? Create one**, which starts the sign-up flow.
+1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application.
+1. Enter the one-time passcode sent to your email address, then enter a new password and the requested account details to complete the sign-up flow.
+
+ 1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**.
+
+1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data.
+
+ :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a Vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png":::
+
+## Sign out of the application
+
+1. To sign out of the application, select **Sign out** in the navigation bar.
+1. A window appears asking which account to sign out of.
+1. Upon successful sign out, a final window appears advising you to close all browser windows.
+
+## Next steps
+
+- [Enable self-service password reset](./how-to-enable-password-reset-customers.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/whats-new-docs.md
Title: "What's new in Azure Active Directory for customers" description: "New and updated documentation for the Azure Active Directory for customers documentation." Previously updated : 08/01/2023 Last updated : 08/17/2023
Welcome to what's new in Azure Active Directory for customers documentation. Thi
- [Add user attributes to token claims](how-to-add-attributes-to-token.md) - Added attributes to token claims: fixed steps for updating the app manifest - [Tutorial: Prepare a React single-page app (SPA) for authentication in a customer tenant](./tutorial-single-page-app-react-sign-in-prepare-app.md) - JavaScript tutorial edits, code sample updates and fixed SPA aligning content styling - [Tutorial: Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant](./tutorial-single-page-app-react-sign-in-sign-out.md) - JavaScript tutorial edits and fixed SPA aligning content styling-- [Tutorial: Handle authentication flows in a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling-- [Tutorial: Prepare a vanilla JavaScript single-page app for authentication in a customer tenant](how-to-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling-- [Tutorial: Prepare your customer tenant to authenticate a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling-- [Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant](how-to-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling
+- [Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling
+- [Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant](tutorial-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling
+- [Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling
+- [Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant](tutorial-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling
- [Tutorial: Prepare your customer tenant to authenticate users in a React single-page app (SPA)](tutorial-single-page-app-react-sign-in-prepare-tenant.md) - Fixed SPA aligning content styling - [Tutorial: Prepare an ASP.NET web app for authentication in a customer tenant](tutorial-web-app-dotnet-sign-in-prepare-app.md) - ASP.NET web app fixes - [Tutorial: Prepare your customer tenant to authenticate users in an ASP.NET web app](tutorial-web-app-dotnet-sign-in-prepare-tenant.md) - ASP.NET web app fixes
active-directory Customize Invitation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customize-invitation-api.md
description: Azure Active Directory B2B collaboration supports your cross-compan
+ Last updated 12/02/2022
-# Customer intent: As a tenant administrator, I want to customize the invitation process with the API.
+# Customer intent: As a tenant administrator, I want to customize the invitation process with the API.
# Azure Active Directory B2B collaboration API and customization
Check out the invitation API reference in [https://developer.microsoft.com/graph
- [What is Azure AD B2B collaboration?](what-is-b2b.md) - [Add and invite guest users](add-users-administrator.md) - [The elements of the B2B collaboration invitation email](invitation-email-elements.md)-
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Last updated 03/15/2023
-+
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
description: Learn how to enable Active Directory B2B external collaboration and
+ Last updated 10/24/2022
active-directory Facebook Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md
Last updated 01/20/2023
-+ -
-# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login.
+# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login.
# Add Facebook as an identity provider for External Identities
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
Last updated 01/20/2023
-+
active-directory Invite Internal Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md
description: If you have internal user accounts for partners, distributors, supp
+ Last updated 07/27/2023
- # Customer intent: As a tenant administrator, I want to know how to invite internal users to B2B collaboration.
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Last updated 05/23/2023
tags: active-directory -+
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Last updated 05/18/2023
-+ -
-# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption.
+# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption.
# Properties of an Azure Active Directory B2B collaboration user
active-directory Custom Security Attributes Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-add.md
+ Last updated 06/29/2023
active-directory Custom Security Attributes Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md
+ Last updated 06/29/2023
active-directory Data Storage Eu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/data-storage-eu.md
Previously updated : 12/13/2022 Last updated : 08/17/2023
The following sections provide information about customer data that doesn't meet
## Services permanently excluded from the EU Data Residency and EU Data Boundary
-* **Reason for customer data egress** - Some forms of communication rely on a network that is operated by global providers, such as phone calls and SMS. Device vendor-specific services such Apple Push Notifications, may be outside of Europe.
+* **Reason for customer data egress** - Some forms of communication, such as phone calls or text messaging platforms like SMS, RCS, or WhatsApp, rely on a network that is operated by global providers. Device vendor-specific services, such as push notifications from Apple or Google, may be outside of Europe.
* **Types of customer data being egressed** - User account data (phone number). * **Customer data location at rest** - In EU Data Boundary. * **Customer data processing** - Some processing may occur globally.
active-directory New Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md
+ Previously updated : 07/11/2023 Last updated : 08/15/2023 - # Customer intent: As a new or existing customer, I want to learn more about the new name for Azure Active Directory (Azure AD) and understand the impact the name change may have on other products, new or existing license(s), what I need to do, and where I can learn more about Microsoft Entra products. # New name for Azure Active Directory
-To unify the [Microsoft Entra](/entra) product family, reflect the progression to modern multicloud identity security, and simplify secure access experiences for all, we're renaming Azure Active Directory (Azure AD) to Microsoft Entra ID.
+To communicate the multicloud, multiplatform functionality of the products, alleviate confusion with Windows Server Active Directory, and unify the [Microsoft Entra](/entra) product family, we're renaming Azure Active Directory (Azure AD) to Microsoft Entra ID.
-## No action is required from you
+## No interruptions to usage or service
If you're using Azure AD today or are currently deploying Azure AD in your organizations, you can continue to use the service without interruption. All existing deployments, configurations, and integrations will continue to function as they do today without any action from you. You can continue to use familiar Azure AD capabilities that you can access through the Azure portal, Microsoft 365 admin center, and the [Microsoft Entra admin center](https://entra.microsoft.com).
-## Only the name is changing
- All features and capabilities are still available in the product. Licensing, terms, service-level agreements, product certifications, support and pricing remain the same.
+To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling.
+ Service plan display names will change on October 1, 2023. Microsoft Entra ID Free, Microsoft Entra ID P1, and Microsoft Entra ID P2 will be the new names of standalone offers, and all capabilities included in the current Azure AD plans remain the same. Microsoft Entra ID (currently known as Azure AD) will continue to be included in Microsoft 365 licensing plans, including Microsoft 365 E3 and Microsoft 365 E5. Details on pricing and what's included are available on the [pricing and free trials page](https://aka.ms/PricingEntra). :::image type="content" source="./media/new-name/azure-ad-new-name.png" alt-text="Diagram showing the new name for Azure AD and Azure AD External Identities." border="false" lightbox="./media/new-name/azure-ad-new-name-high-res.png"::: During 2023, you may see both the current Azure AD name and the new Microsoft Entra ID name in support area paths. For self-service support, look for the topic path of "Microsoft Entra" or "Azure Active Directory/Microsoft Entra ID."
-## Identity developer and devops experiences aren't impacted by the rename
+## Guide to Azure AD name changes and exceptions
-To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling.
+We encourage content creators, organizations with internal documentation for IT or identity security admins, developers of Azure AD-enabled apps, independent software vendors, or partners of Microsoft to update your experiences and use the new name by the end of 2023. We recommend changing the name in customer-facing experiences, prioritizing highly visible surfaces.
-Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts.
+### Product name
-Naming is also not changing for:
+Microsoft Entra ID is the new name for Azure AD. Please replace the product names Azure Active Directory, Azure AD, and AAD with Microsoft Entra ID.
-- [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) - Use to acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API.-- [Microsoft Graph](/graph) - Get programmatic access to organizations, user, and application data stored in Microsoft Entra ID.-- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) - Acts as an API wrapper for the Microsoft Graph APIs and helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph.-- [Windows Server Active Directory](/troubleshoot/windows-server/identity/active-directory-overview), commonly known as "Active Directory," and all related Windows Server identity services associated with Active Directory.-- [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services) nor [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/active-directory-domain-services) nor the product name "Active Directory" or any corresponding features.-- [Azure Active Directory B2C](../../active-directory-b2c/index.yml) will continue to be available as an Azure service.-- [Any deprecated or retired functionality, feature, or service](what-is-deprecated.md) of Azure AD.
+- Microsoft Entra is the name for the product family of identity and network access solutions.
+- Microsoft Entra ID is one of the products within that family.
+- Acronym usage is not encouraged, but if you must replace AAD with an acronym due to space limitations, please use ME-ID.
+
+### Logo/icon
+
+Please change the Azure AD product icon in your experiences. The Azure AD icons are now at end-of-life.
+
+| **Azure AD product icons** | **Microsoft Entra ID product icon** |
+|:--:|:--:|
+| ![Azure AD product icon](./media/new-name/azure-ad-icon-1.png) ![Alternative Azure AD product icon](./media/new-name/azure-ad-icon-2.png) | ![Microsoft Entra ID product icon](./media/new-name/microsoft-entra-id-icon.png) |
+
+You can download the new Microsoft Entra ID icon here: [Microsoft Entra architecture icons](../architecture/architecture-icons.md)
+
+### Feature names
+
+Capabilities or services formerly known as "Azure Active Directory &lt;feature name&gt;" or "Azure AD &lt;feature name&gt;" will be branded as Microsoft Entra product family features. This is done across our portfolio to avoid naming length and complexity, and because many features work across all the products. For example:
+
+- "Azure AD Conditional Access" is now "Microsoft Entra Conditional Access"
+- "Azure AD single sign-on" is now "Microsoft Entra single sign-on"
+
+See the [Glossary of updated terminology](#glossary-of-updated-terminology) later in this article for more examples.
+
+### Exceptions and clarifications to the Azure AD name change
+
+Names aren't changing for Active Directory, developer tools, Azure AD B2C, nor deprecated or retired functionality, features, or services.
+
+Don't rename the following features, functionality, or services.
+
+#### Azure AD renaming exceptions and clarifications
+
+| **Correct terminology** | **Details** |
+|-|-|
+| Active Directory <br/><br/>&#8226; Windows Server Active Directory <br/>&#8226; Active Directory Federation Services (AD FS) <br/>&#8226; Active Directory Domain Services (AD DS) <br/>&#8226; Active Directory <br/>&#8226; Any Active Directory feature(s) | Windows Server Active Directory, commonly known as Active Directory, and related features and services associated with Active Directory aren't branded with Microsoft Entra. |
+| Authentication library <br/><br/>&#8226; Azure AD Authentication Library (ADAL) <br/>&#8226; Microsoft Authentication Library (MSAL) | Azure Active Directory Authentication Library (ADAL) is deprecated. While existing apps that use ADAL will continue to work, Microsoft will no longer release security fixes on ADAL. Migrate applications to the Microsoft Authentication Library (MSAL) to avoid putting your app's security at risk. <br/><br/>[Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) - Provides security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. |
+| B2C <br/><br/>&#8226; Azure Active Directory B2C <br/>&#8226; Azure AD B2C | [Azure Active Directory B2C](/azure/active-directory-b2c) isn't being renamed. Microsoft Entra External ID for customers is Microsoft's new customer identity and access management (CIAM) solution. |
+| Graph <br/><br/>&#8226; Azure Active Directory Graph <br/>&#8226; Azure AD Graph <br/>&#8226; Microsoft Graph | Azure Active Directory (Azure AD) Graph is deprecated. Going forward, we will make no further investment in Azure AD Graph, and Azure AD Graph APIs have no SLA or maintenance commitment beyond security-related fixes. Investments in new features and functionalities will only be made in Microsoft Graph.<br/><br/>[Microsoft Graph](/graph) - Grants programmatic access to organization, user, and application data stored in Microsoft Entra ID. |
+| PowerShell <br/><br/>&#8226; Azure Active Directory PowerShell <br/>&#8226; Azure AD PowerShell <br/>&#8226; Microsoft Graph PowerShell | Azure AD PowerShell for Graph is planned for deprecation on March 30, 2024. For more info on the deprecation plans, see the deprecation update. We encourage you to migrate to Microsoft Graph PowerShell, which is the recommended module for interacting with Azure AD. <br/><br/>[Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) - Acts as an API wrapper for the Microsoft Graph APIs and helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph. |
+| Accounts <br/><br/>&#8226; Microsoft account <br/>&#8226; Work or school account | For end user sign-ins and account experiences, follow guidance for work and school accounts in [Sign in with Microsoft branding guidelines](../develop/howto-add-branding-in-apps.md). |
+| Microsoft identity platform | The Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts. |
+
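As the PowerShell row in the preceding table notes, Azure AD PowerShell for Graph is on a deprecation path. A minimal migration sketch, assuming the Microsoft.Graph module is installed from the PowerShell Gallery:

```powershell
# Connect with only the delegated scope this task needs
Connect-MgGraph -Scopes "User.Read.All"

# Equivalent of the retired Get-AzureADUser: list a few users
Get-MgUser -Top 5 | Select-Object DisplayName, UserPrincipalName
```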
+## Glossary of updated terminology
+
+Features of the identity and network access products are attributed to Microsoft Entra, the product family, not the individual product name.
+
+You're not required to use the Microsoft Entra attribution with features. Use it only when you need to clarify whether you're talking about a concept versus the feature in a specific product, or when comparing a Microsoft Entra feature with a competing feature.
+
+Only official product names are capitalized, plus Conditional Access and My * apps.
+
+| **Category** | **Old terminology** | **Correct name as of July 2023** |
+|-||-|
+| **Microsoft Entra product family** | Microsoft Azure Active Directory<br/> Azure Active Directory<br/> Azure Active Directory (Azure AD)<br/> Azure AD<br/> AAD | Microsoft Entra ID<br/> (Second use: Microsoft Entra ID is preferred, ID is acceptable in product/UI experiences, ME-ID if abbreviation is necessary) |
+| | Azure Active Directory External Identities<br/> Azure AD External Identities | Microsoft Entra External ID<br/> (Second use: External ID) |
+| | Azure Active Directory Identity Governance<br/> Azure AD Identity Governance<br/> Microsoft Entra Identity Governance | Microsoft Entra ID Governance<br/> (Second use: ID Governance) |
+| | *New* | Microsoft Entra Internet Access<br/> (Second use: Internet Access) |
+| | Cloud Knox | Microsoft Entra Permissions Management<br/> (Second use: Permissions Management) |
+| | *New* | Microsoft Entra Private Access<br/> (Second use: Private Access) |
+| | Azure Active Directory Verifiable Credentials<br/> Azure AD Verifiable Credentials | Microsoft Entra Verified ID<br/> (Second use: Verified ID) |
+| | Azure Active Directory Workload Identities<br/> Azure AD Workload Identities | Microsoft Entra Workload ID<br/> (Second use: Workload ID) |
+| | Azure Active Directory Domain Services<br/> Azure AD Domain Services | Microsoft Entra Domain Services<br/> (Second use: Domain Services) |
+| **Microsoft Entra ID SKUs** | Azure Active Directory Premium P1 | Microsoft Entra ID P1 |
+| | Azure Active Directory Premium P1 for faculty | Microsoft Entra ID P1 for faculty |
+| | Azure Active Directory Premium P1 for students | Microsoft Entra ID P1 for students |
+| | Azure Active Directory Premium P1 for government | Microsoft Entra ID P1 for government |
+| | Azure Active Directory Premium P2 | Microsoft Entra ID P2 |
+| | Azure Active Directory Premium P2 for faculty | Microsoft Entra ID P2 for faculty |
+| | Azure Active Directory Premium P2 for students | Microsoft Entra ID P2 for students |
+| | Azure Active Directory Premium P2 for government | Microsoft Entra ID P2 for government |
+| | Azure Active Directory Premium F2 | Microsoft Entra ID F2 |
+| **Microsoft Entra ID service plans** | Azure Active Directory Free | Microsoft Entra ID Free |
+| | Azure Active Directory Premium P1 | Microsoft Entra ID P1 |
+| | Azure Active Directory Premium P2 | Microsoft Entra ID P2 |
+| | Azure Active Directory for education | Microsoft Entra ID for education |
+| **Features and functionality** | Azure AD access token authentication<br/> Azure Active Directory access token authentication | Microsoft Entra access token authentication |
+| | Azure AD account<br/> Azure Active Directory account | Microsoft Entra account<br/><br/> This terminology is only used with IT admins and developers. End users authenticate with a work or school account. |
+| | Azure AD activity logs<br/> Azure AD audit log | Microsoft Entra activity logs |
+| | Azure AD admin<br/> Azure Active Directory admin | Microsoft Entra admin |
+| | Azure AD admin center<br/> Azure Active Directory admin center | Replace with Microsoft Entra admin center and update link to entra.microsoft.com |
+| | Azure AD application proxy<br/> Azure Active Directory application proxy | Microsoft Entra application proxy |
+| | Azure AD authentication<br/> authenticate with an Azure AD identity<br/> authenticate with Azure AD<br/> authentication to Azure AD | Microsoft Entra authentication<br/> authenticate with a Microsoft Entra identity<br/> authenticate with Microsoft Entra<br/> authentication to Microsoft Entra<br/><br/> This terminology is only used with administrators. End users authenticate with a work or school account. |
+| | Azure AD B2B<br/> Azure Active Directory B2B | Microsoft Entra B2B |
+| | Azure AD built-in roles<br/> Azure Active Directory built-in roles | Microsoft Entra built-in roles |
+| | Azure AD Conditional Access<br/> Azure Active Directory Conditional Access | Microsoft Entra Conditional Access<br/> (Second use: Conditional Access) |
+| | Azure AD cloud-only identities<br/> Azure Active Directory cloud-only identities | Microsoft Entra cloud-only identities |
+| | Azure AD Connect<br/> Azure Active Directory Connect | Microsoft Entra Connect |
+| | Azure AD Connect Sync<br/> Azure Active Directory Connect Sync | Microsoft Entra Connect Sync |
+| | Azure AD domain<br/> Azure Active Directory domain | Microsoft Entra domain |
+| | Azure AD Domain Services<br/> Azure Active Directory Domain Services | Microsoft Entra Domain Services |
+| | Azure AD enterprise application<br/> Azure Active Directory enterprise application | Microsoft Entra enterprise application |
+| | Azure AD federation services<br/> Azure Active Directory federation services | Active Directory Federation Services |
+| | Azure AD groups<br/> Azure Active Directory groups | Microsoft Entra groups |
+| | Azure AD hybrid identities<br/> Azure Active Directory hybrid identities | Microsoft Entra hybrid identities |
+| | Azure AD identities<br/> Azure Active Directory identities | Microsoft Entra identities |
+| | Azure AD identity protection<br/> Azure Active Directory identity protection | Microsoft Entra ID Protection |
+| | Azure AD integrated authentication<br/> Azure Active Directory integrated authentication | Microsoft Entra integrated authentication |
+| | Azure AD join<br/> Azure AD joined<br/> Azure Active Directory join<br/> Azure Active Directory joined | Microsoft Entra join<br/> Microsoft Entra joined |
+| | Azure AD login<br/> Azure Active Directory login | Microsoft Entra login |
+| | Azure AD managed identities<br/> Azure Active Directory managed identities | Microsoft Entra managed identities |
+| | Azure AD multifactor authentication (MFA)<br/> Azure Active Directory multifactor authentication (MFA) | Microsoft Entra multifactor authentication (MFA)<br/> (Second use: MFA) |
+| | Azure AD OAuth and OpenID Connect<br/> Azure Active Directory OAuth and OpenID Connect | Microsoft Entra ID OAuth and OpenID Connect |
+| | Azure AD object<br/> Azure Active Directory object | Microsoft Entra object |
+| | Azure Active Directory-only authentication<br/> Azure AD-only authentication | Microsoft Entra-only authentication |
+| | Azure AD pass-through authentication (PTA)<br/> Azure Active Directory pass-through authentication (PTA) | Microsoft Entra pass-through authentication |
+| | Azure AD password authentication<br/> Azure Active Directory password authentication | Microsoft Entra password authentication |
+| | Azure AD password hash synchronization (PHS)<br/> Azure Active Directory password hash synchronization (PHS) | Microsoft Entra password hash synchronization |
+| | Azure AD password protection<br/> Azure Active Directory password protection | Microsoft Entra password protection |
+| | Azure AD principal ID<br/> Azure Active Directory principal ID | Microsoft Entra principal ID |
+| | Azure AD Privileged Identity Management (PIM)<br/> Azure Active Directory Privileged Identity Management (PIM) | Microsoft Entra Privileged Identity Management (PIM) |
+| | Azure AD registered<br/> Azure Active Directory registered | Microsoft Entra registered |
+| | Azure AD reporting and monitoring<br/> Azure Active Directory reporting and monitoring | Microsoft Entra reporting and monitoring |
+| | Azure AD role<br/> Azure Active Directory role | Microsoft Entra role |
+| | Azure AD schema<br/> Azure Active Directory schema | Microsoft Entra schema |
+| | Azure AD Seamless single sign-on (SSO)<br/> Azure Active Directory Seamless single sign-on (SSO) | Microsoft Entra seamless single sign-on (SSO)<br/> (Second use: SSO) |
+| | Azure AD self-service password reset (SSPR)<br/> Azure Active Directory self-service password reset (SSPR) | Microsoft Entra self-service password reset (SSPR) |
+| | Azure AD service principal<br/> Azure Active Directory service principal | Microsoft Entra service principal |
+| | Azure AD Sync<br/> Azure Active Directory Sync | Microsoft Entra Sync |
+| | Azure AD tenant<br/> Azure Active Directory tenant | Microsoft Entra tenant |
+| | Create a user in Azure AD<br/> Create a user in Azure Active Directory | Create a user in Microsoft Entra |
+| | Federated with Azure AD<br/> Federated with Azure Active Directory | Federated with Microsoft Entra |
+| | Hybrid Azure AD Join<br/> Hybrid Azure AD Joined | Microsoft Entra hybrid join<br/> Microsoft Entra hybrid joined |
+| | Managed identities in Azure AD for Azure SQL | Managed identities in Microsoft Entra for Azure SQL |
+| **Acronym usage** | AAD | ME-ID<br/><br/> Note that this isn't an official abbreviation for the product but may be used in code or when absolute shortest form is required. |
## Frequently asked questions

### When is the name change happening?
-The name change will start appearing across Microsoft experiences after a 30-day notification period, which started July 11, 2023. Display names for SKUs and service plans will change on October 1, 2023. We expect most naming text string changes in Microsoft experiences to be completed by the end of 2023.
+The name change will appear across Microsoft experiences starting August 15, 2023. Display names for SKUs and service plans will change on October 1, 2023. We expect most naming text string changes in Microsoft experiences and partner experiences to be completed by the end of 2023.
### Why is the name being changed?
### Is Azure AD going away?

No, only the name Azure AD is going away. Capabilities remain the same.
### What will happen to the Azure AD capabilities and features like App Gallery or Conditional Access?
+All features and capabilities remain unchanged aside from the name. Customers can continue to use all features without any interruption.
+ The naming of features changes to Microsoft Entra. For example:

- Azure AD tenant -> Microsoft Entra tenant
- Azure AD account -> Microsoft Entra account
-- Azure AD joined -> Microsoft Entra joined
-- Azure AD Conditional Access -> Microsoft Entra Conditional Access
-All features and capabilities remain unchanged aside from the name. Customers can continue to use all features without any interruption.
+See the [Glossary of updated terminology](#glossary-of-updated-terminology) for more examples.
### Are licenses changing? Are there any changes to pricing?
There are no changes to the identity features and functionality available in Mic
In addition to the capabilities they already have, Microsoft 365 E5 customers will also get access to new identity protection capabilities like token protection, Conditional Access based on GPS-based location, and step-up authentication for the most sensitive actions. Microsoft 365 E5 includes Microsoft Entra ID P2, currently known as Azure AD Premium P2.
-### How and when are customers being notified?
-
-The name changes are publicly announced as of July 11, 2023.
-
-Banners, alerts, and message center posts will notify users of the name change. These will be displayed on the tenant overview page, portals including Azure, Microsoft 365, and Microsoft Entra admin center, and Microsoft Learn.
-
-### What if I use the Azure AD name in my content or app?
-
-We'd like your help spreading the word about the name change and implementing it in your own experiences. If you're a content creator, author of internal documentation for IT or identity security admins, developer of Azure AD-enabled apps, independent software vendor, or Microsoft partner, we hope you use the naming guidance outlined in the following section ([Azure AD name changes and exceptions](#azure-ad-name-changes-and-exceptions)) to make the name change in your content and product experiences by the end of 2023.
-
-## Azure AD name changes and exceptions
-
-We encourage content creators, organizations with internal documentation for IT or identity security admins, developers of Azure AD-enabled apps, independent software vendors, or partners of Microsoft to stay current with the new naming guidance by updating copy by the end of 2023. We recommend changing the name in customer-facing experiences, prioritizing highly visible surfaces.
-
-### Product name
-
-Replace the product name "Azure Active Directory" or "Azure AD" or "AAD" with Microsoft Entra ID.
+### What's changing for the identity developer and DevOps experience?
-*Microsoft Entra* is the correct name for the family of identity and network access solutions, one of which is *Microsoft Entra ID.*
+Identity developer and DevOps experiences aren't being renamed. To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling (see the token request sketch after the list below).
-### Logo/icon
+Many technical components either have low visibility to customers (for example, sign-in URLs), or usually aren't branded, like APIs.
-Azure AD is becoming Microsoft Entra ID, and the product icon is also being updated. Work with your Microsoft partner organization to obtain the new product icon.
-
-### Feature names
+The Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts.
-Capabilities or services formerly known as "Azure Active Directory &lt;feature name&gt;" or "Azure AD &lt;feature name&gt;" will be branded as Microsoft Entra product family features. For example:
+Naming is also not changing for:
-- "Azure AD Conditional Access" is becoming "Microsoft Entra Conditional Access"-- "Azure AD single sign-on" is becoming "Microsoft Entra single sign-on"-- "Azure AD tenant" is becoming "Microsoft Entra tenant"
+- [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview) - Acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API.
+- [Microsoft Graph](/graph) - Get programmatic access to organizational, user, and application data stored in Microsoft Entra ID.
+- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) - Acts as an API wrapper for the Microsoft Graph APIs; helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph.
+- [Windows Server Active Directory](/troubleshoot/windows-server/identity/active-directory-overview), commonly known as "Active Directory", and all related Windows Server identity services associated with Active Directory.
+- [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services), [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/active-directory-domain-services), the product name "Active Directory", and any corresponding features.
+- [Azure Active Directory B2C](/azure/active-directory-b2c) will continue to be available as an Azure service.
+- Any deprecated or retired functionality, feature, or service of Azure Active Directory.
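Because the sign-in endpoints are among the things not changing, existing token requests keep working as-is. A minimal PowerShell sketch of the OAuth 2.0 client credentials flow against the unchanged login URL; the tenant ID, app ID, and secret are placeholders:

```powershell
# Placeholders - substitute your own tenant and app registration values
$tenantId = "<tenant-id>"
$body = @{
    client_id     = "<application-client-id>"
    client_secret = "<client-secret>"
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
}

# The login.microsoftonline.com endpoint is unchanged by the rename
$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body
$response.access_token
```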
-### Exceptions to Azure AD name change
+### How and when are customers being notified?
-Products or features that are being deprecated aren't being renamed. These products or features include:
+The name changes were publicly announced on July 11, 2023.
-- Azure AD Authentication Library (ADAL), replaced by [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md)
-- Azure AD Graph, replaced by [Microsoft Graph](/graph)
-- Azure Active Directory PowerShell for Graph (Azure AD PowerShell), replaced by [Microsoft Graph PowerShell](/powershell/microsoftgraph)
+Banners, alerts, and message center posts notified users of the name change. The change was also displayed on the tenant overview page in the Azure, Microsoft 365, and Microsoft Entra admin center portals, and on Microsoft Learn.
-Names that don't have "Azure AD" also aren't changing. These products or features include Active Directory Federation Services (AD FS), Microsoft identity platform, and Windows Server Active Directory Domain Services (AD DS).
+### What if I use the Azure AD name in my content or app?
-End users shouldn't be exposed to the Azure AD or Microsoft Entra ID name. For sign-ins and account user experiences, follow guidance for work and school accounts in [Sign in with Microsoft branding guidelines](../develop/howto-add-branding-in-apps.md).
+We'd like your help spreading the word about the name change and implementing it in your own experiences. If you're a content creator, author of internal documentation for IT or identity security admins, developer of Azure AD-enabled apps, independent software vendor, or Microsoft partner, we hope you use the naming guidance outlined in the [Glossary of updated terminology](#glossary-of-updated-terminology) to make the name change in your content and product experiences by the end of 2023.
## Next steps
active-directory Scenario Azure First Sap Identity Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/scenario-azure-first-sap-identity-integration.md
This document provides advice on the **technical design and configuration** of S
| [IDS](https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/d6a8db70bdde459f92f2837349f95090.html) | SAP ID Service. An instance of IAS used by SAP to authenticate customers and partners to SAP-operated PaaS and SaaS services. |
| [IPS](https://help.sap.com/viewer/f48e822d6d484fa5ade7dda78b64d9f5/Cloud/en-US/2d2685d469a54a56b886105a06ccdae6.html) | SAP Cloud Identity Services - Identity Provisioning Service. IPS helps to synchronize identities between different stores / target systems. |
| [XSUAA](https://blogs.sap.com/2019/01/07/uaa-xsuaa-platform-uaa-cfuaa-what-is-it-all-about/) | Extended Services for Cloud Foundry User Account and Authentication. XSUAA is a multi-tenant OAuth authorization server within the SAP BTP. |
-| [CF](https://www.cloudfoundry.org/) | Cloud Foundry. Cloud Foundry is the environment on which SAP built their multi-cloud offering for BTP (AWS, Azure, GCP, Alibaba). |
+| [CF](https://www.cloudfoundry.org/) | Cloud Foundry. Cloud Foundry is the environment on which SAP built their multicloud offering for BTP (AWS, Azure, GCP, Alibaba). |
| [Fiori](https://www.sap.com/products/fiori.html) | The web-based user experience of SAP (as opposed to the desktop-based experience). |

## Overview
Regardless of where the authorization information comes from, it can then be emi
## Next Steps

- Learn more about the initial setup in [this tutorial](../saas-apps/sap-hana-cloud-platform-identity-authentication-tutorial.md)
-- Discover additional [SAP integration scenarios with Azure AD](../../sap/workloads/integration-get-started.md#azure-ad) and beyond
+- Discover additional [SAP integration scenarios with Azure AD](../../sap/workloads/integration-get-started.md#microsoft-entra-id-formerly-azure-ad) and beyond
active-directory Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-defaults.md
description: Get protected from common identity threats using Azure AD security
+ Last updated 07/31/2023
active-directory What Is Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md
Last updated 01/27/2023 --+ # What's deprecated in Azure Active Directory?
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Last updated 7/18/2023 -+
For more information on how to enable this feature, see: [Cloud Sync directory e
**Service category:** Audit
**Product capability:** Monitoring & Reporting
-This feature analyzes uploaded client-side logs, also known as diagnostic logs, from a Windows 10+ device that is having an issue(s) and suggests remediation steps to resolve the issue(s). Admins can work with end user to collect client-side logs, and then upload them to this troubleshooter in the Entra Portal. For more information, see: [Troubleshooting Windows devices in Azure AD](../devices/troubleshoot-device-windows-joined.md).
+This feature analyzes uploaded client-side logs, also known as diagnostic logs, from a Windows 10+ device that is having issues, and suggests remediation steps to resolve them. Admins can work with the end user to collect client-side logs, and then upload them to this troubleshooter in the Microsoft Entra admin center. For more information, see: [Troubleshooting Windows devices in Azure AD](../devices/troubleshoot-device-windows-joined.md).
The ability for users to create tenants from the Manage Tenant overview has been
**Service category:** My Apps
**Product capability:** End User Experiences
-We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Entra portals. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md).
+We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Microsoft Entra admin centers. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md).
Customers can now meet their complex audit and recertification requirements thro
Currently, users can leave an organization on their own, without the visibility of their IT administrators. Some organizations may want more control over this self-service process.
-With this feature, IT administrators can now allow or restrict external identities to leave an organization by Microsoft provided self-service controls via Azure Active Directory in the Microsoft Entra portal. In order to restrict users to leave an organization, customers need to include "Global privacy contact" and "Privacy statement URL" under tenant properties.
+With this feature, IT administrators can now allow or prevent external identities from leaving an organization through the Microsoft-provided self-service controls via Azure Active Directory in the Microsoft Entra admin center. In order to restrict users from leaving an organization, customers need to include "Global privacy contact" and "Privacy statement URL" under tenant properties.
A new policy API is available for the administrators to control tenant wide policy: [externalIdentitiesPolicy resource type](/graph/api/resources/externalidentitiespolicy?view=graph-rest-beta&preserve-view=true)
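A sketch of reading that tenant-wide policy with Microsoft Graph PowerShell; the endpoint is in the beta profile, so treat the shape of the response as subject to change:

```powershell
# Requires a connection with permission to read policies
Connect-MgGraph -Scopes "Policy.Read.All"

# Read the tenant-wide external identities policy (beta endpoint)
Invoke-MgGraphRequest -Method GET -Uri "beta/policies/externalIdentitiesPolicy"
```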
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
In the **All Devices** settings under the Registered column, you can now select
**Service category:** My Apps
**Product capability:** End User Experiences
-We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Entra portals. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md).
+We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Microsoft Entra admin centers. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Last updated 05/31/2023 -+
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
# Change approval and requestor information settings for an access package in entitlement management
-Each access package must have one or more access package assignment policies, before a user can be assigned access. When an access package is created in the Entra portal, the Entra portal automatically creates the first access package assignment policy for that access package. The policy determines who can request access, and who if anyone must approve access.
+Each access package must have one or more access package assignment policies before a user can be assigned access. When an access package is created in the Microsoft Entra admin center, the admin center automatically creates the first access package assignment policy for that access package. The policy determines who can request access, and who, if anyone, must approve access.
As an access package manager, you can change the approval and requestor information settings for an access package at any time by editing an existing policy or adding a new additional policy for requesting access.
active-directory Entitlement Management Custom Teams Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-custom-teams-extension.md
Prerequisite roles: Global administrator, Identity Governance administrator, or
To create a Logic App and custom extension in a catalog, follow these steps:
-1. Navigate To Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
+1. Navigate to the Microsoft Entra admin center: [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
1. In the left menu, select **Catalogs**.
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
-
+
Title: Govern access for external users in entitlement management description: Learn about the settings you can specify to govern access for external users in entitlement management.
The following diagram and steps provide an overview of how external users are gr
1. If the policy settings include an expiration date, then later when the access package assignment for the external user expires, the external user's access rights from that access package are removed.
-1. Depending on the lifecycle of external users settings, when the external user no longer has any access package assignments, the external user is blocked from signing in and the guest user account is removed from your directory.
+1. Depending on the lifecycle of external users settings, when the external user no longer has any access package assignments, the external user will be blocked from signing in, and the external user account will be removed from your directory.
## Settings for external users
To ensure people outside of your organization can request access packages and ge
### Review your Microsoft 365 group sharing settings

-- If you want to include Microsoft 365 groups in your access packages for external users, make sure the **Let users add new guests to the organization** is set to **On** to allow guest access. For more information, see [Manage guest access to Microsoft 365 Groups](/microsoft-365/admin/create-groups/manage-guest-access-in-groups?view=microsoft-365-worldwide#manage-groups-guest-access).
+- If you want to include Microsoft 365 groups in your access packages for external users, make sure the **Let users add new guests to the organization** is set to **On** to allow guest access. For more information, see [Manage guest access to Microsoft 365 Groups](/microsoft-365/admin/create-groups/manage-guest-access-in-groups#manage-groups-guest-access).
- If you want external users to be able to access the SharePoint Online site and resources associated with a Microsoft 365 group, make sure you turn on SharePoint Online external sharing. For more information, see [Turn external sharing on or off](/sharepoint/turn-external-sharing-on-or-off#change-the-organization-level-external-sharing-setting).
To ensure people outside of your organization can request access packages and ge
## Manage the lifecycle of external users
-You can select what happens when an external user, who was invited to your directory through making an access package request, no longer has any access package assignments. This can happen if the user relinquishes all their access package assignments, or their last access package assignment expires. By default, when an external user no longer has any access package assignments, they're blocked from signing in to your directory. After 30 days, their guest user account is removed from your directory.
+You can select what happens when an external user, who was invited to your directory through making an access package request, no longer has any access package assignments. This can happen if the user relinquishes all their access package assignments, or their last access package assignment expires. By default, when an external user no longer has any access package assignments, they're blocked from signing in to your directory. After 30 days, their guest user account is removed from your directory. You can also configure that an external user is neither blocked from signing in nor deleted, or that an external user isn't blocked from signing in but is still deleted (preview).
**Prerequisite role:** Global administrator, Identity Governance administrator or User administrator
You can select what happens when an external user, who was invited to your direc
1. Once an external user loses their last assignment to any access packages, if you want to block them from signing in to this directory, set the **Block external user from signing in to this directory** to **Yes**.

   > [!NOTE]
- > If a user is blocked from signing in to this directory, then the user will be unable to re-request the access package or request additional access in this directory. Do not configure blocking them from signing in if they will subsequently need to request access to other access packages.
+ > Entitlement management only blocks sign-in for external guest user accounts that were invited through entitlement management or that were added to entitlement management for lifecycle management. Also, note that a user will be blocked from signing in even if that user was added to resources in this directory that were not access package assignments. If a user is blocked from signing in to this directory, then the user will be unable to re-request the access package or request additional access in this directory. Do not configure blocking them from signing in if they will subsequently need to request access to this or other access packages.
1. Once an external user loses their last assignment to any access packages, if you want to remove their guest user account in this directory, set **Remove external user** to **Yes**.

   > [!NOTE]
- > Entitlement management only removes accounts that were invited through entitlement management. Also, note that a user will be blocked from signing in and removed from this directory even if that user was added to resources in this directory that were not access package assignments. If the guest was present in this directory prior to receiving access package assignments, they will remain. However, if the guest was invited through an access package assignment, and after being invited was also assigned to a OneDrive for Business or SharePoint Online site, they will still be removed.
+ > Entitlement management only removes external guest user accounts that were invited through entitlement management or that were added to entitlement management for lifecycle management. Also, note that a user will be removed from this directory even if that user was added to resources in this directory that were not access package assignments. If the guest was present in this directory prior to receiving access package assignments, they will remain. However, if the guest was invited through an access package assignment, and after being invited was also assigned to a OneDrive for Business or SharePoint Online site, they will still be removed.
-1. If you want to remove the guest user account in this directory, you can set the number of days before it's removed. If you want to remove the guest user account as soon as they lose their last assignment to any access packages, set **Number of days before removing external user from this directory** to **0**.
+1. If you want to remove the guest user account in this directory, you can set the number of days before it's removed. While an external user is notified when their access package expires, there is no notification when their account is removed. If you want to remove the guest user account as soon as they lose their last assignment to any access packages, set **Number of days before removing external user from this directory** to **0**.
1. Select **Save**.
active-directory Entitlement Management Ticketed Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-ticketed-provisioning.md
To add a Logic App workflow to an existing catalog, you use an ARM template for
Provide the Azure subscription, resource group details, along with the Logic App name and the Catalog ID to associate the Logic App with and select purchase. For more information on how to create a new catalog, please follow the steps in this document: [Create and manage a catalog of resources in entitlement management](entitlement-management-catalog-create.md).
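As a sketch, deploying such a template with Azure PowerShell might look like the following; the template file name and the `logicAppName`/`catalogId` parameter names are hypothetical placeholders for whatever the ARM template actually defines:

```powershell
# Assumes the Az module is installed and you're signed in (Connect-AzAccount)
New-AzResourceGroupDeployment `
    -ResourceGroupName "rg-entitlement-demo" `
    -TemplateFile ".\customExtensionLogicApp.json" `
    -TemplateParameterObject @{
        logicAppName = "la-ticketed-provisioning"  # hypothetical parameter name
        catalogId    = "<catalog-id>"              # hypothetical parameter name
    }
```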
-1. Navigate To Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
+1. Navigate to the Microsoft Entra admin center: [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
1. In the left menu, select **Catalogs**.
After registering your application, you must add a client secret by following th
To authorize the created application to call the [MS Graph resume API](/graph/api/accesspackageassignmentrequest-resume), follow these steps:
-1. Navigate to the Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
+1. Navigate to the Microsoft Entra admin center: [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement)
1. In the left menu, select **Catalogs**.
active-directory Tutorial Prepare User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-user-accounts.md
Last updated 08/02/2023 -+ # Preparing user accounts for Lifecycle workflows tutorials
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
For a detailed guide on setting the execution conditions for a workflow, see: [C
While newly created workflows are enabled by default, scheduling is an option that must be enabled manually. To verify whether the workflow is scheduled, you can view the **Scheduled** column.
-Once scheduling is enabled, the workflow is evaluated every three hours to determine whether or not it should run based on the execution conditions.
+Once scheduling is enabled, the workflow is evaluated on the interval set in your workflow settings (three hours by default) to determine whether or not it should run based on the execution conditions.
[![Workflow template schedule.](media/understanding-lifecycle-workflows/workflow-10.png)](media/understanding-lifecycle-workflows/workflow-10.png#lightbox)
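The evaluation interval itself can be read or changed through Microsoft Graph. A sketch, assuming the `LifecycleWorkflows.ReadWrite.All` permission and that the v1.0 settings endpoint applies to your tenant:

```powershell
Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"

# Change the tenant-wide evaluation interval from the 3-hour default to 1 hour
$body = @{ workflowScheduleIntervalInHours = 1 } | ConvertTo-Json
Invoke-MgGraphRequest -Method PATCH `
    -Uri "v1.0/identityGovernance/lifecycleWorkflows/settings" `
    -Body $body -ContentType "application/json"
```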
active-directory Custom Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/custom-attribute-mapping.md
-+ Last updated 01/12/2023
active-directory How To Inbound Synch Ms Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-inbound-synch-ms-graph.md
+ Last updated 01/11/2023
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-install.md
To update an existing agent to use the Group Managed Service Account created dur
>[!IMPORTANT]
> After you've installed the agent, you must configure and enable it before it will start synchronizing users. To configure a new agent, see [Create a new configuration for Azure AD Connect cloud sync](how-to-configure.md).
-## Enable password writeback in Azure AD Connect cloud sync
++
+## Enable password writeback in cloud sync
+
+You can enable password writeback in SSPR directly in the portal or through PowerShell.
+
+### Enable password writeback in the portal
+To use *password writeback* and enable the self-service password reset (SSPR) service to detect the cloud sync agent from the portal, complete the following steps:
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global Administrator account.
+ 2. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
+ 3. Check the option for **Enable password write back for synced users**.
+ 4. (optional) If Azure AD Connect provisioning agents are detected, you can additionally check the option for **Write back passwords with Azure AD Connect cloud sync**.
+ 5. Set the option **Allow users to unlock accounts without resetting their password** to *Yes*.
+ 6. When ready, select **Save**.
+
+### Enable password writeback by using PowerShell
To use *password writeback* and enable the self-service password reset (SSPR) service to detect the cloud sync agent, use the `Set-AADCloudSyncPasswordWritebackConfiguration` cmdlet and the tenant's global administrator credentials:
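A sketch of that call, assuming the provisioning agent's PowerShell module is at its default install path:

```powershell
# Module path assumes the default provisioning agent install location
Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.PowerShell.dll"

# Enable password writeback; prompts for Global Administrator credentials
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential (Get-Credential)
```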
active-directory Migrate Azure Ad Connect To Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/migrate-azure-ad-connect-to-cloud-sync.md
+ Last updated 01/17/2023
active-directory Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/reference-powershell.md
+ Last updated 01/17/2023
active-directory How To Bypassdirsyncoverrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-bypassdirsyncoverrides.md
+
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-emergency-ad-fs-certificate-rotation.md
+ Last updated 01/26/2023
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-o365-certs.md
ms.assetid: 543b7dc1-ccc9-407f-85a1-a9944c0ba1be
na+ Last updated 01/26/2023
active-directory How To Connect Fed Saml Idp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-saml-idp.md
description: This document describes using a SAML 2.0 compliant Idp for single s
-+ na
active-directory How To Connect Fed Single Adfs Multitenant Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-single-adfs-multitenant-federation.md
ms.assetid:
na+ Last updated 01/26/2023
active-directory How To Connect Install Existing Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-existing-tenant.md
description: This topic describes how to use Connect when you have an existing A
+ Last updated 01/26/2023
active-directory How To Connect Install Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-multiple-domains.md
ms.assetid: 5595fb2f-2131-4304-8a31-c52559128ea4
na+ Last updated 01/26/2023
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-prerequisites.md
ms.assetid: 91b88fda-bca6-49a8-898f-8d906a661f07
na+ Last updated 05/02/2023
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md
ms.assetid: 05f16c3e-9d23-45dc-afca-3d0fa9dbf501 + Last updated 05/18/2023
active-directory How To Connect Sync Change The Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-change-the-configuration.md
ms.assetid: 7b9df836-e8a5-4228-97da-2faec9238b31 + Last updated 01/26/2023
active-directory How To Connect Sync Feature Preferreddatalocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-feature-preferreddatalocation.md
description: Describes how to put your Microsoft 365 user resources close to the
+ Last updated 01/26/2023
active-directory How To Connect Syncservice Duplicate Attribute Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-duplicate-attribute-resiliency.md
ms.assetid: 537a92b7-7a84-4c89-88b0-9bce0eacd931
na+ Last updated 01/26/2023
active-directory How To Connect Syncservice Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-features.md
ms.assetid: 213aab20-0a61-434a-9545-c4637628da81
na+ Last updated 01/26/2023
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/migrate-from-federation-to-cloud-authentication.md
description: This article has information about moving your hybrid identity envi
+ Last updated 04/04/2023
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-accounts-permissions.md
na+ Last updated 01/19/2023
active-directory Reference Connect Adsynctools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adsynctools.md
-+ # Azure AD Connect: ADSyncTools PowerShell Reference
active-directory Reference Connect Version History Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history-archive.md
Last updated 01/19/2023
-+ # Azure AD Connect: Version release history archive
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history.md
Last updated 7/6/2022 -+
To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to
- We have enabled Auto Upgrade for tenants with custom synchronization rules. Note that deleted (not disabled) default rules will be re-created and enabled upon Auto Upgrade.
- We have added Microsoft Azure AD Connect Agent Updater service to the install. This new service will be used for future auto upgrades.
- We have removed the Synchronization Service WebService Connector Config program from the install.
+ - Default sync rule "In from AD - User Common" was updated to flow the employeeType attribute.
### Bug Fixes

- We have made improvements to accessibility.
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-connectivity.md
-+ # Troubleshoot Azure AD Connect connectivity issues
active-directory Tshoot Connect Object Not Syncing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-object-not-syncing.md
ms.assetid:
na+ Last updated 01/19/2023
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sso.md
ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7 + Last updated 01/19/2023
active-directory Tshoot Connect Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sync-errors.md
Last updated 01/19/2023 -+
active-directory Verify Sync Tool Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/verify-sync-tool-version.md
+
+ Title: 'Verify your version of cloud sync or connect sync'
+description: This article describes the steps to verify the version of the provisioning agent or connect sync.
+
+documentationcenter: ''
++
+editor: ''
++
+ na
+ Last updated : 08/17/2023
+# Verify your version of the provisioning agent or connect sync
+This article describes the steps to verify the installed version of the provisioning agent and connect sync.
+
+## Verify the provisioning agent
+To see what version of the provisioning agent you're using, use the following steps:
++
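One generic way to read the installed agent version is to query the uninstall registry keys. A sketch; the display-name match is an assumption about how the agent registers itself:

```powershell
# List installed products whose display name mentions the provisioning agent
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*" |
    Where-Object { $_.DisplayName -like "*Provisioning Agent*" } |
    Select-Object DisplayName, DisplayVersion
```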
+## Verify connect sync
+To see what version of connect sync you're using, use the following steps:
+
+### On the local server
+
+To verify that the agent is running, follow these steps:
+
+ 1. Sign in to the server with an administrator account.
+ 2. Open **Services** either by navigating to it or by going to *Start/Run/Services.msc*.
+ 3. Under **Services**, make sure that **Microsoft Azure AD Sync** is present and the status is **Running**.
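Equivalently, a quick PowerShell check; `ADSync` is the default service name for connect sync:

```powershell
# Shows the sync service and whether it's running
Get-Service -Name ADSync | Select-Object Name, DisplayName, Status
```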
++
+### Verify the connect sync version
+
+To verify the version of the agent that is running, follow these steps:
+
+1. Navigate to 'C:\Program Files\Microsoft Azure AD Connect'
+2. Right-click on **AzureADConnect.exe** and select **properties**.
+3. Select the **Details** tab and view the version number next to **Product version**.
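The same value can be read without the file dialog. A one-line sketch, assuming the default install path:

```powershell
# Read the product version straight from the executable's metadata
(Get-Item "C:\Program Files\Microsoft Azure AD Connect\AzureADConnect.exe").VersionInfo.ProductVersion
```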
+
+## Next steps
+- [Common scenarios](common-scenarios.md)
+- [Choosing the right sync tool](https://setup.microsoft.com/azure/add-or-sync-users-to-azure-ad)
+- [Steps to start](get-started.md)
+- [Prerequisites](prerequisites.md)
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
-
# Securing workload identities

Azure AD Identity Protection has historically protected users by detecting, investigating, and remediating identity-based risks. We're now extending these capabilities to workload identities to protect applications and service principals.
To make use of workload identity risk, including the new **Risky workload identi
- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal.
- One of the following administrator roles assigned
- - Global Administrator
 - Security Administrator
 - Security Operator
 - Security Reader

Users assigned the Conditional Access administrator role can create policies that use risk as a condition.
+ - Global Administrator
## Workload identity risk detections
We detect risk on workload identities across sign-in behavior and offline indica
Organizations can find workload identities that have been flagged for risk in one of two locations:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Security Reader](../roles/permissions-reference.md#security-reader).
1. Browse to **Azure Active Directory** > **Security** > **Risky workload identities**.
1. Or browse to **Azure Active Directory** > **Security** > **Risk detections**.
1. Select the **Workload identity detections** tab.
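The same data can be pulled programmatically. A sketch against the beta Microsoft Graph endpoint, assuming Workload Identities Premium licensing and the `IdentityRiskyServicePrincipal.Read.All` permission:

```powershell
Connect-MgGraph -Scopes "IdentityRiskyServicePrincipal.Read.All"

# List service principals currently flagged as risky (beta endpoint)
$risky = Invoke-MgGraphRequest -Method GET `
    -Uri "beta/identityProtection/riskyServicePrincipals" -OutputType PSObject
$risky.value | Select-Object displayName, riskLevel, riskState
```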
For improved security and resilience of your workload identities, Continuous Acc
## Investigate risky workload identities
-Identity Protection provides organizations with two reports they can use to investigate workload identity risk. These reports are the risky workload identities, and risk detections for workload identities. All reports allow for downloading of events in .CSV format for further analysis outside of the Azure portal.
+Identity Protection provides organizations with two reports they can use to investigate workload identity risk: the risky workload identities report, and the risk detections for workload identities report. All reports allow for downloading of events in .CSV format for further analysis.
Some of the key questions to answer during your investigation include:
The [Azure Active Directory security operations guide for Applications](../archi
Once you determine if the workload identity was compromised, dismiss the account's risk, or confirm the account as compromised in the Risky workload identities report. You can also select "Disable service principal" if you want to block the account from further sign-ins.

## Remediate risky workload identities
active-directory Howto Identity Protection Configure Mfa Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md
For more information on Azure AD multifactor authentication, see [What is Azure
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **MFA registration policy**.
1. Under **Assignments** > **Users**
1. Under **Include**, select **All users** or **Select individuals and groups** if limiting your rollout.
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Before organizations enable remediation policies, they may want to [investigate]
### User risk policy in Conditional Access
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
After confirming your settings using [report-only mode](../conditional-access/ho
### Sign-in risk policy in Conditional Access
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
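Where a scripted setup is preferred, a risk-based Conditional Access policy can also be created with Microsoft Graph PowerShell. A sketch, assuming the `Policy.ReadWrite.ConditionalAccess` permission; the display name is a placeholder, and the policy starts in report-only mode:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$params = @{
    displayName = "Sign-in risk - require MFA"          # placeholder name
    state       = "enabledForReportingButNotEnforced"   # report-only mode
    conditions  = @{
        signInRiskLevels = @("high", "medium")
        applications     = @{ includeApplications = @("All") }
        users            = @{ includeUsers = @("All") }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("mfa")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $params
```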
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Administrators are given two options when resetting a password for their users:
If, after investigating, you confirm that the user account isn't at risk of being compromised, you can choose to dismiss the risky user.
-To **Dismiss user risk**, search for and select **Azure AD Risky users** in the Azure portal or the Entra portal, select the affected user, and select **Dismiss user(s) risk**.
+To **Dismiss user risk**, search for and select **Azure AD Risky users** in the Azure portal or the Microsoft Entra admin center, select the affected user, and select **Dismiss user(s) risk**.
When you select **Dismiss user risk**, the user is no longer at risk, and all the risky sign-ins of this user and corresponding risk detections are dismissed as well.
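For bulk or scripted remediation, the same action is available through the Microsoft Graph risky users API. The following is a minimal Python sketch, not the documented procedure; the access token (which would need the `IdentityRiskyUser.ReadWrite.All` permission) and the user object IDs are placeholders:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"  # placeholder; acquire via your usual OAuth flow

# Equivalent of the portal's "Dismiss user(s) risk" action for one or more users.
resp = requests.post(
    f"{GRAPH}/identityProtection/riskyUsers/dismiss",
    headers={"Authorization": f"Bearer {token}"},
    json={"userIds": ["<user-object-id>"]},  # placeholder object IDs
)
resp.raise_for_status()  # Graph returns 204 No Content on success
```

As in the portal, dismissing risk also closes the corresponding risk detections for the selected users.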
active-directory Howto Identity Protection Simulate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md
Simulating the atypical travel condition is difficult because the algorithm uses
**To simulate an atypical travel risk detection, perform the following steps**: 1. Using your standard browser, navigate to [https://myapps.microsoft.com](https://myapps.microsoft.com).
-2. Enter the credentials of the account you want to generate an atypical travel risk detection for.
-3. Change your user agent. You can change user agent in Microsoft Edge from Developer Tools (F12).
-4. Change your IP address. You can change your IP address by using a VPN, a Tor add-on, or creating a new virtual machine in Azure in a different data center.
-5. Sign-in to [https://myapps.microsoft.com](https://myapps.microsoft.com) using the same credentials as before and within a few minutes after the previous sign-in.
+1. Enter the credentials of the account you want to generate an atypical travel risk detection for.
+1. Change your user agent. You can change the user agent in Microsoft Edge from Developer Tools (F12).
+1. Change your IP address. You can change your IP address by using a VPN, a Tor add-on, or creating a new virtual machine in Azure in a different data center.
+1. Sign in to [https://myapps.microsoft.com](https://myapps.microsoft.com) using the same credentials as before and within a few minutes after the previous sign-in.
The sign-in shows up in the Identity Protection dashboard within 2-4 hours.
This risk detection indicates that the application's valid credentials have been leaked. This leak can occur when someone checks in the credentials in a public code artifact on GitHub. Therefore, to simulate this detection, you need a GitHub account and can [sign up for a GitHub account](https://docs.github.com/get-started/signing-up-for-github) if you don't have one already.
-**To simulate Leaked Credentials in GitHub for Workload Identities, perform the following steps**:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Browse to **Azure Active Directory** > **App registrations**.
-3. Select **New registration** to register a new application or reuse an existing stale application.
-4. Select **Certificates & Secrets** > **New client Secret** , add a description of your client secret and set an expiration for the secret or specify a custom lifetime and select **Add**. Record the secret's value for later use for your GitHub Commit.
+### Simulate Leaked Credentials in GitHub for Workload Identities
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator).
+1. Browse to **Azure Active Directory** > **App registrations**.
+1. Select **New registration** to register a new application or reuse an existing stale application.
+1. Select **Certificates & Secrets** > **New client secret**, add a description of your client secret, set an expiration for the secret or specify a custom lifetime, and select **Add**. Record the secret's value for later use for your GitHub commit.
> [!Note] > **You cannot retrieve the secret again after you leave this page**.
+1. Get the Tenant ID and Application (client) ID on the **Overview** page.
-6. Ensure you disable the application via **Azure Active Directory** > **Enterprise Application** > **Properties** > Set **Enabled for users to sign-in** to **No**.
-7. Create a **public** GitHub Repository, add the following config and commit the change as a file with the .txt extension.
+1. Get the TenantID and Application(Client)ID in the **Overview** page.
+1. Ensure you disable the application via **Azure Active Directory** > **Enterprise Application** > **Properties** > Set **Enabled for users to sign-in** to **No**.
+1. Create a **public** GitHub repository, add the following config, and commit the change as a file with the .txt extension.
```GitHub file
"AadClientId": "XXXX-2dd4-4645-98c2-960cf76a4357",
"AadSecret": "p3n7Q~XXXX",
"AadTenantDomain": "XXXX.onmicrosoft.com",
"AadTenantId": "99d4947b-XXX-XXXX-9ace-abceab54bcd4",
```
-7. In about 8 hours, you'll be able to view a leaked credential detection under **Azure Active Directory** > **Security** > **Risk Detection** > **Workload identity detections** where the additional info will contain the URL of your GitHub commit.
+1. In about 8 hours, you'll be able to view a leaked credential detection under **Azure Active Directory** > **Security** > **Risk Detection** > **Workload identity detections** where the additional info will contain the URL of your GitHub commit.
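If you'd rather script the secret creation in step 4, Microsoft Graph exposes an `addPassword` action on application objects. A hedged sketch with placeholder token and object ID (this isn't part of the documented walkthrough):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"           # placeholder; needs Application.ReadWrite.All
app_object_id = "<app-object-id>"  # the registration's object ID, not the client ID

# Add a client secret to the app registration.
resp = requests.post(
    f"{GRAPH}/applications/{app_object_id}/addPassword",
    headers={"Authorization": f"Bearer {token}"},
    json={"passwordCredential": {"displayName": "leak-simulation"}},
)
resp.raise_for_status()
print(resp.json()["secretText"])  # shown only once; record it for the GitHub commit
```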
## Testing risk policies
This section provides you with steps for testing the user and the sign-in risk p
To test a user risk security policy, perform the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**. 1. Select **Configure user risk policy**. 1. Under **Assignments**
To test a user risk security policy, perform the following steps:
To test a sign-in risk policy, perform the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator).
1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**. 1. Select **Configure sign-in risk policy**. 1. Under **Assignments**
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md
Last updated 07/12/2023 -+ # Azure Active Directory PowerShell examples for Application Management
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Last updated 11/22/2022 -+ zone_pivot_groups: enterprise-apps-all- #customer intent: As an admin, I want to manage user assignment for an app in Azure Active Directory using PowerShell
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Last updated 03/16/2023 -+ zone_pivot_groups: home-realm-discovery- #customer intent: As and admin, I want to configure Home Realm Discovery for Azure AD authentication for federated users.
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
Last updated 3/28/2023 -+ zone_pivot_groups: enterprise-apps-all- #customer intent: As an admin, I want configure permission classifications for applications in Azure AD
active-directory Configure Risk Based Step Up Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
Last updated 11/17/2021 --+ #customer intent: As an admin, I want to configure risk-based step-up consent. # Configure risk-based step-up consent using PowerShell
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md
Last updated 09/06/2022 --+ #customer intent: As an admin, I want to configure group owner consent to apps accessing group data using Azure AD
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
Last updated 06/21/2023
zone_pivot_groups: enterprise-apps-all-+ #Customer intent: As an administrator of an Azure AD tenant, I want to delete an enterprise application.
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Last updated 2/23/2023 -+ zone_pivot_groups: enterprise-apps-all- #customer intent: As an admin, I want to disable user sign-in for an application so that no user can sign in to it in Azure Active Directory. # Disable user sign-in for an application
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md
zone_pivot_groups: enterprise-apps-all--+ #customer intent: As an admin, I want to hide an enterprise application from user's experience so that it is not listed in the user's Active directory access portals or Microsoft 365 launchers
active-directory Home Realm Discovery Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md
Last updated 01/02/2023 --+ # Home Realm Discovery for an application
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md
Last updated 06/15/2023
-+ # Configure Azure Active Directory SAML token encryption
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
zone_pivot_groups: enterprise-apps-all --+ #customer intent: As an admin, I want to review permissions granted to applications so that I can restrict suspicious or over privileged applications.- # Review permissions granted to enterprise applications
active-directory Migrate Adfs Application Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
Many organizations use Active Directory Federation Services (AD FS) to provide single sign-on to cloud applications. There are significant benefits to moving your AD FS applications to Azure AD for authentication, especially in terms of cost management, risk management, productivity, compliance, and governance. But understanding which applications are compatible with Azure AD and identifying specific migration steps can be time-consuming.
-The AD FS application activity report in the [Entra portal](https://entra.microsoft.com) lets you quickly identify which of your applications are capable of being migrated to Azure AD. It assesses all AD FS applications for compatibility with Azure AD, checks for any issues, and gives guidance on preparing individual applications for migration. With the AD FS application activity report, you can:
+The AD FS application activity report in the [Microsoft Entra admin center](https://entra.microsoft.com) lets you quickly identify which of your applications are capable of being migrated to Azure AD. It assesses all AD FS applications for compatibility with Azure AD, checks for any issues, and gives guidance on preparing individual applications for migration. With the AD FS application activity report, you can:
* **Discover AD FS applications and scope your migration.** The AD FS application activity report lists all AD FS applications in your organization that have had an active user login in the last 30 days. The report indicates an app's readiness for migration to Azure AD. The report doesn't display Microsoft-related relying parties in AD FS such as Office 365. For example, relying parties with name 'urn:federation:MicrosoftOnline'.
The AD FS application activity data is available to users who are assigned any o
## Discover AD FS applications that can be migrated
-The AD FS application activity report is available in the [Entra portal](https://entra.microsoft.com) under Azure AD **Usage & insights** reporting. The AD FS application activity report analyzes each AD FS application to determine if it can be migrated as-is, or if additional review is needed.
+The AD FS application activity report is available in the [Microsoft Entra admin center](https://entra.microsoft.com) under Azure AD **Usage & insights** reporting. The AD FS application activity report analyzes each AD FS application to determine if it can be migrated as-is, or if additional review is needed.
-1. Sign in to the [Entra portal](https://entra.microsoft.com) with an admin role that has access to AD FS application activity data (global administrator, reports reader, security reader, application administrator, or cloud application administrator).
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with an admin role that has access to AD FS application activity data (global administrator, reports reader, security reader, application administrator, or cloud application administrator).
2. Select **Azure Active Directory**, and then select **Enterprise applications**.
active-directory Migrate Adfs Apps Stages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-stages.md
Update the configuration of your production app to point to your production Azur
Your line-of-business apps are those that your organization developed or those that are a standard packaged product.
-Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Entra portal](https://entra.microsoft.com/#home).
+Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Microsoft Entra admin center](https://entra.microsoft.com/#home).
## Next steps
active-directory Migrate Adfs Represent Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-represent-security-policies.md
Explicit group authorization in AD FS:
To map this rule to Azure AD:
-1. In the [Entra portal](https://entra.microsoft.com/#home), [create a user group](../fundamentals/how-to-manage-groups.md) that corresponds to the group of users from AD FS.
+1. In the [Microsoft Entra admin center](https://entra.microsoft.com/#home), [create a user group](../fundamentals/how-to-manage-groups.md) that corresponds to the group of users from AD FS.
1. Assign app permissions to the group: :::image type="content" source="media/migrate-adfs-represent-security-policies/allow-a-group-explicitly-2.png" alt-text="Screenshot shows how to add a user assignment to the app.":::
Explicit user authorization in AD FS:
To map this rule to Azure AD:
-* In the [Entra portal](https://entra.microsoft.com/#home), add a user to the app through the Add Assignment tab of the app as shown below:
+* In the [Microsoft Entra admin center](https://entra.microsoft.com/#home), add a user to the app through the Add Assignment tab of the app as shown below:
:::image type="content" source="media/migrate-adfs-represent-security-policies/authorize-a-specific-user-2.png" alt-text="Screenshot shows My SaaS apps in Azure.":::
The following are examples of types of MFA rules in AD FS, and how you can map t
MFA rule settings in AD FS: ### Example 1: Enforce MFA based on users/groups
Emit attributes as Claims rule in AD FS:
To map the rule to Azure AD:
-1. In the [Entra portal](https://entra.microsoft.com/#home), select **Enterprise Applications** and then **Single sign-on** to view the SAML-based sign-on configuration:
+1. In the [Microsoft Entra admin center](https://entra.microsoft.com/#home), select **Enterprise Applications** and then **Single sign-on** to view the SAML-based sign-on configuration:
:::image type="content" source="media/migrate-adfs-represent-security-policies/map-emit-attributes-as-claims-rule-2.png" alt-text="Screenshot shows the Single sign-on page for your Enterprise Application.":::
In this table, we've listed some useful Permit and Except options and how they m
| From Devices with Specific Trust Level| Set this from the **Device State** control under Assignments -> Conditions| Use the **Exclude** option under Device State Condition and Include **All devices** |
| With Specific Claims in the Request| This setting can't be migrated| This setting can't be migrated |
-Here's an example of how to configure the Exclude option for trusted locations in the Entra portal:
+Here's an example of how to configure the Exclude option for trusted locations in the Microsoft Entra admin center:
:::image type="content" source="media/migrate-adfs-represent-security-policies/map-built-in-access-control-policies-3.png" alt-text="Screenshot of mapping access control policies.":::
Your existing external users can be set up in these two ways in AD FS:
As you progress with your migration, you can take advantage of the benefits that [Azure AD B2B](../external-identities/what-is-b2b.md) offers by migrating these users to use their own corporate identity when such an identity is available. This streamlines the process of signing in for those users, as they're often signed in with their own corporate sign-in. Your organization's administration is easier as well, by not having to manage accounts for external users. - **Federated external identities**: If you're currently federating with an external organization, you have a few approaches to take:
- - [Add Azure Active Directory B2B collaboration users in the Entra portal](../external-identities/add-users-administrator.md). You can proactively send B2B collaboration invitations from the Azure AD administrative portal to the partner organization for individual members to continue using the apps and assets they're used to.
+ - [Add Azure Active Directory B2B collaboration users in the Microsoft Entra admin center](../external-identities/add-users-administrator.md). You can proactively send B2B collaboration invitations from the Azure AD administrative portal to the partner organization for individual members to continue using the apps and assets they're used to.
- [Create a self-service B2B sign-up workflow](../external-identities/self-service-portal.md) that generates a request for individual users at your partner organization using the B2B invitation API. No matter how your existing external users are configured, they likely have permissions that are associated with their account, either in group membership or specific permissions. Evaluate whether these permissions need to be migrated or cleaned up.
active-directory Migrate Adfs Saml Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-saml-based-sso.md
Apps that you can move easily today include SAML 2.0 apps that use the standard
The following require more configuration steps to migrate to Azure AD: * Custom authorization or multi-factor authentication (MFA) rules in AD FS. You configure them using the [Azure AD Conditional Access](../conditional-access/overview.md) feature.
-* Apps with multiple Reply URL endpoints. You configure them in Azure AD using PowerShell or the Entra portal interface.
+* Apps with multiple Reply URL endpoints. You configure them in Azure AD using PowerShell or the Microsoft Entra admin center interface.
* WS-Federation apps such as SharePoint apps that require SAML version 1.1 tokens. You can configure them manually using PowerShell. You can also add a preintegrated generic template for SharePoint and SAML 1.1 applications from the gallery. We support the SAML 2.0 protocol. * Complex claims issuance transforms rules. For information about supported claims mappings, see: * [Claims mapping in Azure Active Directory](../develop/saml-claims-customization.md).
Migration requires assessing how the application is configured on-premises, and
The following table describes some of the most common mappings of settings from an AD FS Relying Party Trust to an Azure AD Enterprise Application: * AD FS: Find the setting in the AD FS Relying Party Trust for the app. Right-click the relying party and select Properties.
-* Azure ADΓÇöThe setting is configured within [Entra portal](https://entra.microsoft.com/#home) in each application's SSO properties.
+* Azure AD: The setting is configured within the [Microsoft Entra admin center](https://entra.microsoft.com/#home) in each application's SSO properties.
| Configuration setting| AD FS| How to configure in Azure AD| SAML Token |
| - | - | - | - |
The following table describes some of the most common mapping of settings betwee
Configure your applications to point to Azure AD versus AD FS for SSO. Here, we're focusing on SaaS apps that use the SAML protocol. However, this concept extends to custom line-of-business apps as well. > [!NOTE]
-> The configuration values for Azure AD follows the pattern where your Azure Tenant ID replaces {tenant-id} and the Application ID replaces {application-id}. You find this information in the [Entra portal](https://entra.microsoft.com/#home) under **Azure Active Directory > Properties**:
+> The configuration values for Azure AD follow the pattern where your Azure Tenant ID replaces {tenant-id} and the Application ID replaces {application-id}. You find this information in the [Microsoft Entra admin center](https://entra.microsoft.com/#home) under **Azure Active Directory > Properties**:
* Select Directory ID to see your Tenant ID. * Select Application ID to see your Application ID.
SaaS apps need to know where to send authentication requests and how to validate
| - | - | - |
| **IdP Sign-on URL** <p>Sign-on URL of the IdP from the app's perspective (where the user is redirected for sign-in).| The AD FS sign-on URL is the AD FS federation service name followed by "/adfs/ls/." <p>For example: `https://fs.contoso.com/adfs/ls/`| Replace {tenant-id} with your tenant ID. <p>For apps that use the SAML-P protocol: [https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) <p>For apps that use the WS-Federation protocol: [https://login.microsoftonline.com/{tenant-id}/wsfed](https://login.microsoftonline.com/{tenant-id}/wsfed) |
| **IdP sign-out URL**<p>Sign-out URL of the IdP from the app's perspective (where the user is redirected when they choose to sign out of the app).| The sign-out URL is either the same as the sign-on URL, or the same URL with "wa=wsignout1.0" appended. For example: `https://fs.contoso.com/adfs/ls/?wa=wsignout1.0`| Replace {tenant-id} with your tenant ID.<p>For apps that use the SAML-P protocol:<p>[https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) <p>For apps that use the WS-Federation protocol: [https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0](https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0) |
-| **Token signing certificate**<p>The IdP uses the private key of the certificate to sign issued tokens. It verifies that the token came from the same IdP that the app is configured to trust.| Find the AD FS token signing certificate in AD FS Management under **Certificates**.| Find it in the Entra portal in the application's **Single sign-on properties** under the header **SAML Signing Certificate**. There, you can download the certificate for upload to the app. <p>ΓÇÄIf the application has more than one certificate, you can find all certificates in the federation metadata XML file. |
+| **Token signing certificate**<p>The IdP uses the private key of the certificate to sign issued tokens. It verifies that the token came from the same IdP that the app is configured to trust.| Find the AD FS token signing certificate in AD FS Management under **Certificates**.| Find it in the Microsoft Entra admin center in the application's **Single sign-on properties** under the header **SAML Signing Certificate**. There, you can download the certificate for upload to the app. <p>If the application has more than one certificate, you can find all certificates in the federation metadata XML file. |
| **Identifier/ "issuer"**<p>Identifier of the IdP from the app's perspective (sometimes called the "issuer ID").<p>In the SAML token, the value appears as the Issuer element.| The identifier for AD FS is usually the federation service identifier in AD FS Management under **Service > Edit Federation Service Properties**. For example: `http://fs.contoso.com/adfs/services/trust`| Replace {tenant-id} with your tenant ID.<p>https:\//sts.windows.net/{tenant-id}/ |
| **IdP federation metadata**<p>Location of the IdP's publicly available federation metadata. (Some apps use federation metadata as an alternative to the administrator configuring URLs, identifier, and token signing certificate individually.)| Find the AD FS federation metadata URL in AD FS Management under **Service > Endpoints > Metadata > Type: Federation Metadata**. For example: `https://fs.contoso.com/FederationMetadat). |
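To make the {tenant-id} substitution in the table concrete, here's a minimal Python sketch that builds the Azure AD endpoint values from a placeholder tenant ID. The federation metadata path is an assumption based on Azure AD's well-known `federationmetadata/2007-06` location:

```python
# Placeholder tenant ID; find yours under Azure Active Directory > Properties.
tenant_id = "<tenant-id>"

# Endpoint values following the substitution pattern in the table above.
saml_sign_on = f"https://login.microsoftonline.com/{tenant_id}/saml2"
wsfed_sign_on = f"https://login.microsoftonline.com/{tenant_id}/wsfed"
issuer = f"https://sts.windows.net/{tenant_id}/"
metadata_url = (
    f"https://login.microsoftonline.com/{tenant_id}"
    "/federationmetadata/2007-06/federationmetadata.xml"
)
```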
active-directory Migrate Okta Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation.md
Last updated 05/23/2023 -+ # Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication
active-directory Migrate Okta Sync Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning.md
Last updated 05/23/2023 -+ # Tutorial: Migrate Okta sync provisioning to Azure AD Connect synchronization
active-directory Prevent Domain Hints With Home Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
Last updated 03/16/2023
zone_pivot_groups: home-realm-discovery--+ #customer intent: As an admin, I want to disable auto-acceleration to federated IDP during sign in using Home Realm Discovery policy # Disable auto-acceleration sign-in
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
Last updated 06/21/2023 -+ zone_pivot_groups: enterprise-apps-minus-portal #Customer intent: As an administrator of an Azure AD tenant, I want to restore a soft deleted enterprise application.
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
-+ Last updated 07/12/2023
active-directory How To Assign App Role Managed Identity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md
Last updated 05/12/2022 -+ # Assign a managed identity access to an application role using PowerShell
active-directory Qs Configure Powershell Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md
Last updated 05/10/2023 -+ # Configure managed identities for Azure resources on an Azure VM using PowerShell
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
na Previously updated : 6/7/2023 Last updated : 8/15/2023
One group can be an eligible member of another group, even if one of those group
If a user is an active member of Group A, and Group A is an eligible member of Group B, the user can activate their membership in Group B. This activation applies only to the user who requested it; it does not mean that the entire Group A becomes an active member of Group B.
+## Privileged Identity Management and app provisioning (Public Preview)
+
+If the group is configured for [app provisioning](../app-provisioning/index.yml), activation of group membership triggers provisioning of the group membership (and of the user account itself if it wasn't provisioned previously) to the application using the SCIM protocol.
+
+In Public Preview, this functionality triggers provisioning right after group membership is activated in PIM.
+Provisioning configuration depends on the application. Generally, we recommend having at least two groups assigned to the application. Depending on the number of roles in your application, you may choose to define additional "privileged groups":
++
+|Group|Purpose|Members|Group membership|Role assigned in the application|
+|--|--|--|--|--|
+|All users group|Ensure that all users that need access to the application are constantly provisioned to the application.|All users that need to access the application.|Active|None, or a low-privileged role|
+|Privileged group|Provide just-in-time access to a privileged role in the application.|Users that need just-in-time access to a privileged role in the application.|Eligible|Privileged role|
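To illustrate the just-in-time flow for the privileged group, the sketch below self-activates an eligible group membership through Microsoft Graph. It assumes the PIM for Groups `assignmentScheduleRequests` endpoint and request shape; the token, user, and group IDs are placeholders, so treat this as a sketch rather than the documented procedure:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"  # placeholder

# Request activation of eligible membership in the privileged group for 8 hours.
body = {
    "accessId": "member",
    "principalId": "<user-object-id>",   # placeholder
    "groupId": "<privileged-group-id>",  # placeholder
    "action": "selfActivate",
    "justification": "Just-in-time access to the app's privileged role",
    "scheduleInfo": {
        "startDateTime": "2023-08-22T09:00:00Z",
        "expiration": {"type": "afterDuration", "duration": "PT8H"},
    },
}
resp = requests.post(
    f"{GRAPH}/identityGovernance/privilegedAccess/group/assignmentScheduleRequests",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
```

Once the activation is applied, the Public Preview behavior described above provisions the membership to the application through SCIM.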
+ ## Next steps - [Bring groups into Privileged Identity Management](groups-discover-groups.md)
active-directory Pim Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md
We support all Microsoft 365 roles in the Azure AD Roles and Administrators port
> [!NOTE] > - Eligible users for the SharePoint administrator role, the Device administrator role, and any roles trying to access the Microsoft Security & Compliance Center might experience delays of up to a few hours after activating their role. We are working with those teams to fix the issues.
-> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-device-administrator-role).
+> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-azure-ad-joined-device-local-administrator-role).
## Next steps
active-directory Concept Diagnostic Settings Logs Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md
++
+ Title: Logs available for streaming to endpoints from Azure Active Directory
+description: Learn about the Azure Active Directory logs available for streaming to an endpoint for storage, analysis, or monitoring.
+++++++ Last updated : 08/09/2023+++++
+# Learn about the identity logs you can stream to an endpoint
+
+Using Diagnostic settings in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long-term retention and data insights. You select the logs you want to route, then select the endpoint.
+
+This article describes the logs that you can route to an endpoint from Azure AD Diagnostic settings.
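As a rough sketch of what that routing looks like programmatically, the following Python call creates a tenant-level diagnostic setting that sends two of the categories described below to a Log Analytics workspace. The `microsoft.aadiam` ARM endpoint and api-version, along with the token and resource IDs, are assumptions for illustration:

```python
import requests

arm_token = "<arm-access-token>"  # placeholder Azure Resource Manager token
workspace_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.OperationalInsights/workspaces/<workspace>"
)  # placeholder workspace resource ID

# Route AuditLogs and SignInLogs to the workspace.
setting = {
    "properties": {
        "workspaceId": workspace_id,
        "logs": [
            {"category": "AuditLogs", "enabled": True},
            {"category": "SignInLogs", "enabled": True},
        ],
    }
}
resp = requests.put(
    "https://management.azure.com/providers/microsoft.aadiam/"
    "diagnosticSettings/route-to-law?api-version=2017-04-01",
    headers={"Authorization": f"Bearer {arm_token}"},
    json=setting,
)
resp.raise_for_status()
```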
+
+## Prerequisites
+
+Setting up an endpoint, such as an event hub or storage account, may require different roles and licenses. To create or edit a diagnostic setting, you need a user who's a **Security Administrator** or **Global Administrator** for the Azure AD tenant.
+
+To help decide which log routing option is best for you, see [How to access activity logs](howto-access-activity-logs.md). The overall process and requirements for each endpoint type are covered in the following articles.
+
+- [Send logs to a Log Analytics workspace to integrate with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md)
+- [Archive logs to a storage account](howto-archive-logs-to-storage-account.md)
+- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md)
+- [Send to a partner solution](../../partner-solutions/overview.md)
+
+## Activity log options
+
+The following logs can be sent to an endpoint. Some logs may be in public preview but still visible in the portal.
+
+### Audit logs
+
+The `AuditLogs` report captures changes to applications, groups, users, and licenses in your Azure AD tenant. Once you've routed your audit logs, you can filter or analyze by date/time, the service that logged the event, and who made the change. For more information, see [Audit logs](concept-audit-logs.md).
+
+### Sign-in logs
+
+The `SignInLogs` send the interactive sign-in logs, which are logs generated by your users signing in. Sign-in logs are generated by users providing their username and password on an Azure AD sign-in screen or passing an MFA challenge. For more information, see [Interactive user sign-ins](concept-all-sign-ins.md#interactive-user-sign-ins).
+
+### Non-interactive sign-in logs
+
+The `NonInteractiveUserSignInLogs` are sign-ins done on behalf of a user, such as by a client app. The device or client uses a token or code to authenticate or access a resource on behalf of a user. For more information, see [Non-interactive user sign-ins](concept-all-sign-ins.md#non-interactive-user-sign-ins).
+
+### Service principal sign-in logs
+
+If you need to review sign-in activity for apps or service principals, the `ServicePrincipalSignInLogs` may be a good option. In these scenarios, certificates or client secrets are used for authentication. For more information, see [Service principal sign-ins](concept-all-sign-ins.md#service-principal-sign-ins).
+
+### Managed identity sign-in logs
+
+The `ManagedIdentitySignInLogs` provide similar insights as the service principal sign-in logs, but for managed identities, where Azure manages the secrets. For more information, see [Managed identity sign-ins](concept-all-sign-ins.md#managed-identity-for-azure-resources-sign-ins).
+
+### Provisioning logs
+
+If your organization provisions users through a third-party application such as Workday or ServiceNow, you may want to export the `ProvisioningLogs` reports. For more information, see [Provisioning logs](concept-provisioning-logs.md).
+
+### AD FS sign-in logs
+
+Sign-in activity for Active Directory Federation Services (AD FS) applications is captured in the Usage and insights reports. You can export the `ADFSSignInLogs` report to monitor sign-in activity for AD FS applications. For more information, see [AD FS sign-in logs](concept-usage-insights-report.md#ad-fs-application-activity).
+
+### Risky users
+
+The `RiskyUsers` logs identify users who are at risk based on their sign-in activity. This report is part of Azure AD Identity Protection and uses sign-in data from Azure AD. For more information, see [What is Azure AD Identity Protection?](../identity-protection/overview-identity-protection.md).
+
+### User risk events
+
+The `UserRiskEvents` logs are part of Azure AD Identity Protection. These logs capture details about risky sign-in events. For more information, see [How to investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins).
+
+### Risky service principals
+
+The `RiskyServicePrincipals` logs provide information about service principals that Azure AD Identity Protection detected as risky. Service principal risk represents the probability that an identity or account is compromised. These risks are calculated asynchronously using data and patterns from Microsoft's internal and external threat intelligence sources. These sources may include security researchers, law enforcement professionals, and security teams at Microsoft. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md).
+
+### Service principal risk events
+
+The `ServicePrincipalRiskEvents` logs provide details around the risky sign-in events for service principals. These logs may include any identified suspicious events related to the service principal accounts. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md).
+
+### Enriched Microsoft 365 audit logs
+
+The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you can enable for Microsoft Entra Internet Access. Selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access to secure access to your Microsoft 365 traffic *and* you enabled the enriched logs. For more information, see [How to use the Global Secure Access enriched Microsoft 365 logs](../../global-secure-access/how-to-view-enriched-logs.md).
+
+### Microsoft Graph activity logs
+
+The `MicrosoftGraphActivityLogs` logs are associated with a feature that is still in preview. The logs are visible in Azure AD, but selecting this option won't add new logs to your workspace unless your organization was included in the preview.
+
+### Network access traffic logs
+
+The `NetworkAccessTrafficLogs` logs are associated with Microsoft Entra Internet Access and Microsoft Entra Private Access. The logs are visible in Azure AD, but selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access and Microsoft Entra Private Access to secure access to your corporate resources. For more information, see [What is Global Secure Access?](../../global-secure-access/overview-what-is-global-secure-access.md).
+
+## Next steps
+
+- [Learn about the sign-ins logs](concept-all-sign-ins.md)
+- [Explore how to access the activity logs](howto-access-activity-logs.md)
active-directory Concept Log Monitoring Integration Options Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-log-monitoring-integration-options-considerations.md
+
+ Title: Azure Active Directory activity log integration options and considerations
+description: Introduction to the options and considerations for integrating Azure Active Directory activity logs with storage and analysis tools.
+++++++ Last updated : 08/09/2023+++
+# Azure AD activity log integrations
+
+Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long-term data retention and insights. You can archive logs for storage, route to Security Information and Event Management (SIEM) tools, and integrate logs with Azure Monitor logs.
+
+With these integrations, you can enable rich visualizations, monitoring, and alerting on the connected data. This article describes the recommended uses for each integration type or access method. Cost considerations for sending Azure AD activity logs to various endpoints are also covered.
+
+## Supported reports
+
+The following logs can be integrated with one of many endpoints:
+
+* The [**audit logs activity report**](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant.
+* With the [**sign-in activity report**](concept-sign-ins.md), you can see when users attempt to sign in to your applications or troubleshoot sign-in errors.
+* With the [**provisioning logs**](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications.
+* The [**risky users logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users) help you monitor changes in user risk level and remediation activity.
+* With the [**risk detections logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor users' risk detections and analyze trends in risk activity detected in your organization.
+
+## Integration options
+
+To help choose the right method for integrating Azure AD activity logs for storage or analysis, think about the overall task you're trying to accomplish. We've grouped the options into three main categories:
+
+- Troubleshooting
+- Long-term storage
+- Analysis and monitoring
+
+### Troubleshooting
+
+If you're performing troubleshooting tasks but you don't need to retain the logs for more than 30 days, we recommend using the Azure portal or Microsoft Graph to access activity logs. You can filter the logs for your scenario and export or download them as needed.
+
+If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, take a look at the long-term storage options.
+
+### Long-term storage
+
+If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, you can export your logs to an Azure storage account. This option is ideal if you don't plan on querying that data often.
+
+If you need to query the data that you're retaining for more than 30 days, take a look at the analysis and monitoring options.
+
+### Analysis and monitoring
+
+If your scenario requires that you retain data for more than 30 days *and* you plan on querying that data regularly, you've got a few options to integrate your data with SIEM tools for analysis and monitoring.
+
+If you have a third-party SIEM tool, we recommend setting up an Event Hubs namespace and event hub that you can stream your data through. With an event hub, you can stream logs to one of the supported SIEM tools.
+
+If you don't plan on using a third-party SIEM tool, we recommend sending your Azure AD activity logs to Azure Monitor logs. With this integration, you can query your activity logs with Log Analytics. In addition to Azure Monitor logs, Microsoft Sentinel provides near real-time security detection and threat hunting. If you decide to integrate with SIEM tools later, you can stream your Azure AD activity logs along with your other Azure data through an event hub.
+
+## Cost considerations
+
+There's a cost for sending data to a Log Analytics workspace, archiving data in a storage account, or streaming logs to an event hub. The amount of data and the cost incurred can vary significantly depending on the tenant size, the number of policies in use, and even the time of day.
+
+Because the size and cost for sending logs to an endpoint are difficult to predict, the most accurate way to determine your expected costs is to route your logs to an endpoint for a day or two. With this snapshot, you can get an accurate prediction for your expected costs. You can also get an estimate of your costs by downloading a sample of your logs and multiplying accordingly to get an estimate for one day.
+
+Other considerations for sending Azure AD logs to Azure Monitor logs are covered in the following Azure Monitor cost details articles:
+
+- [Azure Monitor logs cost calculations and options](../../azure-monitor/logs/cost-logs.md)
+- [Azure Monitor cost and usage](../../azure-monitor/usage-estimated-costs.md)
+- [Optimize costs in Azure Monitor](../../azure-monitor/best-practices-cost.md)
+
+Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md).
+
+## Estimate your costs
+
+To estimate the costs for your organization, you can estimate either the daily log size or the daily cost for integrating your logs with an endpoint.
+
+The following factors could affect costs for your organization:
+
+- Audit log events use around 2 KB of data storage
+- Sign-in log events use on average 11.5 KB of data storage
+- A tenant of about 100,000 users could incur about 1.5 million events per day
+- Events are batched into about 5-minute intervals and sent as a single message that contains all the events within that time frame
+
+### Daily log size
+
+To estimate the daily log size, gather a sample of your logs, adjust the sample to reflect your tenant size and settings, then apply that sample to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+If you haven't downloaded logs from the Azure portal, review the [How to download logs in Azure AD](howto-download-logs.md) article. Depending on the size of your organization, you may need to choose a different sample size to start your estimation. The following sample sizes are a good place to start:
+
+- 1000 records
+- For large tenants, 15 minutes of sign-ins
+- For small to medium tenants, 1 hour of sign-ins
+
+You should also consider the geographic distribution and peak hours of your users when you capture your data sample. If your organization is based in one region, it's likely that sign-ins peak around the same time. Adjust your sample size and when you capture the sample accordingly.
+
+With the data sample captured, multiply accordingly to find out how large the file would be for one day.
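As a worked example using the factors listed earlier (a sketch; your per-event sizes and volumes will vary):

```python
# Back-of-the-envelope daily volume for a ~100,000-user tenant,
# using ~11.5 KB per sign-in event and ~1.5 million events per day.
events_per_day = 1_500_000
kb_per_event = 11.5

gb_per_day = events_per_day * kb_per_event / (1024 * 1024)
print(f"~{gb_per_day:.1f} GB/day")  # about 16.5 GB/day to plug into the pricing calculator
```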
+
+### Estimate the daily cost
+
+To get an idea of how much a log integration could cost for your organization, you can enable an integration for a day or two. Use this option if your budget allows for the temporary increase.
+
+To enable a log integration, follow the steps in the [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) article. If possible, create a new resource group for the logs and endpoint you want to try out. Having a dedicated resource group makes it easy to view the cost analysis and then delete it when you're done.
+
+With the integration enabled, navigate to **Azure portal** > **Cost Management** > **Cost analysis**. There are several ways to analyze costs. This [Cost Management quickstart](../../cost-management-billing/costs/quick-acm-cost-analysis.md) should help you get started. The figures in the following screenshot are used for example purposes and are not intended to reflect actual amounts.
+
+![Screenshot of a cost analysis breakdown as a pie chart.](media/concept-activity-logs-azure-monitor/cost-analysis-breakdown.png)
+
+Make sure you're using your new resource group as the scope. Explore the daily costs and forecasts to get an idea of how much your log integration could cost.
+
+## Calculate estimated costs
+
+From the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) landing page, you can estimate the costs for various products.
+
+- [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor/)
+- [Azure storage](https://azure.microsoft.com/pricing/details/storage/blobs/)
+- [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/)
+- [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/microsoft-sentinel/)
+
+Once you have an estimate for the GB/day that will be sent to an endpoint, enter that value in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). The figures in the following screenshot are used for example purposes and are not intended to reflect actual prices.
+
+![Screenshot of the Azure pricing calculator, with 8 GB/Day used as an example.](media/concept-activity-logs-azure-monitor/azure-pricing-calculator-values.png)
+
+## Next steps
+
+* [Create a storage account](../../storage/common/storage-account-create.md)
+* [Archive activity logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md)
+* [Route activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md)
+* [Integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md)
active-directory Howto Access Activity Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md
Title: Access activity logs in Azure AD
-description: Learn how to choose the right method for accessing the activity logs in Azure AD.
+description: Learn how to choose the right method for accessing the activity logs in Azure Active Directory.
Previously updated : 07/26/2023 Last updated : 08/08/2023 --
-# How To: Access activity logs in Azure AD
+# How to access activity logs in Azure AD
-The data in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with various options to access your activity log data. As an IT administrator, you need to understand the intended uses cases for these options, so that you can select the right access method for your scenario.
+The data collected in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with several options to access your activity log data. As an IT administrator, you need to understand the intended uses cases for these options, so that you can select the right access method for your scenario.
You can access Azure AD activity logs and reports using the following methods:
Each of these methods provides you with capabilities that may align with certain
## Prerequisites
-The required roles and licenses may vary based on the report. Global Administrator can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview).
+The required roles and licenses may vary based on the report. Global Administrators can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview).
| Log / Report | Roles | Licenses |
|--|--|--|
The required roles and licenses may vary based on the report. Global Administrat
| Usage and insights | Security Reader<br>Reports Reader<br>Security Administrator | Premium P1/P2 |
| Identity Protection* | Security Administrator<br>Security Operator<br>Security Reader<br>Global Reader | Azure AD Free/Microsoft 365 Apps<br>Azure AD Premium P1/P2 |
-*The level of access and capabilities for Identity Protection varies with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements).
+*The level of access and capabilities for Identity Protection vary with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements).
Audit logs are available for features that you've licensed. To access the sign-in logs using the Microsoft Graph API, your tenant must have an Azure AD Premium license associated with it.
The SIEM tools you can integrate with your event hub can provide analysis and mo
## Access logs with Microsoft Graph API
-The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app. The Microsoft Graph API is **not** designed for pulling large amounts of activity data. Pulling large amounts of activity data using the API may lead to issues with pagination and performance.
+The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app.
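For example, a script that reads audit events should page through results by following `@odata.nextLink` rather than requesting everything at once. A minimal sketch, assuming a placeholder token with the `AuditLog.Read.All` permission:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<access-token>"  # placeholder

# Page through directory audit events 100 at a time.
url = f"{GRAPH}/auditLogs/directoryAudits?$top=100"
while url:
    page = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    page.raise_for_status()
    data = page.json()
    for event in data["value"]:
        print(event["activityDateTime"], event["activityDisplayName"])
    url = data.get("@odata.nextLink")  # None once the last page is reached
```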
### Recommended uses
We recommend manually downloading and storing your activity logs if you have bud
Use the following basic steps to archive or download your activity logs.
-### Archive activity logs to a storage account
+#### Archive activity logs to a storage account
1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. 1. Create a storage account.
active-directory Howto Archive Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-archive-logs-to-storage-account.md
+
+ Title: How to archive activity logs to a storage account
+description: Learn how to archive Azure Active Directory logs to a storage account
+++++++ Last updated : 08/09/2023+++
+# Customer intent: As an IT administrator, I want to learn how to archive Azure AD logs to an Azure storage account so I can retain them for longer than the default retention period.
++
+# How to archive Azure AD logs to an Azure storage account
+
+If you need to store Azure Active Directory (Azure AD) activity logs for longer than the [default retention period](reference-reports-data-retention.md), you can archive your logs to a storage account.
+
+## Prerequisites
+
+To use this feature, you need:
+
+* An Azure subscription with an Azure storage account. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+* A user who's a *Security Administrator* or *Global Administrator* for the Azure AD tenant.
+
+## Archive logs to an Azure storage account
++
+6. Under **Destination Details** select the **Archive to a storage account** check box.
+
+7. Select the appropriate **Subscription** and **Storage account** from the menus.
+
+ ![Diagnostics settings](media/howto-archive-logs-to-storage-account/diagnostic-settings-storage.png)
+
+8. After the categories have been selected, in the **Retention days** field, type in the number of days of retention you need for your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up.
+
+ > [!NOTE]
+ > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md).
+
+9. Select **Save** to save the setting.
+
+10. Close the window to return to the Diagnostic settings pane.
+
+## Next steps
+
+- [Learn about other ways to access activity logs](howto-access-activity-logs.md)
+- [Manually download activity logs](howto-download-logs.md)
+- [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md)
+- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md)
active-directory Howto Configure Prerequisites For Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md
# Prerequisites to access the Azure Active Directory reporting API
-The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs.
+The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs. The Microsoft Graph API is **not** designed for pulling large amounts of activity data. Pulling large amounts of activity data using the API may lead to issues with pagination and performance.
This article describes how to enable Microsoft Graph to access the Azure AD reporting APIs in the Azure portal and through PowerShell.
To get access to the reporting data through the API, you need to have one of the
- Security Administrator - Global Administrator
-In order to access the sign-in reports for a tenant, an Azure AD tenant must have associated Azure AD Premium P1 or P2 license. Alternatively if the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license requirement.
+In order to access the sign-in reports for a tenant, an Azure AD tenant must have an associated Azure AD Premium P1 or P2 license. If the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license requirement.
Registration is needed even if you're accessing the reporting API using a script. The registration gives you an **Application ID**, which is required for the authorization calls and enables your code to receive tokens. To configure your directory to access the Azure AD reporting API, you must sign in to the [Azure portal](https://portal.azure.com) in one of the required roles.
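To make the token flow concrete, here's a hedged PowerShell sketch of the OAuth 2.0 client credentials grant against the Microsoft identity platform. The tenant ID, application ID, and secret are placeholders from your app registration, and the app is assumed to have the `AuditLog.Read.All` application permission with admin consent.

```powershell
# A hedged sketch of the OAuth 2.0 client credentials flow; all IDs and the secret
# are placeholders, and the app registration is assumed to have the
# AuditLog.Read.All application permission with admin consent.
$tenantId = "<TENANT_ID>"
$body = @{
    client_id     = "<APPLICATION_ID>"
    client_secret = "<CLIENT_SECRET>"
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
}

# Request an access token from the Microsoft identity platform
$token = (Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body).access_token

# Call the reporting API for the five most recent sign-in events
Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/auditLogs/signIns?`$top=5" `
    -Headers @{ Authorization = "Bearer $token" }
```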
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md
-# How to: Download logs in Azure Active Directory
+# How to download logs in Azure Active Directory
The Azure Active Directory (Azure AD) portal gives you access to three types of activity logs:
active-directory Howto Integrate Activity Logs With Arcsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md
- Title: Integrate logs with ArcSight using Azure Monitor
-description: Learn how to integrate Azure Active Directory logs with ArcSight using Azure Monitor
------- Previously updated : 10/31/2022------
-# Integrate Azure Active Directory logs with ArcSight using Azure Monitor
-
-[Micro Focus ArcSight](https://software.microfocus.com/products/siem-security-information-event-management/overview) is a security information and event management (SIEM) solution that helps you detect and respond to security threats in your platform. You can now route Azure Active Directory (Azure AD) logs to ArcSight through Azure Monitor by using the ArcSight connector for Azure AD. This feature allows you to monitor your tenant for security compromise using ArcSight.
-
-In this article, you learn how to route Azure AD logs to ArcSight using Azure Monitor.
-
-## Prerequisites
-
-To use this feature, you need:
-* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* A configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer.
-
-Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor.
-
-## Integrate Azure AD logs with ArcSight
-
-1. First, complete the steps in the **Prerequisites** section of the configuration guide. This section includes the following steps:
- * Set user permissions in Azure, to ensure there's a user with the **owner** role to deploy and configure the connector.
- * Open ports on the server with Syslog NG Daemon SmartConnector, so it's accessible from Azure.
- * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector.
-
-2. Follow the steps in the **Deploying the Connector** section of the configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties, and run the deployment script from the extracted folder.
-
-3. Use the steps in the **Verifying the Deployment in Azure** section to make sure the connector is set up and functions correctly. Verify the following prerequisites:
- * The requisite Azure functions are created in your Azure subscription.
- * The Azure AD logs are streamed to the correct destination.
- * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps.
- * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format.
-
-4. Finally, complete the post-deployment steps in the **Post-Deployment Configurations** section of the configuration guide. This section explains how to perform another configuration if you are on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account.
-
-5. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle.
-
-## Next steps
-
-[Configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292)
active-directory Howto Integrate Activity Logs With Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md
+
+ Title: Integrate Azure Active Directory logs with Azure Monitor logs
+description: Learn how to integrate Azure Active Directory logs with Azure Monitor logs for querying and analysis.
+++++++ Last updated : 08/08/2023++++
+# Integrate Azure AD logs with Azure Monitor logs
+
+Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so your sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data.
+
+This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor.
+
+Use the integration of Azure AD activity logs and Azure Monitor to perform the following tasks:
+
+- Compare your Azure AD sign-in logs against security logs published by Microsoft Defender for Cloud.
+- Troubleshoot performance bottlenecks on your application's sign-in page by correlating application performance data from Azure Application Insights.
+- Analyze the Identity Protection risky users and risk detections logs to detect threats in your environment.
+- Identify sign-ins from applications still using the Active Directory Authentication Library (ADAL) for authentication. [Learn about the ADAL end-of-support plan.](../develop/msal-migration.md)
+
+> [!NOTE]
+> Integrating Azure Active Directory logs with Azure Monitor automatically enables the Azure Active Directory data connector within Microsoft Sentinel.
+
+## Prerequisites
+
+To use this feature, you need:
+
+* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+* An Azure AD Premium P1 or P2 tenant.
+* **Global Administrator** or **Security Administrator** access for the Azure AD tenant.
+* A **Log Analytics workspace** in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+* Permission to access data in a Log Analytics workspace. See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions.
+
+## Create a Log Analytics workspace
+
+A Log Analytics workspace allows you to collect data based on a variety of requirements, such as geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+
+Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article.
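If you prefer to script the workspace creation, the following is a minimal sketch using the Az.OperationalInsights module; the resource group, workspace name, location, and SKU are illustrative placeholders.

```powershell
# A minimal sketch using the Az.OperationalInsights module; the resource group,
# workspace name, location, and SKU are illustrative placeholders.
New-AzResourceGroup -Name "ContosoRG" -Location "eastus"
New-AzOperationalInsightsWorkspace -ResourceGroupName "ContosoRG" `
    -Name "contoso-aad-logs" -Location "eastus" -Sku "PerGB2018"
```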
+
+## Send logs to Azure Monitor
+
+Follow the steps below to send logs from Azure Active Directory to Azure Monitor logs.
++
+6. Under **Destination Details** select the **Send to Log Analytics workspace** check box.
+
+7. Select the appropriate **Subscription** and **Log Analytics workspace** from the menus.
+
+8. Select the **Save** button.
+
+ ![Screenshot of the Diagnostics settings with some destination details shown.](./media/howto-integrate-activity-logs-with-azure-monitor-logs/diagnostic-settings-log-analytics-workspace.png)
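The same diagnostic setting can also be created from a script. The sketch below uses `Invoke-AzRestMethod` and assumes Azure AD diagnostic settings are exposed under the tenant-level `microsoft.aadiam` ARM provider with the preview API version shown; the setting name and workspace resource ID are placeholders.

```powershell
# A scripted sketch of the same configuration. It assumes Azure AD diagnostic
# settings are exposed under the tenant-level microsoft.aadiam ARM provider with
# the preview API version shown; the setting name and workspace ID are placeholders.
$payload = @{
    properties = @{
        workspaceId = "/subscriptions/<SUB_ID>/resourceGroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/contoso-aad-logs"
        logs        = @(
            @{ category = "AuditLogs";  enabled = $true },
            @{ category = "SignInLogs"; enabled = $true }
        )
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT `
    -Path "/providers/microsoft.aadiam/diagnosticSettings/SendToLogAnalytics?api-version=2017-04-01-preview" `
    -Payload $payload
```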
+
+If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs.
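One way to verify the integration is to query the workspace directly. A hedged sketch using the Az.OperationalInsights module follows; the workspace GUID is a placeholder.

```powershell
# A hedged verification sketch, assuming the Az.OperationalInsights module;
# the workspace GUID is a placeholder.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<WORKSPACE_GUID>" `
    -Query "SigninLogs | where TimeGenerated > ago(1h) | summarize count() by ResultType"
$result.Results
```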
+
+## Next steps
+
+* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
+* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md)
+* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)
active-directory Howto Integrate Activity Logs With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md
- Title: Integrate Azure Active Directory logs with Azure Monitor | Microsoft Docs
-description: Learn how to integrate Azure Active Directory logs with Azure Monitor
------- Previously updated : 06/26/2023-----
-# How to integrate Azure AD logs with Azure Monitor logs
-
-Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data. Integrating Azure AD logs with Azure Monitor logs enables rich visualizations, monitoring, and alerting on the connected data.
-
-This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor Logs.
-
-## Roles and licenses
-
-To integrate Azure AD logs with Azure Monitor, you need the following roles and licenses:
-
-* **An Azure subscription:** If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
-
-* **An Azure AD Premium P1 or P2 tenant:** You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD.
-
-* **Security Administrator access for the Azure AD tenant:** This role is required to set up the Diagnostics settings.
-
-* **Permission to access data in a Log Analytics workspace:** See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions.
-
-## Integrate logs with Azure Monitor logs
-
-To send Azure AD logs to Azure Monitor Logs you must first have a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md). Then you can set up the Diagnostics settings in Azure AD to send your activity logs to that workspace.
-
-### Create a Log Analytics workspace
-
-A Log Analytics workspace allows you to collect data based on a variety of requirements, such as geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
-
-Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article.
-
-### Set up Diagnostics settings
-
-Once you have a Log Analytics workspace created, follow the steps below to send logs from Azure Active Directory to that workspace.
--
-Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as a **Security Administrator**.
-
-1. Go to **Azure Active Directory** > **Diagnostic settings**. You can also select **Export Settings** from the Audit logs or Sign-in logs.
-
-1. Select **+ Add diagnostic setting** to create a new integration or select **Edit setting** to change an existing integration.
-
-1. Enter a **Diagnostic setting name**. If you're editing an existing integration, you can't change the name.
-
-1. Any or all of the following logs can be sent to the Log Analytics workspace. Some logs may be in public preview but still visible in the portal.
- * `AuditLogs`
- * `SignInLogs`
- * `NonInteractiveUserSignInLogs`
- * `ServicePrincipalSignInLogs`
- * `ManagedIdentitySignInLogs`
- * `ProvisioningLogs`
- * `ADFSSignInLogs` Active Directory Federation Services (ADFS)
- * `RiskyServicePrincipals`
- * `RiskyUsers`
- * `ServicePrincipalRiskEvents`
- * `UserRiskEvents`
-
-1. The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview.
- * `EnrichedOffice365AuditLogs`
- * `MicrosoftGraphActivityLogs`
- * `NetworkAccessTrafficLogs`
-
-1. In the **Destination details**, select **Send to Log Analytics workspace** and choose the appropriate details from the menus that appear.
- * You can also send logs to any or all of the following destinations. Additional fields appear, depending on your selection.
- * **Archive to a storage account:** Provide the number of days you'd like to retain the data in the **Retention days** boxes that appear next to the log categories. Select the appropriate details from the menus that appear.
- * **Stream to an event hub:** Select the appropriate details from the menus that appear.
- * **Send to partner solution:** Select the appropriate details from the menus that appear.
-
-1. Select **Save** to save the setting.
-
- ![Screenshot of the Diagnostics settings with some destination details shown.](./media/howto-integrate-activity-logs-with-log-analytics/Configure.png)
-
-If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs.
-
-> [!NOTE]
-> Integrating Azure Active Directory logs with Azure Monitor will automatically enable the Azure Active Directory data connector within Microsoft Sentinel.
-
-## Next steps
-
-* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
-* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md)
-* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)
active-directory Howto Integrate Activity Logs With Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md
- Title: Integrate Splunk using Azure Monitor
-description: Learn how to integrate Azure Active Directory logs with Splunk using Azure Monitor.
------- Previously updated : 10/31/2022------
-# How to: Integrate Azure Active Directory logs with Splunk using Azure Monitor
-
-In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with Splunk by using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with Splunk.
-
-## Prerequisites
-
-To use this feature, you need:
-
-- An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-
-- The [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details).
-
-## Integrate Azure Active Directory logs
-
-1. Open your Splunk instance, and select **Data Summary**.
-
- ![The "Data Summary" button](./media/howto-integrate-activity-logs-with-splunk/DataSummary.png)
-
-2. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub**.
-
- ![The Data Summary Sourcetypes tab](./media/howto-integrate-activity-logs-with-splunk/source-eventhub.png)
-
-Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure:
-
- ![Activity logs](./media/howto-integrate-activity-logs-with-splunk/activity-logs.png)
-
-> [!NOTE]
-> If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub.
->
-
-## Next steps
-
-* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Howto Integrate Activity Logs With Sumologic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md
- Title: Stream logs to SumoLogic using Azure Monitor
-description: Learn how to integrate Azure Active Directory logs with SumoLogic using Azure Monitor.
------- Previously updated : 10/31/2022------
-# Integrate Azure Active Directory logs with SumoLogic using Azure Monitor
-
-In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with SumoLogic using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with SumoLogic.
-
-## Prerequisites
-
-To use this feature, you need:
-* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* A SumoLogic single sign-on enabled subscription.
-
-## Steps to integrate Azure AD logs with SumoLogic
-
-1. First, [stream the Azure AD logs to an Azure event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-2. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory).
-3. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
-
- ![Dashboard](./media/howto-integrate-activity-logs-with-sumologic/overview-dashboard.png)
-
-## Next steps
-
-* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
+ Last updated 05/02/2023
active-directory Howto Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-stream-logs-to-event-hub.md
+
+ Title: Stream Azure Active Directory logs to an event hub
+description: Learn how to stream Azure Active Directory logs to an event hub for SIEM tool integration.
+++++++ Last updated : 08/08/2023+++
+# How to stream activity logs to an event hub
+
+Your Azure Active Directory (Azure AD) tenant produces large amounts of data every second. Sign-in activity and logs of changes made in your tenant add up to a lot of data that can be hard to analyze. Integrating with Security Information and Event Management (SIEM) tools can help you gain insights into your environment.
+
+This article shows how you can stream your logs to an event hub, to integrate with one of several SIEM tools.
+
+## Prerequisites
+
+To stream logs to a SIEM tool, you first need to create an **Azure event hub**.
+
+Once you have an event hub that contains Azure AD activity logs, you can set up the SIEM tool integration using the **Azure AD Diagnostics Settings**.
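If you don't have an event hub yet, the following is a minimal sketch of creating one with the Az.EventHub module; all names and the location are placeholders, and the namespace must live in the same tenant as the logs.

```powershell
# A minimal sketch using the Az.EventHub module; all names and the location are
# placeholders. The namespace must be in the same tenant as the Azure AD logs.
New-AzEventHubNamespace -ResourceGroupName "ContosoRG" `
    -Name "contoso-aad-logs-ns" -Location "eastus" -SkuName "Standard"
New-AzEventHub -ResourceGroupName "ContosoRG" `
    -NamespaceName "contoso-aad-logs-ns" -Name "aad-activity-logs"
```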
+
+## Stream logs to an event hub
++
+6. Select the **Stream to an event hub** check box.
+
+7. Select the Azure subscription, Event Hubs namespace, and optional event hub where you want to route the logs.
+
+The subscription and Event Hubs namespace must both be associated with the Azure AD tenant from which you're streaming the logs.
+
+Once you have the Azure event hub ready, navigate to the SIEM tool you want to integrate with the activity logs. You'll finish the process in the SIEM tool.
+
+Splunk, SumoLogic, and ArcSight integrations are currently supported. Select a tab below to get started, and refer to the tool's own documentation for full details.
+
+# [Splunk](#tab/splunk)
+
+To use this feature, you need the [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details).
+
+### Integrate Azure AD logs with Splunk
+
+1. Open your Splunk instance and select **Data Summary**.
+
+ ![The "Data Summary" button](./media/howto-stream-logs-to-event-hub/datasummary.png)
+
+1. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub**.
+
+ ![The Data Summary Sourcetypes tab](./media/howto-stream-logs-to-event-hub/source-eventhub.png)
+
+Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure:
+
+ ![Activity logs](./media/howto-stream-logs-to-event-hub/activity-logs.png)
+
+If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub.
+
+# [SumoLogic](#tab/SumoLogic)
+
+To use this feature, you need a SumoLogic single sign-on enabled subscription.
+
+### Integrate Azure AD logs with SumoLogic
+
+1. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory).
+
+1. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
+
+ ![Dashboard](./media/howto-stream-logs-to-event-hub/overview-dashboard.png)
+
+# [ArcSight](#tab/ArcSight)
+
+To use this feature, you need a configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer.
+
+Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://software.microfocus.com/products/siem-security-information-event-management/overview). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor.
+
+### Integrate Azure AD logs with ArcSight
+
+1. Complete the steps in the **Prerequisites** section of the ArcSight configuration guide. This section includes the following steps:
+ * Set user permissions in Azure to ensure there's a user with the **owner** role to deploy and configure the connector.
+ * Open ports on the server with Syslog NG Daemon SmartConnector so it's accessible from Azure.
+ * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector.
+
+1. Follow the steps in the **Deploying the Connector** section of the ArcSight configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties, and run the deployment script from the extracted folder.
+
+1. Use the steps in the **Verifying the Deployment in Azure** section to make sure the connector is set up and functions correctly. Verify the following prerequisites:
+ * The requisite Azure functions are created in your Azure subscription.
+ * The Azure AD logs are streamed to the correct destination.
+ * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps.
+ * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format.
+
+1. Complete the post-deployment steps in the **Post-Deployment Configurations** section of the ArcSight configuration guide. This section explains how to perform another configuration if you are on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account.
+
+1. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle.
+++
+## Activity log integration options and considerations
+
+If your current SIEM isn't supported in Azure Monitor diagnostics yet, you can set up **custom tooling** by using the Event Hubs API. To learn more, see the [Getting started receiving messages from an event hub](../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md) article.
+
+**IBM QRadar** is another option for integrating with Azure AD activity logs. The DSM and Azure Event Hubs Protocol are available for download at [IBM support](https://www.ibm.com/support). For more information about integration with Azure, go to the [IBM QRadar Security Intelligence Platform 7.3.0](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/c_dsm_guide_microsoft_azure_overview.html?cp=SS42VS_7.3.0) site.
+
+Some sign-in categories contain large amounts of log data, depending on your tenant's configuration. In general, the non-interactive user sign-ins and service principal sign-ins can be 5 to 10 times larger than the interactive user sign-ins.
+
+## Next steps
+
+- [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
+- [Use Microsoft Graph to access Azure AD activity logs](quickstart-access-log-with-graph-api.md)
active-directory Howto Use Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-recommendations.md
Previously updated : 07/14/2023 Last updated : 08/10/2023
-# How to: Use Azure AD recommendations
+# How to use Azure Active Directory recommendations
The Azure Active Directory (Azure AD) recommendations feature provides you with personalized insights with actionable guidance to:
active-directory Howto Use Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-workbooks.md
+
+ Title: Azure Monitor workbooks for Azure Active Directory
+description: Learn how to use Azure Monitor workbooks for Azure Active Directory reports.
+++++++ Last updated : 08/10/2023++++
+# How to use Azure Active Directory Workbooks
+
+Workbooks are found in Azure AD and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks; however, workbooks for Azure Active Directory (Azure AD) cover only the identity management scenarios that are associated with Azure AD.
+
+When using workbooks, you can either start with an empty workbook, or use an existing template. Workbook templates enable you to quickly get started using workbooks without needing to build from scratch.
+
+- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) are a good starting point when you're just getting started with workbooks.
+- **Private templates** are helpful when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant.
+
+## Prerequisites
+
+To use Azure Workbooks for Azure AD, you need:
+
+- An Azure AD tenant with a [Premium P1 license](../fundamentals/get-started-premium.md)
+- A Log Analytics workspace *and* access to that workspace
+- The appropriate roles for Azure Monitor *and* Azure AD
+
+### Log Analytics workspace
+
+You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) *before* you can use Azure AD Workbooks. A combination of factors determines access to Log Analytics workspaces. You need the right roles for the workspace *and* the resources sending the data.
+
+For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md).
+
+### Azure Monitor roles
+
+Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access.
+
+- **View**:
+ - Monitoring Reader
+ - Log Analytics Reader
+
+- **View and modify settings**:
+ - Monitoring Contributor
+ - Log Analytics Contributor
+
+For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader).
+
+For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor).
+
+### Azure AD roles
+
+Read-only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace.
+
+- **Read**:
+ - Reports Reader
+ - Security Reader
+ - Global Reader
+
+- **Update**:
+ - Security Administrator
+
+For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md).
+
+## How to access Azure Workbooks for Azure AD
++
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**.
+ - **Workbooks**: All workbooks created in your tenant
+ - **Public Templates**: Prebuilt workbooks for common or high priority scenarios
+ - **My Templates**: Templates you've created
+1. Select a report or template from the list. Workbooks may take a few moments to populate.
+ - Search for a template by name.
+   - Select **Browse across galleries** to view templates that aren't specific to Azure AD.
+
+ ![Find the Azure Monitor workbooks in Azure AD](./media/howto-use-azure-monitor-workbooks/azure-monitor-workbooks-in-azure-ad.png)
+
+## Create a new workbook
+
+Workbooks can be created from scratch or from a template. When creating a new workbook, you can add elements as you go or use the **Advanced Editor** option to paste in the JSON representation of a workbook, copied from the [workbooks GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json).
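For illustration, a workbook can also be deployed from a script. The sketch below is illustrative only: it assumes workbooks are ARM resources of type `microsoft.insights/workbooks` whose `serializedData` property holds the workbook JSON; the API version, names, and the one-text-step sample JSON are all assumptions, not the documented procedure.

```powershell
# Illustrative only: assumes workbooks are ARM resources of type
# microsoft.insights/workbooks whose serializedData property holds the workbook
# JSON. The API version, names, and the sample workbook JSON are placeholders.
$workbookJson = '{"version":"Notebook/1.0","items":[{"type":1,"content":{"json":"## Hello from a scripted workbook"}}]}'

New-AzResource -ResourceType "microsoft.insights/workbooks" `
    -ResourceGroupName "ContosoRG" -ResourceName ([guid]::NewGuid().Guid) `
    -ApiVersion "2021-03-08" -Location "eastus" -Kind "shared" -Force `
    -Properties @{
        displayName    = "Scripted sample workbook"
        serializedData = $workbookJson
        category       = "workbook"
        sourceId       = "azure monitor"
    }
```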
+
+**To create a new workbook from scratch**:
+1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**.
+1. Select **+ New**.
+1. Select an element from the **+ Add** menu.
+
+ For more information on the available elements, see [Creating an Azure Workbook](../../azure-monitor/visualize/workbooks-create-workbook.md).
+
+ ![Screenshot of the Azure Workbooks +Add menu options.](./media/howto-use-azure-monitor-workbooks/create-new-workbook-elements.png)
+
+**To create a new workbook from a template**:
+1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**.
+1. Select a workbook template from the Gallery.
+1. Select **Edit** from the top of the page.
+ - Each element of the workbook has its own **Edit** button.
+   - For more information on editing workbook elements, see [Azure Workbooks Templates](../../azure-monitor/visualize/workbooks-templates.md).
+
+1. Select the **Edit** button for any element. Make your changes and select **Done editing**.
+ ![Screenshot of a workbook in Edit mode, with the Edit and Done Editing buttons highlighted.](./media/howto-use-azure-monitor-workbooks/edit-buttons.png)
+1. When you're done editing the workbook, select **Save As** to save your workbook with a new name.
+1. In the **Save As** window:
+ - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**.
+ - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md).
+1. Select the **Apply** button.
+
+## Next steps
+
+* [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).
+* [Create custom Azure Monitor queries using Azure PowerShell](../governance/entitlement-management-logs-and-reporting.md).
active-directory Overview Monitoring Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring-health.md
+
+ Title: What is Azure Active Directory monitoring and health?
+description: Provides a general overview of Azure Active Directory monitoring and health.
+++++++ Last updated : 08/15/2023+++++
+# What is Azure Active Directory monitoring and health?
+
+The features of Azure Active Directory (Azure AD) Monitoring and health provide a comprehensive view of identity related activity in your environment. This data enables you to:
+
+- Determine how your users utilize your apps and services.
+- Detect potential risks affecting the health of your environment.
+- Troubleshoot issues preventing your users from getting their work done.
+
+Sign-in and audit logs comprise the activity logs behind many Azure AD reports, which can be used to analyze, monitor, and troubleshoot activity in your tenant. Routing your activity logs to an analysis and monitoring solution provides greater insights into your tenant's health and security.
+
+This article describes the types of activity logs available in Azure AD, the reports that use the logs, and the monitoring services available to help you analyze the data.
+
+## Identity activity logs
+
+Activity logs help you understand the behavior of users in your organization. There are three types of activity logs in Azure AD:
+
+- [**Audit logs**](concept-audit-logs.md) include the history of every task performed in your tenant.
+
+- [**Sign-in logs**](concept-all-sign-ins.md) capture the sign-in attempts of your users and client applications.
+
+- [**Provisioning logs**](concept-provisioning-logs.md) provide information about users provisioned in your tenant through a third-party service.
+
+The activity logs can be viewed in the Azure portal or using the Microsoft Graph API. Activity logs can also be routed to various endpoints for storage or analysis. To learn about all of the options for viewing the activity logs, see [How to access activity logs](howto-access-activity-logs.md).
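As an illustration of programmatic access, the following is a hedged sketch using the Microsoft Graph PowerShell SDK; it assumes the `AuditLog.Read.All` and `Directory.Read.All` delegated permissions have been consented.

```powershell
# A hedged sketch using the Microsoft Graph PowerShell SDK; assumes the
# AuditLog.Read.All and Directory.Read.All delegated permissions are consented.
Connect-MgGraph -Scopes "AuditLog.Read.All","Directory.Read.All"

# Five most recent directory audit events and sign-ins
Get-MgAuditLogDirectoryAudit -Top 5 | Select-Object ActivityDisplayName, ActivityDateTime
Get-MgAuditLogSignIn -Top 5 | Select-Object UserPrincipalName, AppDisplayName, CreatedDateTime
```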
+
+### Audit logs
+
+Audit logs provide you with records of system activities for compliance. This data enables you to address common scenarios such as:
+
+- Someone in my tenant got access to an admin group. Who gave them access?
+- I want to know the list of users signing into a specific app because I recently onboarded the app and want to know if it's doing well.
+- I want to know how many password resets are happening in my tenant.
+
+### Sign-in logs
+
+The sign-ins logs enable you to find answers to questions such as:
+
+- What is the sign-in pattern of a user?
+- How many users have signed in over a week?
+- What's the status of these sign-ins?
+
+### Provisioning logs
+
+You can use the provisioning logs to find answers to questions like:
+
+- What groups were successfully created in ServiceNow?
+- What users were successfully removed from Adobe?
+- What users from Workday were successfully created in Active Directory?
+
+## Identity reports
+
+Reviewing the data in the Azure AD activity logs can provide helpful information for IT administrators. To streamline the process of reviewing data on key scenarios, we've created several reports on common scenarios that use the activity logs.
+
+- [Identity Protection](../identity-protection/overview-identity-protection.md) uses sign-in data to create reports on risky users and sign-in activities.
+- Activity related to your applications, such as service principal and app credential activity, is used to create reports in [Usage and insights](concept-usage-insights-report.md).
+- [Azure AD workbooks](overview-workbooks.md) provide a customizable way to view and analyze the activity logs.
+- [Monitor the status of Azure AD recommendations to improve your tenant's security.](overview-recommendations.md)
+
+## Identity monitoring and tenant health
+
+Reviewing Azure AD activity logs is the first step in maintaining and improving the health and security of your tenant. You need to analyze the data, monitor for risky scenarios, and determine where you can make improvements. Azure AD monitoring provides the necessary tools to help you make informed decisions.
+
+Monitoring Azure AD activity logs requires routing the log data to a monitoring and analysis solution. Endpoints include Azure Monitor logs, Microsoft Sentinel, or a third-party Security Information and Event Management (SIEM) tool.
+
+- [Stream logs to an event hub to integrate with third-party SIEM tools.](howto-stream-logs-to-event-hub.md)
+- [Integrate logs with Azure Monitor logs.](howto-integrate-activity-logs-with-log-analytics.md)
+- [Analyze logs with Azure Monitor logs and Log Analytics.](howto-analyze-activity-logs-log-analytics.md)
++
+For an overview of how to access, store, and analyze activity logs, see [How to access activity logs](howto-access-activity-logs.md).
++
+## Next steps
+
+- [Learn about the sign-ins logs](concept-all-sign-ins.md)
+- [Learn about the audit logs](concept-audit-logs.md)
+- [Use Microsoft Graph to access activity logs](quickstart-access-log-with-graph-api.md)
+- [Integrate activity logs with SIEM tools](howto-stream-logs-to-event-hub.md)
active-directory Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring.md
-- Title: What is Azure Active Directory monitoring?
-description: Provides a general overview of Azure Active Directory monitoring.
------- Previously updated : 11/01/2022---
-# Customer intent: As an Azure AD administrator, I want to understand what monitoring solutions are available for Azure AD activity data and how they can help me manage my tenant.
---
-# What is Azure Active Directory monitoring?
-
-With Azure Active Directory (Azure AD) monitoring, you can now route your Azure AD activity logs to different endpoints. You can then either retain the data for long-term use or integrate it with third-party Security Information and Event Management (SIEM) tools to gain insights into your environment.
-
-Currently, you can route the logs to:
-- An Azure storage account.
-- An Azure event hub, so you can integrate with your Splunk and Sumologic instances.
-- Azure Log Analytics workspace, wherein you can analyze the data, create dashboards and alert on specific events.
-**Prerequisite role**: Global Administrator
-
-> [!VIDEO https://www.youtube.com/embed/syT-9KNfug8]
--
-## Licensing and prerequisites for Azure AD reporting and monitoring
-
-You'll need an Azure AD premium license to access the Azure AD sign-in logs.
-
-For detailed feature and licensing information, see the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
-
-To deploy Azure AD monitoring and reporting you'll need a user who is a global administrator or security administrator for the Azure AD tenant.
-
-Depending on the final destination of your log data, you'll need one of the following:
-
-* An Azure storage account that you have ListKeys permissions for. We recommend that you use a general storage account and not a Blob storage account. For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage).
-
-* An Azure Event Hubs namespace to integrate with third-party SIEM solutions.
-
-* An Azure Log Analytics workspace to send logs to Azure Monitor logs.
-
-## Diagnostic settings configuration
-
-To configure monitoring settings for Azure AD activity logs, first sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. From here, you can access the diagnostic settings configuration page in two ways:
-
-* Select **Diagnostic settings** from the **Monitoring** section.
-
- ![Diagnostics settings](./media/overview-monitoring/diagnostic-settings.png)
-
-* Select **Audit Logs** or **Sign-ins**, then select **Export settings**.
-
- ![Export settings](./media/overview-monitoring/export-settings.png)
--
-## Route logs to storage account
-
-By routing logs to an Azure storage account, you can retain them for longer than the default retention period outlined in our [retention policies](reference-reports-data-retention.md). Learn how to [route data to your storage account](quickstart-azure-monitor-route-logs-to-storage-account.md).
-
-## Stream logs to event hub
-
-Routing logs to an Azure event hub allows you to integrate with third-party SIEM tools like Sumologic and Splunk. This integration allows you to combine Azure AD activity log data with other data managed by your SIEM, to provide richer insights into your environment. Learn how to [stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md).
-
-## Send logs to Azure Monitor logs
-
-[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) is a solution that consolidates monitoring data from different sources and provides a query language and analytics engine that gives you insights into the operation of your applications and resources. By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor and alert on collected data. Learn how to [send data to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md).
-
-You can also install the pre-built views for Azure AD activity logs to monitor common scenarios involving sign-ins and audit events. Learn how to [install and use log analytics views for Azure AD activity logs](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md).
-
-## Next steps
-
-* [Activity logs in Azure Monitor](concept-activity-logs-azure-monitor.md)
-* [Stream logs to event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md)
-* [Send logs to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md)
active-directory Overview Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md
-- Title: What are Azure Active Directory reports?
-description: Provides a general overview of Azure Active Directory reports.
------- Previously updated : 02/03/2023---
-# Customer intent: As an Azure AD administrator, I want to understand what Azure AD reports are available and how I can use them to gain insights into my environment.
---
-# What are Azure Active Directory reports?
-
-Azure Active Directory (Azure AD) reports provide a comprehensive view of activity in your environment. The provided data enables you to:
--- Determine how your apps and services are utilized by your users-- Detect potential risks affecting the health of your environment-- Troubleshoot issues preventing your users from getting their work done -
-## Activity reports
-
-Activity reports help you understand the behavior of users in your organization. There are two types of activity reports in Azure AD:
-- **Audit logs** - The [audit logs activity report](concept-audit-logs.md) provides you with access to the history of every task performed in your tenant.
-
-- **Sign-ins** - With the [sign-ins activity report](concept-sign-ins.md), you can determine who has performed the tasks reported by the audit logs report.
-
-> [!VIDEO https://www.youtube.com/embed/ACVpH6C_NL8]
-
-### Audit logs report
-
-The [audit logs report](concept-audit-logs.md) provides you with records of system activities for compliance. This data enables you to address common scenarios such as:
-- Someone in my tenant got access to an admin group. Who gave them access?
-
-- I want to know the list of users signing into a specific app since I recently onboarded the app and want to know if it's doing well
-
-- I want to know how many password resets are happening in my tenant
-
-#### What Azure AD license do you need to access the audit logs report?
-
-The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison as per [different types of licenses](../fundamentals/whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more information, see [Azure Active Directory features and capabilities](../fundamentals/whatis.md#which-features-work-in-azure-ad).
-
-### Sign-ins report
-
-The [sign-ins report](concept-sign-ins.md) enables you to find answers to questions such as:
-- What is the sign-in pattern of a user?
-- How many users have signed in over a week?
-- What's the status of these sign-ins?
-#### What Azure AD license do you need to access the sign-ins activity report?
-
-To access the sign-ins activity report, your tenant must have an Azure AD Premium license associated with it.
-
-## Programmatic access
-
-In addition to the user interface, Azure AD also provides you with [programmatic access](./howto-configure-prerequisites-for-reporting-api.md) to the reports data, through a set of REST-based APIs. You can call these APIs from various programming languages and tools.
-
-## Next steps
-- [Risky sign-ins report](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins)
-- [Audit logs report](concept-audit-logs.md)
-- [Sign-ins logs report](concept-sign-ins.md)
active-directory Overview Service Health Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-service-health-notifications.md
- Title: What are Service Health notifications in Azure Active Directory?
-description: Learn how Service Health notifications provide you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them.
------- Previously updated : 11/01/2022-----
-# What are Service Health notifications in Azure Active Directory?
-
-Azure Service Health has been updated to provide notifications to tenant admins within the Azure portal when there are Service Health events for Azure Active Directory services. Due to the criticality of these events, an alert card in the Azure AD overview page will also be provided to support the discovery of these notifications.
-
-## How it works
-
-When a Service Health notification occurs for an Azure Active Directory service, it's posted to the Service Health page within the Azure portal. Previously, these were posted as subscription events to all the subscription owners/readers of subscriptions within the tenant that had an issue. To improve the targeting of these notifications, they'll now be available as tenant events to the tenant admins of the impacted tenant. For a transition period, these service events will be available as both tenant events and subscription events.
-
-Now that they're available as tenant events, they appear on the Azure AD overview page as alert cards. Any Service Health notification that has been updated within the last three days will be shown in one of the cards.
-
-
-![Screenshot of the alert cards on the Azure AD overview page.](./media/overview-service-health-notifications/service-health-overview.png)
---
-Each card:
-- Represents a currently active event, or a resolved one that will be distinguished by the icon in the card.
-- Has a link to the event. You can review the event on the Azure Service Health pages.
-
-![Screenshot of the event on the Azure Service Health page.](./media/overview-service-health-notifications/service-health-issues.png)
--
-
-
-For more information on the new Azure Service Health tenant events, see [Azure Service Health portal updates](../../service-health/service-health-portal-update.md)
-
-## Who will see the notifications
-
-Most of the built-in admin roles will have access to see these notifications. For the complete list of all authorized roles, see [Azure Service Health Tenant Admin authorized roles](../../service-health/admin-access-reference.md). Currently custom roles aren't supported.
-
-## What you should know
-
-Service Health events allow the addition of alerts and notifications to be applied to subscription events. This feature isn't yet supported with tenant events, but will be coming soon.
--
-
---
-## Next steps
--- [Service Health overview](../../service-health/service-health-overview.md)
active-directory Quickstart Azure Monitor Route Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md
- Title: Tutorial - Archive Azure Active Directory logs to a storage account
-description: Learn how to route Azure Active Directory logs to a storage account
------- Previously updated : 07/14/2023---
-# Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an Azure storage account so I can retain it for longer than the default retention period.
---
-# Tutorial: Archive Azure AD logs to an Azure storage account
-
-In this tutorial, you learn how to set up Azure Monitor diagnostics settings to route Azure Active Directory (Azure AD) logs to an Azure storage account.
-
-## Prerequisites
-
-To use this feature, you need:
-
-* An Azure subscription with an Azure storage account. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
-* An Azure AD tenant.
-* A user who's a *Global Administrator* or *Security Administrator* for the Azure AD tenant.
-* To export sign-in data, you must have an Azure AD P1 or P2 license.
-
-## Archive logs to an Azure storage account
--
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Select **Azure Active Directory** > **Monitoring** > **Audit logs**.
-
-1. Select **Export Data Settings**.
-
-1. You can either create a new setting (up to three settings are allowed) or edit an existing setting.
- - To change an existing setting, select **Edit setting** next to the diagnostic setting you want to update.
- - To add new settings, select **Add diagnostic setting**.
-
- ![Export settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/ExportSettings.png)
-
-1. Once in the **Diagnostic setting** pane, if you're creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting.
-
-1. Under **Destination Details** select the **Archive to a storage account** check box. Text fields for the retention period appear next to each log category.
-
-1. Select the Azure subscription and storage account where you want to route the logs.
-
-1. Select all the relevant categories under **Category details**:
-
- ![Diagnostics settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/DiagnosticSettings.png)
-
-1. In the **Retention days** field, enter the number of days you want to retain your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the selected number of days are automatically cleaned up.
-
- > [!NOTE]
- > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md).
-
-1. Select **Save** to save the setting.
-
-1. Close the window to return to the Diagnostic settings pane.
-
-## Next steps
-
-* [Tutorial: Configure a log analytics workspace](tutorial-log-analytics-wizard.md)
-* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Recommendation Migrate From Adal To Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-from-adal-to-msal.md
Title: Azure Active Directory recommendation - Migrate from ADAL to MSAL | Microsoft Docs
+ Title: Migrate from ADAL to MSAL recommendation
description: Learn why you should migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries. -+ Previously updated : 08/10/2023 Last updated : 08/15/2023 --

# Azure AD recommendation: Migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries
Existing apps that use ADAL will continue to work after the end-of-support date.
## Action plan
-The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps in the Azure portal or programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK.
-
-### [Azure portal](#tab/Azure-portal)
-
-There are four steps to identifying and updating your apps in the Azure portal. The following steps are covered in detail in the [List all apps using ADAL](../develop/howto-get-list-of-all-auth-library-apps.md) article.
-
-1. Send Azure AD sign-in event to Azure Monitor.
-1. [Access the sign-ins workbook in Azure AD.](../develop/howto-get-list-of-all-auth-library-apps.md)
-1. Identify the apps that use ADAL.
-1. Update your code.
- - The steps to update your code vary depending on the type of application.
- - For example, the steps for .NET and Python applications have separate instructions.
- - For a full list of instructions for each scenario, see [How to migrate to MSAL](../develop/msal-migration.md#how-to-migrate-to-msal).
+The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK. The steps for the Microsoft Graph PowerShell SDK are provided in the Recommendation details in the Azure Active Directory portal.
### [Microsoft Graph API](#tab/Microsoft-Graph-API)

You can use Microsoft Graph to identify apps that need to be migrated to MSAL. To get started, see [How to use Microsoft Graph with Azure AD recommendations](howto-use-recommendations.md#how-to-use-microsoft-graph-with-azure-active-directory-recommendations).
-Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant.
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select **GET** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+1. Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant.
```http
https://graph.microsoft.com/beta/directory/recommendations/<TENANT_ID>_Microsoft.Identity.IAM.Insights.AdalToMsalMigration/impactedResources
```
You can run the following set of commands in Windows PowerShell. These commands
+ ## Frequently asked questions ### Why does it take 30 days to change the status to completed?
To reduce false positives, the service uses a 30 day window for ADAL requests. T
### How were ADAL applications identified before the recommendation was released?
-The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) is an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook does not capture Service Principal sign-ins, while the recommendation does.
+The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) was an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook doesn't capture Service Principal sign-ins, while the recommendation does.
### Why is the number of ADAL applications different in the workbook and the recommendation?
active-directory Reference Powershell Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-powershell-reporting.md
-+ # Azure AD PowerShell cmdlets for reporting
active-directory Tutorial Configure Log Analytics Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-configure-log-analytics-workspace.md
+
+ Title: Configure a log analytics workspace in Azure AD
+description: Learn how to configure Log Analytics workspace and run KQL queries on your identity data.
+++++ Last updated : 07/28/2023++++++
+#Customer intent: As an IT admin, I want to set up log analytics so I can analyze the health of my environment.
+++
+# Tutorial: Configure a log analytics workspace
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure a log analytics workspace for your audit and sign-in logs
+> * Run queries using the Kusto Query Language (KQL)
+> * Create an alert rule that sends alerts when a specific account is used
+> * Create a custom workbook using the quickstart template
+> * Add a query to an existing workbook template
+
+## Prerequisites
+
+- An Azure subscription with at least one P1 licensed admin. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+
+- An Azure Active Directory (Azure AD) tenant.
+
+- A user who's a Global Administrator or Security Administrator for the Azure AD tenant.
++
+Familiarize yourself with these articles:
+
+- [Tutorial: Collect and analyze resource logs from an Azure resource](../../azure-monitor/essentials/tutorial-resource-logs.md)
+
+- [How to integrate activity logs with Log Analytics](./howto-integrate-activity-logs-with-log-analytics.md)
+
+- [Manage emergency access account in Azure AD](../roles/security-emergency-access.md)
+
+- [KQL quick reference](/azure/data-explorer/kql-quick-reference)
+
+- [Azure Monitor Workbooks](../../azure-monitor/visualize/workbooks-overview.md)
+++
+## Configure a workspace
++
+This procedure outlines how to configure a log analytics workspace for your audit and sign-in logs.
+Configuring a log analytics workspace consists of two main steps:
+
+1. Creating a log analytics workspace
+2. Setting diagnostic settings
+
+**To configure a workspace:**
++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **log analytics workspaces**.
+
+ ![Search resources services and docs](./media/tutorial-log-analytics-wizard/search-services.png)
+
+3. On the log analytics workspaces page, click **Add**.
+
+ ![Screenshot shows the Add button in the log analytics workspaces page.](./media/tutorial-log-analytics-wizard/add.png)
+
+4. On the **Create Log Analytics workspace** page, perform the following steps:
+
+ ![Create log analytics workspace](./media/tutorial-log-analytics-wizard/create-log-analytics-workspace.png)
+
+ 1. Select your subscription.
+
+ 2. Select a resource group.
+
+    3. In the **Name** textbox, type a name (for example, MytestWorkspace1).
+
+ 4. Select your region.
+
+5. Click **Review + Create**.
+
+ ![Review and create](./media/tutorial-log-analytics-wizard/review-create.png)
+
+6. Click **Create** and wait for the deployment to succeed. You may need to refresh the page to see the new workspace.
+
+ ![Create](./media/tutorial-log-analytics-wizard/create-workspace.png)
+
+7. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+8. In the **Monitoring** section, click **Diagnostic settings**.
+
+ ![Screenshot shows Diagnostic settings selected from Monitoring.](./media/tutorial-log-analytics-wizard/diagnostic-settings.png)
+
+9. On the **Diagnostic settings** page, click **Add diagnostic setting**.
+
+ ![Add diagnostic setting](./media/tutorial-log-analytics-wizard/add-diagnostic-setting.png)
+
+10. On the **Diagnostic setting** page, perform the following steps:
+
+ ![Select diagnostics settings](./media/tutorial-log-analytics-wizard/select-diagnostics-settings.png)
+
+ 1. Under **Category details**, select **AuditLogs** and **SigninLogs**.
+
+ 2. Under **Destination details**, select **Send to Log Analytics**, and then select your new log analytics workspace.
+
+ 3. Click **Save**.
+
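+If you manage your environment with scripts, a rough Az PowerShell equivalent of the two steps above is sketched below. The resource group, workspace name, and region are placeholders, and the Azure AD diagnostic-settings resource ID (`/providers/microsoft.aadiam`) is an assumption based on the ARM provider behind this blade, so verify it for your Az.Monitor version rather than treating this as a tested recipe.
+
+```powershell
+# Sketch: step 1, create the workspace. Names and region are placeholders.
+$ws = New-AzOperationalInsightsWorkspace -ResourceGroupName "MyResourceGroup" `
+    -Name "MytestWorkspace1" -Location "eastus"
+
+# Sketch: step 2, route AuditLogs and SigninLogs to the workspace.
+# The Azure AD resource ID below is an assumption; confirm it before use.
+$logs = @(
+    New-AzDiagnosticSettingLogSettingsObject -Category AuditLogs -Enabled $true
+    New-AzDiagnosticSettingLogSettingsObject -Category SigninLogs -Enabled $true
+)
+New-AzDiagnosticSetting -Name "SendToLogAnalytics" `
+    -ResourceId "/providers/microsoft.aadiam" `
+    -WorkspaceId $ws.ResourceId -Log $logs
+```
+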
+## Run queries
+
+This procedure shows how to run queries using the **Kusto Query Language (KQL)**.
++
+**To run a query:**
++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+3. In the **Monitoring** section, click **Logs**.
+
+4. On the **Logs** page, click **Get Started**.
+
+5. In the **Search** textbox, type your query.
+
+6. Click **Run**.
++
+### KQL query examples
+
+Take 10 random entries from the input data:
+
+`SigninLogs | take 10`
+
+Look at the sign-ins where Conditional Access succeeded:
+
+`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus`
++
+Count how many successful sign-ins there have been:
+
+`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus | count`
++
+Aggregate count of successful sign-ins by user by day:
+
+`SigninLogs | where ConditionalAccessStatus == "success" | summarize SuccessfulSignins = count() by UserDisplayName, bin(TimeGenerated, 1d)`
++
+View how many times a user performs a certain operation in a specific time period:
+
+`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | summarize count() by OperationName, Identity`
++
+Pivot the results on operation name:
+
+`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | project OperationName, Identity | evaluate pivot(OperationName)`
++
+Merge the audit and sign-in logs together using an inner join:
+
+`AuditLogs | where OperationName contains "Add User" | extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName) | project TimeGenerated, UserPrincipalName | join kind = inner (SigninLogs) on UserPrincipalName | summarize arg_min(TimeGenerated, *) by UserPrincipalName | extend SigninDate = TimeGenerated`
++
+View the number of sign-ins by client app type:
+
+`SigninLogs | summarize count() by ClientAppUsed`
+
+Count the sign-ins by day:
+
+`SigninLogs | summarize NumberOfEntries=count() by bin(TimeGenerated, 1d)`
+
+Take 5 random entries and project the columns you wish to see in the results:
+
+`SigninLogs | take 5 | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated `
++
+Take the top 5 sign-ins in descending order of time and project the columns you wish to see:
+
+`SigninLogs | top 5 by TimeGenerated desc | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated`
+
+Create a new column by combining the values of two other columns:
+
+`SigninLogs | limit 10 | extend RiskUser = strcat(RiskDetail, "-", Identity) | project RiskUser, ClientAppUsed`
+
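+You can also run any of these queries outside the portal with the `Az.OperationalInsights` module. A minimal sketch follows; `<WORKSPACE_ID>` is a placeholder for the workspace GUID shown on your workspace's overview page.
+
+```powershell
+# Sketch: run one of the KQL examples above against your workspace.
+$kql = 'SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus'
+$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<WORKSPACE_ID>" -Query $kql
+$result.Results | Format-Table
+```
+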
+## Create an alert rule
+
+This procedure shows how to send alerts when the break glass (emergency access) account is used.
+
+**To create an alert rule:**
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+3. In the **Monitoring** section, click **Logs**.
+
+4. On the **Logs** page, click **Get Started**.
+
+5. In the **Search** textbox, type: `SigninLogs | where UserDisplayName contains "BreakGlass" | project UserDisplayName`
+
+6. Click **Run**.
+
+7. In the toolbar, click **New alert rule**.
+
+ ![New alert rule](./media/tutorial-log-analytics-wizard/new-alert-rule.png)
+
+8. On the **Create alert rule** page, verify that the scope is correct.
+
+9. Under **Condition**, click: **Whenever the average custom log search is greater than `logic undefined` count**
+
+ ![Default condition](./media/tutorial-log-analytics-wizard/default-condition.png)
+
+10. On the **Configure signal logic** page, in the **Alert logic** section, perform the following steps:
+
+ ![Alert logic](./media/tutorial-log-analytics-wizard/alert-logic.png)
+
+ 1. As **Based on**, select **Number of results**.
+
+ 2. As **Operator**, select **Greater than**.
+
+ 3. As **Threshold value**, select **0**.
+
+11. On the **Configure signal logic** page, in the **Evaluated based on** section, perform the following steps:
+
+ ![Evaluated based on](./media/tutorial-log-analytics-wizard/evaluated-based-on.png)
+
+ 1. As **Period (in minutes)**, select **5**.
+
+ 2. As **Frequency (in minutes)**, select **5**.
+
+ 3. Click **Done**.
+
+12. Under **Action group**, click **Select action group**.
+
+ ![Action group](./media/tutorial-log-analytics-wizard/action-group.png)
+
+13. On the **Select an action group to attach to this alert rule** page, click **Create action group**.
+
+ ![Create action group](./media/tutorial-log-analytics-wizard/create-action-group.png)
+
+14. On the **Create action group** page, perform the following steps:
+
+ ![Instance details](./media/tutorial-log-analytics-wizard/instance-details.png)
+
+ 1. In the **Action group name** textbox, type **My action group**.
+
+ 2. In the **Display name** textbox, type **My action**.
+
+ 3. Click **Review + create**.
+
+ 4. Click **Create**.
++
+15. Under **Customize action**, perform the following steps:
+
+ ![Customize actions](./media/tutorial-log-analytics-wizard/customize-actions.png)
+
+ 1. Select **Email subject**.
+
+ 2. In the **Subject line** textbox, type: `Breakglass account has been used`
+
+16. Under **Alert rule details**, perform the following steps:
+
+ ![Alert rule details](./media/tutorial-log-analytics-wizard/alert-rule-details.png)
+
+ 1. In the **Alert rule name** textbox, type: `Breakglass account`
+
+ 2. In the **Description** textbox, type: `Your emergency access account has been used`
+
+17. Click **Create alert rule**.
++
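+The same alert can be created programmatically. The sketch below uses the scheduled-query cmdlets in newer `Az.Monitor` releases; the names, region, workspace resource ID, and email address are placeholders, and the exact parameter set varies across module versions, so check `Get-Help New-AzScheduledQueryRule` before relying on it.
+
+```powershell
+# Sketch (assumptions noted above): alert when the break glass account signs in.
+$actionGroup = Set-AzActionGroup -Name "My action group" -ShortName "MyAction" `
+    -ResourceGroupName "MyResourceGroup" `
+    -Receiver (New-AzActionGroupReceiver -Name "admin-email" -EmailReceiver `
+        -EmailAddress "admin@contoso.com")
+
+# Number-of-results > 0 over a 5-minute window, evaluated every 5 minutes,
+# mirroring the portal settings above.
+$criterion = New-AzScheduledQueryRuleCriterionObject `
+    -Query 'SigninLogs | where UserDisplayName contains "BreakGlass" | project UserDisplayName' `
+    -TimeAggregation "Count" -Operator "GreaterThan" -Threshold 0
+
+New-AzScheduledQueryRule -Name "Breakglass account" -ResourceGroupName "MyResourceGroup" `
+    -Location "eastus" -Scope "<WORKSPACE_RESOURCE_ID>" -Severity 2 `
+    -WindowSize ([TimeSpan]::FromMinutes(5)) -EvaluationFrequency ([TimeSpan]::FromMinutes(5)) `
+    -CriterionAllOf $criterion -ActionGroupResourceId $actionGroup.Id
+```
+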
+## Create a custom workbook
+
+This procedure shows how to create a new workbook using the quickstart template.
++++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+3. In the **Monitoring** section, click **Workbooks**.
+
+ ![Screenshot shows Monitoring in the Azure portal menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png)
+
+4. In the **Quickstart** section, click **Empty**.
+
+ ![Quick start](./media/tutorial-log-analytics-wizard/quick-start.png)
+
+5. Click **Add**.
+
+ ![Add workbook](./media/tutorial-log-analytics-wizard/add-workbook.png)
+
+6. Click **Add text**.
+
+ ![Add text](./media/tutorial-log-analytics-wizard/add-text.png)
++
+7. In the textbox, type: `# Client apps used in the past week`, and then click **Done Editing**.
+
+ ![Workbook text](./media/tutorial-log-analytics-wizard/workbook-text.png)
+
+8. In the new workbook, click **Add**, and then click **Add query**.
+
+ ![Add query](./media/tutorial-log-analytics-wizard/add-query.png)
+
+9. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(7d) | project TimeGenerated, UserDisplayName, ClientAppUsed | summarize count() by ClientAppUsed`
+
+10. Click **Run Query**.
+
+ ![Screenshot shows the Run Query button.](./media/tutorial-log-analytics-wizard/run-workbook-query.png)
+
+11. In the toolbar, under **Visualization**, click **Pie chart**.
+
+ ![Pie chart](./media/tutorial-log-analytics-wizard/pie-chart.png)
+
+12. Click **Done Editing**.
+
+ ![Done editing](./media/tutorial-log-analytics-wizard/done-workbook-editing.png)
+++
+## Add a query to a workbook template
+
+This procedure shows how to add a query to an existing workbook template. The example is based on a query that shows the distribution of Conditional Access successes to failures.
++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+3. In the **Monitoring** section, click **Workbooks**.
+
+ ![Screenshot shows Monitoring in the menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png)
+
+4. In the **Conditional Access** section, click **Conditional Access Insights and Reporting**.
+
+ ![Screenshot shows the Conditional Access Insights and Reporting option.](./media/tutorial-log-analytics-wizard/conditional-access-template.png)
+
+5. In the toolbar, click **Edit**.
+
+ ![Screenshot shows the Edit button.](./media/tutorial-log-analytics-wizard/edit-workbook-template.png)
+
+6. In the toolbar, click the three dots, then **Add**, and then **Add query**.
+
+ ![Add workbook query](./media/tutorial-log-analytics-wizard/add-custom-workbook-query.png)
+
+7. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(20d) | where ConditionalAccessPolicies != "[]" | summarize dcount(UserDisplayName) by bin(TimeGenerated, 1d), ConditionalAccessStatus`
+
+8. Click **Run Query**.
+
+ ![Screenshot shows the Run Query button to run this query.](./media/tutorial-log-analytics-wizard/run-workbook-insights-query.png)
+
+9. Click **Time Range**, and then select **Set in query**.
+
+10. Click **Visualization**, and then select **Bar chart**.
+
+11. Click **Advanced Settings**, enter `Conditional Access status over the last 20 days` as the chart title, and then click **Done Editing**.
+
+ ![Set chart title](./media/tutorial-log-analytics-wizard/set-chart-title.png)
++++++++
+## Next steps
+
+Advance to the next article to learn more about monitoring and reports in Azure AD.
+> [!div class="nextstepaction"]
+> [Monitoring](overview-monitoring.md)
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Last updated 11/15/2022 -+
active-directory Admin Units Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md
Last updated 06/09/2023 -+
active-directory Admin Units Members Dynamic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-dynamic.md
Last updated 05/13/2022 -+
active-directory Admin Units Members List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-list.md
Last updated 06/09/2023 -+
active-directory Admin Units Members Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-remove.md
Last updated 06/09/2023 -+
active-directory Assign Roles Different Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/assign-roles-different-scopes.md
Last updated 02/04/2022 -+
active-directory Concept Understand Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/concept-understand-roles.md
Last updated 04/22/2022 -+
active-directory Custom Assign Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-assign-powershell.md
Last updated 05/10/2022 -+ # Assign custom roles with resource scope using PowerShell in Azure Active Directory
active-directory Custom Available Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-available-permissions.md
Last updated 11/04/2020 -+
active-directory Custom Consent Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-consent-permissions.md
Last updated 01/31/2023 -+ # App consent permissions for custom roles in Azure Active Directory
active-directory Custom Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-create.md
Last updated 12/09/2022 -+ # Create and assign a custom role in Azure Active Directory
active-directory Custom Enterprise App Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-app-permissions.md
Last updated 01/31/2023 -+ # Enterprise application permissions for custom roles in Azure Active Directory
active-directory Custom Enterprise Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-apps.md
Last updated 02/04/2022 -+
active-directory Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-overview.md
Last updated 04/10/2023 -+
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md
Last updated 04/10/2023 -+
active-directory Groups Create Eligible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-create-eligible.md
Last updated 04/10/2023 -+
active-directory Groups Remove Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-remove-assignment.md
Last updated 02/04/2022 -+
active-directory Groups View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-view-assignments.md
Last updated 08/08/2023 -+
active-directory Manage Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/manage-roles-portal.md
Last updated 02/06/2023 -+
active-directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/prerequisites.md
Last updated 03/17/2022 -+
active-directory Protected Actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md
+ Last updated 04/10/2023
active-directory Quickstart App Registration Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/quickstart-app-registration-limits.md
Last updated 02/04/2022 -+ # Quickstart: Grant permission to create unlimited app registrations
active-directory Role Definitions List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/role-definitions-list.md
Last updated 02/04/2022 -+
active-directory Security Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-planning.md
-+
active-directory View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/view-assignments.md
Last updated 04/15/2022 -+ # List Azure AD role assignments
active-directory Azure Databricks With Private Link Workspace Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/azure-databricks-with-private-link-workspace-provisioning-tutorial.md
+
+ Title: Azure AD on-premises app provisioning to Azure Databricks with Private Link Workspace
+description: This article describes how to use the Azure AD provisioning service to provision users into Azure Databricks with Private Link Workspace.
++++++ Last updated : 08/10/2023++++
+# Microsoft Entra ID Application Provisioning to Azure Databricks with Private Link Workspace
+
+The Azure Active Directory (Azure AD) provisioning service supports a [SCIM 2.0](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) client that can be used to automatically provision users into cloud or on-premises applications. This article outlines how you can use the Azure AD provisioning service to provision users into Azure Databricks workspaces with no public access.
+
+[ ![Diagram that shows SCIM architecture.](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/scim-architecture.png)](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/scim-architecture.png#lightbox)
+
+## Prerequisites
+* An Azure AD tenant with Microsoft Entra ID Governance and Azure AD Premium P1 or Premium P2 (or EMS E3 or E5). To find the right license for your requirements, see [Compare generally available features of Azure AD](https://www.microsoft.com/security/business/microsoft-entra-pricing).
+* Administrator role for installing the agent. This is a one-time task, and the account used must be either a Hybrid Identity Administrator or a Global Administrator.
+* Administrator role for configuring the application in the cloud (application administrator, cloud application administrator, global administrator, or a custom role with permissions).
+* A computer with at least 3 GB of RAM, to host a provisioning agent. The computer should have Windows Server 2016 or a later version of Windows Server, with connectivity to the target application, and with outbound connectivity to login.microsoftonline.com, other Microsoft Online Services, and Azure domains (a quick connectivity check is sketched after this list). An example is a Windows Server 2016 virtual machine hosted in Azure IaaS or behind a proxy.
+
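+A quick way to validate the outbound connectivity called out in the last prerequisite is to probe the endpoints from the agent host. This is a sketch; the endpoints shown are representative, and port 443 is assumed.
+
+```powershell
+# Sketch: verify the agent host can reach required endpoints over HTTPS.
+"login.microsoftonline.com", "graph.microsoft.com" | ForEach-Object {
+    Test-NetConnection -ComputerName $_ -Port 443 |
+        Select-Object ComputerName, TcpTestSucceeded
+}
+```
+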
+## Download, install, and configure the Azure AD Connect Provisioning Agent Package
+
+If you have already downloaded the provisioning agent and configured it for another on-premises application, skip ahead to the next section.
+
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 1. On the left, select **Azure AD Connect**.
+ 1. On the left, select **Cloud sync**.
+ [![Screenshot of new UX screen.](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/azure-active-directory-connect-new-ux.png)](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/azure-active-directory-connect-new-ux.png#lightbox)
+
+ 1. On the left, select **Agent**.
+ 1. Select **Download on-premises agent**, and select **Accept terms & download**.
+ >[!NOTE]
+    >Use different provisioning agents for on-premises application provisioning and Azure AD Connect Cloud Sync / HR-driven provisioning. Don't manage all three scenarios on the same agent.
+ 1. Open the provisioning agent installer, agree to the terms of service, and select **next**.
+ 1. When the provisioning agent wizard opens, continue to the **Select Extension** tab and select **On-premises application provisioning** when prompted for the extension you want to enable.
+ 1. The provisioning agent uses the operating system's web browser to display a popup window for you to authenticate to Azure AD, and potentially also your organization's identity provider. If you're using Internet Explorer as the browser on Windows Server, you may need to add Microsoft websites to your browser's trusted site list to allow JavaScript to run correctly.
+ 1. Provide credentials for an Azure AD administrator when you're prompted to authorize. The user is required to have the Hybrid Identity Administrator or Global Administrator role.
+ 1. Select **Confirm** to confirm the setting. Once installation is successful, you can select **Exit**, and also close the Provisioning Agent Package installer.
+
+## Provisioning to SCIM-enabled Workspace
+Once the agent is installed, no further configuration is necessary on-premises, and all provisioning configurations are then managed from the Azure portal.
+
+ 1. In the Azure portal, navigate to **Enterprise applications** and add the **On-premises SCIM app** from the [gallery](../manage-apps/add-application-portal.md).
+ 1. From the left hand menu, navigate to the **Provisioning** option and select **Get started**.
+ 1. Select **Automatic** from the dropdown list and expand the **On-Premises Connectivity** option.
+ 1. Select the agent that you installed from the dropdown list and select **Assign Agent(s)**.
+ 1. Now either wait 10 minutes or restart the **Microsoft Azure AD Connect Provisioning Agent** before proceeding to the next step and testing the connection.
+ 1. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolvable by DNS. An example for a scenario where the agent is installed on the same host as the application is `https://localhost:8585/scim`.
+    ![Screenshot that shows assigning an agent.](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/on-premises-assign-agents.png)
+
+ 1. Create an admin token in the Azure Databricks **User Settings** console and enter it in the **Secret Token** field.
+ 1. Select **Test Connection**, and save the credentials. The application SCIM endpoint must be actively listening for inbound provisioning requests; otherwise, the test fails (a quick endpoint probe is sketched after this list). Use the steps [here](../app-provisioning/on-premises-ecma-troubleshoot.md#troubleshoot-test-connection-issues) if you run into connectivity issues.
+ >[!NOTE]
+ > If the test connection fails, you will see the request made. Please note that while the URL in the test connection error message is truncated, the actual request sent to the application contains the entire URL provided above.
+
+ 1. Configure any [attribute mappings](../app-provisioning/customize-application-attributes.md) or [scoping](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) rules required for your application.
+ 1. Add users to scope by [assigning users and groups](../manage-apps/add-application-portal-assign-users.md) to the application.
+ 1. Test provisioning a few users [on demand](../app-provisioning/provision-on-demand.md).
+ 1. Add more users into scope by assigning them to your application.
+ 1. Go to the **Provisioning** pane, and select **Start provisioning**.
+ 1. Monitor using the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md).
+
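+Before or after running **Test Connection** (step 8 above), you can probe the SCIM endpoint directly to confirm it's listening. This is a sketch: the URL matches the localhost example above, `/ServiceProviderConfig` is the discovery resource defined by the SCIM 2.0 specification, and the token value is a placeholder.
+
+```powershell
+# Sketch: confirm the SCIM endpoint responds before enabling provisioning.
+$token = "<DATABRICKS_ADMIN_TOKEN>"   # placeholder for the admin token from step 7
+# On PowerShell 7+, add -SkipCertificateCheck if the endpoint uses a self-signed cert.
+Invoke-RestMethod -Uri "https://localhost:8585/scim/ServiceProviderConfig" `
+    -Headers @{ Authorization = "Bearer $token" }
+```
+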
+The following video provides an overview of on-premises provisioning.
+> [!VIDEO https://www.youtube.com/embed/QdfdpaFolys]
+
+## More requirements
+* Ensure your [SCIM](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) implementation meets the [Azure AD SCIM requirements](../app-provisioning/use-scim-to-provision-users-and-groups.md).
+ Azure AD offers open-source [reference code](https://github.com/AzureAD/SCIMReferenceCode/wiki) that developers can use to bootstrap their SCIM implementation.
+* Support the /schemas endpoint to reduce configuration required in the Azure portal.
+
+## Next steps
+
+* [App provisioning](../app-provisioning/user-provisioning.md)
+* [Generic SQL connector](../app-provisioning/on-premises-sql-connector-configure.md)
+* [Tutorial: ECMA Connector Host generic SQL connector](../app-provisioning/tutorial-ecma-sql-connector.md)
+* [Known issues](../app-provisioning/known-issues.md)
active-directory Canva Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/canva-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Canva for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Canva.
++
+writer: twimmers
+
+ms.assetid: 9bf62920-d9e0-4ed4-a4f6-860cb9563b00
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Canva for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Canva and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Canva](https://www.canva.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Canva.
+> * Remove users in Canva when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Canva.
+> * Provision groups and group memberships in Canva.
+> * [Single sign-on](canva-tutorial.md) to Canva (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Canva tenant.
+* A user account in Canva with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Canva](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Canva to support provisioning with Azure AD
+Contact Canva support to configure Canva to support provisioning with Azure AD.
+
+## Step 3. Add Canva from the Azure AD application gallery
+
+Add Canva from the Azure AD application gallery to start managing provisioning to Canva. If you have previously set up Canva for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application (a scripted sketch follows this list). If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
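+If you prefer to script the assignment step described above, here's a minimal Microsoft Graph PowerShell sketch. The display name filter, user, and role selection are placeholders and assumptions; read the service principal's `AppRoles` collection to find the correct role ID for your app.
+
+```powershell
+# Sketch: assign a test user to the Canva enterprise app via Microsoft Graph.
+Connect-MgGraph -Scopes "Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"
+
+$sp   = Get-MgServicePrincipal -Filter "displayName eq 'Canva'"
+$user = Get-MgUser -UserId "b.simon@contoso.com"   # placeholder user
+
+New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id `
+    -PrincipalId $user.Id -ResourceId $sp.Id `
+    -AppRoleId ($sp.AppRoles | Select-Object -First 1).Id   # assumption: first role
+```
+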
+## Step 5. Configure automatic user provisioning to Canva
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Canva based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Canva in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Canva**.
+
+ ![Screenshot of the Canva link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Canva Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Canva. If the connection fails, ensure your Canva account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Canva**.
+
+1. Review the user attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Canva for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Canva API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Canva|
+    |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |externalId|String||
+ |emails[type eq "work"].value|String||&check;
+ |name.givenName|String||
+ |name.familyName|String||
+ |displayName|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Canva**.
+
+1. Review the group attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Canva for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Canva|
+    |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Canva, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Canva by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Cloudbees Ci Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudbees-ci-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
| `https://cjoc.<CustomerDomain>/securityRealm/finishLogin` | | `https://<Environment>.<CustomerDomain>/securityRealm/finishLogin` |
-1. Perform the following step, if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign on URL** textbox, type the URL using one of the following patterns:
+ c. In the **Sign on URL** textbox, type the URL using one of the following patterns:
| **Sign on URL** | ||
active-directory Forcepoint Cloud Security Gateway Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/forcepoint-cloud-security-gateway-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Forcepoint Cloud Security Gateway - User Authentication.
++
+writer: twimmers
+
+ms.assetid: 415b2ba3-a9a5-439a-963a-7c2c0254ced1
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Forcepoint Cloud Security Gateway - User Authentication and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Forcepoint Cloud Security Gateway - User Authentication](https://admin.forcepoint.net) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Forcepoint Cloud Security Gateway - User Authentication.
+> * Remove users in Forcepoint Cloud Security Gateway - User Authentication when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Forcepoint Cloud Security Gateway - User Authentication.
+> * Provision groups and group memberships in Forcepoint Cloud Security Gateway - User Authentication.
+> * [Single sign-on](forcepoint-cloud-security-gateway-tutorial.md) to Forcepoint Cloud Security Gateway - User Authentication (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Forcepoint Cloud Security Gateway - User Authentication tenant.
+* A user account in Forcepoint Cloud Security Gateway - User Authentication with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Forcepoint Cloud Security Gateway - User Authentication](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD
+Contact Forcepoint Cloud Security Gateway - User Authentication support to configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD.
+
+## Step 3. Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery
+
+Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery to start managing provisioning to Forcepoint Cloud Security Gateway - User Authentication. If you have previously set up Forcepoint Cloud Security Gateway - User Authentication for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Forcepoint Cloud Security Gateway - User Authentication
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Forcepoint Cloud Security Gateway - User Authentication based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Forcepoint Cloud Security Gateway - User Authentication in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Forcepoint Cloud Security Gateway - User Authentication**.
+
+ ![Screenshot of the Forcepoint Cloud Security Gateway - User Authentication link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Forcepoint Cloud Security Gateway - User Authentication Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Forcepoint Cloud Security Gateway - User Authentication. If the connection fails, ensure your Forcepoint Cloud Security Gateway - User Authentication account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Forcepoint Cloud Security Gateway - User Authentication**.
+
+1. Review the user attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Forcepoint Cloud Security Gateway - User Authentication for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Forcepoint Cloud Security Gateway - User Authentication API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication|
+    |---|---|---|---|
+ |userName|String|&check;|&check;
+ |externalId|String||&check;
+ |displayName|String||&check;
+ |urn:ietf:params:scim:schemas:extension:forcepoint:2.0:User:ntlmId|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Forcepoint Cloud Security Gateway - User Authentication**.
+
+1. Review the group attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Forcepoint Cloud Security Gateway - User Authentication for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication|
+    |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |externalId|String||
+ |members|Reference||
+
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Forcepoint Cloud Security Gateway - User Authentication, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Forcepoint Cloud Security Gateway - User Authentication by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
Previously updated : 11/21/2022 Last updated : 08/16/2023
To configure the integration of Google Cloud / G Suite Connector by Microsoft in
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
- ## Configure and test Azure AD single sign-on for Google Cloud / G Suite Connector by Microsoft Configure and test Azure AD SSO with Google Cloud / G Suite Connector by Microsoft using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Google Cloud / G Suite Connector by Microsoft.
active-directory Hive Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hive-tutorial.md
Previously updated : 11/21/2022 Last updated : 08/21/2023
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to Hive website as an administrator.
-1. Click on the **User Profile** and click **Your workspace**.
+1. Click on the **User Profile** and click your workspace **Settings**.
![Screenshot shows the Hive website with Your workspace selected from the menu.](./media/hive-tutorial/profile.png)
-1. Click **Auth** and perform the following steps:
+1. Click **Enterprise Security** and perform the following steps:
- ![Screenshot shows the Auth page where do the tasks described.](./media/hive-tutorial/authentication.png)
+ [![Screenshot shows the Auth page where do the tasks described.](./media/hive-tutorial/authentication.png)](./media/hive-tutorial/authentication.png#lightbox)
a. Copy **Your Workspace ID** and append it to the **SignOn URL** and **Reply URL** in the **Basic SAML Configuration Section** in the Azure portal.
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the Hive for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Hive tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Hive for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Hive tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Hive for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Hornbill Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hornbill-tutorial.md
Previously updated : 04/19/2023 Last updated : 08/16/2023 # Tutorial: Azure AD SSO integration with Hornbill
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, perform the following steps: a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://sso.hornbill.com/<INSTANCE_NAME>/<SUBDOMAIN>`
+`https://sso.hornbill.com/<INSTANCE_NAME>/live`
- b. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.hornbill.com/<INSTANCE_NAME>/`
+ > [!NOTE]
+    > If you are deploying the Hornbill Mobile Catalog to your organization, you will need to add an additional identifier URL, as follows:
+`https://sso.hornbill.com/hornbill/mcatalog`
+
+ b. In the **Reply URL (Assertion Consumer Service URL)** section, add the following:
+`https://<API_SUBDOMAIN>.hornbill.com/<INSTANCE_NAME>/xmlmc/sso/saml2/authorize/user/live`
+
+ > [!NOTE]
+    > If you are deploying the Hornbill Mobile Catalog to your organization, you will need to add an additional Reply URL, as follows:
+`https://<API_SUBDOMAIN>.hornbill.com/hornbill/xmlmc/sso/saml2/authorize/user/mcatalog`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+`https://live.hornbill.com/<INSTANCE_NAME>/`
> [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Hornbill Client support team](https://www.hornbill.com/support/?request/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+    > These values are not real. Update the `<INSTANCE_NAME>` and `<API_SUBDOMAIN>` values with the actual values in the Identifier(s), Reply URL(s), and Sign on URL. These values can be retrieved from the Hornbill Solution Center in your Hornbill instance, under **_Your usage > Support_**. Contact [Hornbill Support](https://www.hornbill.com/support) for assistance in getting these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-5. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+6. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
![The Certificate download link](common/copy-metadataurl.png)
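If you want to sanity-check the copied **App Federation Metadata Url** before handing it to Hornbill, a quick fetch-and-parse is enough. This is a minimal sketch, not part of the official steps; the URL below is a placeholder for the value you copied from the portal.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder: paste the App Federation Metadata Url copied from the Azure portal.
METADATA_URL = "https://login.microsoftonline.com/{tenant-id}/federationmetadata/2007-06/federationmetadata.xml?appid={app-id}"

with urllib.request.urlopen(METADATA_URL) as resp:
    document = resp.read()

# The endpoint returns a SAML EntityDescriptor; parsing it proves the URL serves well-formed metadata.
root = ET.fromstring(document)
print("entityID:", root.attrib.get("entityID"))
```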
active-directory Hypervault Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hypervault-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Hypervault for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Hypervault.
++
+writer: twimmers
+
+ms.assetid: eca2ff9e-a09d-4bb4-88f6-6021a93d2c9d
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Hypervault for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Hypervault and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users to [Hypervault](https://hypervault.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Hypervault.
+> * Remove users in Hypervault when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Hypervault.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Hypervault (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Hypervault with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Hypervault](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Hypervault to support provisioning with Azure AD
+Contact Hypervault support to configure your Hypervault tenant for provisioning with Azure AD.
+
+## Step 3. Add Hypervault from the Azure AD application gallery
+
+Add Hypervault from the Azure AD application gallery to start managing provisioning to Hypervault. If you have previously set up Hypervault for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who is in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who is provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Hypervault
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Hypervault based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for Hypervault in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Hypervault**.
+
+ ![Screenshot of the Hypervault link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Hypervault Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Hypervault. If the connection fails, ensure your Hypervault account has Admin permissions and try again. (A sketch of an equivalent connectivity check appears after these steps.)
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Hypervault**.
+
+1. Review the user attributes that are synchronized from Azure AD to Hypervault in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Hypervault for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Hypervault API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Hypervault|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |displayName|String||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |emails[type eq "work"].value|String||&check;
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Hypervault, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to Hypervault by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
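If you want to reproduce what **Test Connection** does before turning provisioning on, a standard SCIM 2.0 list call is enough, as referenced in the credentials step above. This is a minimal sketch: the tenant URL and token are placeholders for the values from Step 2, and the `/Users` query shape is the generic SCIM one (RFC 7644), not a documented Hypervault endpoint.

```python
import requests

# Placeholders: use the Tenant URL and Secret Token from Step 2 (Admin Credentials section).
TENANT_URL = "https://scim.example-hypervault.test/scim/v2"  # hypothetical
SECRET_TOKEN = "<secret-token>"                              # hypothetical

# A generic SCIM 2.0 list call; a single result is enough to prove auth and connectivity.
response = requests.get(
    f"{TENANT_URL}/Users",
    params={"count": 1},
    headers={"Authorization": f"Bearer {SECRET_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print("Users visible to this token:", response.json().get("totalResults"))
```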
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Oneflow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oneflow-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Oneflow for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Oneflow.
++
+writer: twimmers
+
+ms.assetid: 6af89cdd-956c-4cc2-9a61-98afe7814470
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Oneflow for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Oneflow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Oneflow](https://oneflow.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Oneflow.
+> * Remove users in Oneflow when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Oneflow.
+> * Provision groups and group memberships in Oneflow.
+> * [Single sign-on](oneflow-tutorial.md) to Oneflow (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An Oneflow tenant.
+* A user account in Oneflow with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Oneflow](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Oneflow to support provisioning with Azure AD
+Contact Oneflow support to configure your Oneflow tenant for provisioning with Azure AD.
+
+## Step 3. Add Oneflow from the Azure AD application gallery
+
+Add Oneflow from the Azure AD application gallery to start managing provisioning to Oneflow. If you have previously set up Oneflow for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Oneflow
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Oneflow based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Oneflow in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Oneflow**.
+
+ ![Screenshot of the Oneflow link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Oneflow Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Oneflow. If the connection fails, ensure your Oneflow account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Oneflow**.
+
+1. Review the user attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Oneflow for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Oneflow API supports filtering users based on that attribute. Select the **Save** button to commit any changes. (A sketch of the kind of SCIM payload these mappings produce appears after these steps.)
+
+ |Attribute|Type|Supported for filtering|Required by Oneflow|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |externalId|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |nickName|String||
+ |title|String||
+ |profileUrl|String||
+ |displayName|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].country|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:adSourceAnchor|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute1|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute2|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute3|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute4|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute5|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:distinguishedName|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:domain|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:userPrincipalName|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Oneflow**.
+
+1. Review the group attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Oneflow for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Oneflow|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |externalId|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Oneflow, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Oneflow by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
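To make the attribute-mapping table above concrete, the following hand-built object approximates the SCIM 2.0 `POST /Users` body the provisioning service sends on create. The attribute names come from the table; every value is invented, and the exact payload Azure AD emits depends on your mappings.

```python
import json

# Invented values; attribute names follow the mapping table (SCIM core + enterprise extension).
user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
    ],
    "userName": "avery@contoso.com",  # the Matching attribute
    "active": True,
    "externalId": "00000000-0000-0000-0000-000000000000",
    "name": {"givenName": "Avery", "familyName": "Howard"},
    "emails": [{"type": "work", "value": "avery@contoso.com", "primary": True}],
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
        "department": "Legal",
        "employeeNumber": "1001",
    },
}
print(json.dumps(user, indent=2))
```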
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Postman Provisioning Tutorialy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/postman-provisioning-tutorialy.md
+
+ Title: 'Tutorial: Configure Postman for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Postman.
++
+writer: twimmers
+
+ms.assetid: f3687101-9bec-4f18-9884-61833f4f58c3
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Postman for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Postman and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Postman](https://www.postman.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Postman.
+> * Remove users in Postman when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Postman.
+> * Provision groups and group memberships in Postman.
+> * [Single sign-on](postman-tutorial.md) to Postman (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Postman tenant.
+* A user account in Postman with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Postman](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Postman to support provisioning with Azure AD
+Contact Postman support to configure your Postman tenant for provisioning with Azure AD.
+
+## Step 3. Add Postman from the Azure AD application gallery
+
+Add Postman from the Azure AD application gallery to start managing provisioning to Postman. If you have previously set up Postman for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Postman
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Postman based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Postman in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Postman**.
+
+ ![Screenshot of the Postman link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Postman Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Postman. If the connection fails, ensure your Postman account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Postman**.
+
+1. Review the user attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Postman for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Postman API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Postman|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |externalId|String||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Postman**.
+
+1. Review the group attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Postman for update operations. Select the **Save** button to commit any changes. (A sketch of a matching SCIM group object appears after these steps.)
+
+ |Attribute|Type|Supported for filtering|Required by Postman|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |externalId|String||&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Postman, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Postman by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
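For the group mapping above, the corresponding SCIM 2.0 object is small: `displayName` is the matching attribute, `externalId` ties back to the Azure AD group, and `members` carries references to users that were provisioned earlier. A hand-built sketch with invented values:

```python
import json

# Invented values; the shape follows the SCIM 2.0 core Group schema used by the mapping above.
group = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
    "displayName": "API Platform Team",
    "externalId": "00000000-0000-0000-0000-000000000000",
    "members": [
        # "value" is the SCIM id the service returned for each user at creation time.
        {"value": "2819c223-7f76-453a-919d-413861904646", "display": "avery@contoso.com"},
    ],
}
print(json.dumps(group, indent=2))
```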
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Sap Cloud Platform Identity Authentication Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md
Title: 'Tutorial: Configure SAP Cloud Identity Services for automatic user provisioning with Microsoft Entra ID'
-description: Learn how to configure Microsoft Entra ID to automatically provision and de-provision user accounts to SAP Cloud Identity Services.
+description: Learn how to configure Microsoft Entra ID to automatically provision and deprovision user accounts to SAP Cloud Identity Services.
writer: twimmers
# Tutorial: Configure SAP Cloud Identity Services for automatic user provisioning
-The objective of this tutorial is to demonstrate the steps to be performed in SAP Cloud Identity Services and Microsoft Entra ID (Azure AD) to configure Microsoft Entra ID to automatically provision and de-provision users to SAP Cloud Identity Services.
+This tutorial aims to demonstrate the steps for configuring Microsoft Entra ID (Azure AD) and SAP Cloud Identity Services. The goal is to set up Microsoft Entra ID to automatically provision and deprovision users to SAP Cloud Identity Services.
> [!NOTE] > This tutorial describes a connector built on top of the Microsoft Entra ID User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md).
Before configuring and enabling automatic user provisioning, you should decide w
## Important tips for assigning users to SAP Cloud Identity Services
-* It is recommended that a single Microsoft Entra ID user is assigned to SAP Cloud Identity Services to test the automatic user provisioning configuration. Additional users may be assigned later.
+* It's recommended that a single Microsoft Entra ID user is assigned to SAP Cloud Identity Services to test the automatic user provisioning configuration. More users may be assigned later.
* When assigning a user to SAP Cloud Identity Services, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning.
Before configuring and enabling automatic user provisioning, you should decide w
![Screenshot of the SAP Cloud Identity Services Add SCIM.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/configurationauth.png)
-1. You will receive an email to activate your account and set a password for **SAP Cloud Identity Services Service**.
+1. You'll get an email to activate your account and set up a password for the **SAP Cloud Identity Services Service**.
-1. Copy the **User ID** and **Password**. These values will be entered in the Admin Username and Admin Password fields respectively in the Provisioning tab of your SAP Cloud Identity Services application in the Azure portal.
+1. Copy the **User ID** and **Password**. These values are entered in the Admin Username and Admin Password fields, respectively, in the Provisioning tab of your SAP Cloud Identity Services application in the Azure portal.
## Add SAP Cloud Identity Services from the gallery
This section guides you through the steps to configure the Microsoft Entra ID pr
1. Review the user attributes that are synchronized from Microsoft Entra ID to SAP Cloud Identity Services in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SAP Cloud Identity Services for update operations. Select the **Save** button to commit any changes.
- ![Screenshot of the SAP Business Technology Platform Identity Authentication User Attributes.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/userattributes.png)
+ |Attribute|Type|Supported for filtering|Required by SAP Cloud Identity Services|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |emails[type eq "work"].value|String||&check;
+ |active|Boolean||
+ |displayName|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference||
+ |addresses[type eq "work"].country|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].streetAddress|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |name.honorificPrefix|String||
+ |phoneNumbers[type eq "fax"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |locale|String||
+ |timezone|String||
+ |userType|String||
+ |company|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute1|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute2|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute3|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute4|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute5|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute6|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute7|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute8|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute9|String||
+ |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute10|String||
+ |sendMail|String||
+ |mailVerified|String||
1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
This section guides you through the steps to configure the Microsoft Entra ID pr
![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
-1. When you are ready to provision, click **Save**.
+1. When you're ready to provision, click **Save**.
![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
For more information on how to read the Microsoft Entra ID provisioning logs, se
* The SCIM endpoint of SAP Cloud Identity Services requires certain attributes to be in a specific format. You can learn more about these attributes and their specific formats [here](https://help.sap.com/viewer/6d6d63354d1242d185ab4830fc04feb1/Cloud/en-US/b10fc6a9a37c488a82ce7489b1fab64c.html#).
-## Additional resources
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Microsoft Entra ID?](../manage-apps/what-is-single-sign-on.md)
active-directory Sap Fiori Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-fiori-tutorial.md
+ Last updated 11/21/2022
active-directory Sap Netweaver Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-netweaver-tutorial.md
+ Last updated 11/21/2022
active-directory Servicely Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicely-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Servicely for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Servicely.
++
+writer: twimmers
+
+ms.assetid: be3af02b-da77-4a88-bec3-e634e2af38b3
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Servicely for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Servicely and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Servicely](https://servicely.ai/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Servicely.
+> * Remove users in Servicely when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Servicely.
+> * Provision groups and group memberships in Servicely.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Servicely tenant.
+* A user account in Servicely with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Servicely](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Servicely to support provisioning with Azure AD
+Contact Servicely support to configure your Servicely tenant for provisioning with Azure AD.
+
+## Step 3. Add Servicely from the Azure AD application gallery
+
+Add Servicely from the Azure AD application gallery to start managing provisioning to Servicely. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who is in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who is provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Servicely
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Servicely based on user and/or group assignments in Azure AD. (A sketch of the SCIM call used for deactivation appears after these steps.)
+
+### To configure automatic user provisioning for Servicely in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Servicely**.
+
+ ![Screenshot of the Servicely link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Servicely Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Servicely. If the connection fails, ensure your Servicely account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Servicely**.
+
+1. Review the user attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Servicely for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Servicely API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Servicely|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |externalId|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |title|String||
+ |preferredLanguage|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |timezone|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Servicely**.
+
+1. Review the group attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Servicely for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Servicely|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |externalId|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Servicely, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Servicely by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
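On the deprovisioning side, Azure AD typically soft-deletes by setting `active` to false before any hard delete. A minimal sketch of that call, as referenced earlier in this section, with a hypothetical tenant URL, token, and user ID; the PATCH shape is the generic RFC 7644 one:

```python
import requests

TENANT_URL = "https://scim.example-servicely.test/scim/v2"  # hypothetical
SECRET_TOKEN = "<secret-token>"                             # hypothetical
USER_ID = "2819c223-7f76-453a-919d-413861904646"            # hypothetical SCIM id

# RFC 7644 PATCH: replace the "active" attribute to deactivate the account.
response = requests.patch(
    f"{TENANT_URL}/Users/{USER_ID}",
    json={
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{"op": "replace", "path": "active", "value": False}],
    },
    headers={"Authorization": f"Bearer {SECRET_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
```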
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Sharepoint On Premises Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md
+ Last updated 11/21/2022
active-directory Tailscale Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tailscale-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Tailscale](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Tailscale to support provisioning with Azure AD
-Contact Tailscale support to configure Tailscale to support provisioning with Azure AD.
+
+You need to be an [Owner, Admin, or IT admin](https://tailscale.com/kb/1138/user-roles/) in Tailscale to complete these steps. See [Tailscale plans](https://tailscale.com/pricing/)
+to find out which plans make user & group provisioning for Azure AD available.
+
+### Generate a SCIM API key in Tailscale
+
+In the **[User management](https://login.tailscale.com/admin/settings/user-management/)** page of the admin console,
+
+1. Click **Enable Provisioning**.
+1. Copy the generated key to the clipboard.
+
+Save the key information in a secure spot. This is the Secret Token you'll use when you configure provisioning in Azure AD.
## Step 3. Add Tailscale from the Azure AD application gallery
The Azure AD provisioning service allows you to scope who is provisioned based o
## Step 5. Configure automatic user provisioning to Tailscale
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Tailscale based on user assignments in Azure AD.
### To configure automatic user provisioning for Tailscale in Azure AD:
active-directory Tanium Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-sso-tutorial.md
Complete the following steps to enable Azure AD single sign-on in the Azure port
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tanium SSO support team](mailto:integrations@tanium.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+ > [!NOTE]
+ > If deploying Tanium in an on-premises configuration, your values may look different than those shown above. The values to use can be retrieved from the **Administration > SAML Configuration** menu in the Tanium console. Details can be found in the [Tanium Console User Guide: Integrating with a SAML IdP](https://docs.tanium.com/platform_user/platform_user/console_using_saml.html?cloud=false "Integrating with a SAML IdP Guide").
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer. If you're deploying Tanium in an on-premises configuration, click the edit button and set the **Response Signing Option** to "Sign response and assertion".
[ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ](common/copy-metadataurl.png#lightbox)
active-directory Vbrick Rev Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vbrick-rev-cloud-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Vbrick Rev Cloud](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Vbrick Rev Cloud to support provisioning with Azure AD
-Contact Vbrick Rev Cloud support to configure Vbrick Rev Cloud to support provisioning with Azure AD.
+
+1. Sign in to your **Rev Tenant**. Navigate to **Admin > Security Settings > User Security** in the navigation pane.
+
+ ![Screenshot of Vbrick Rev User Security Settings.](./media/vbrick-rev-cloud-provisioning-tutorial/app-navigations.png)
+
+1. Navigate to the **Microsoft Azure AD SCIM** section of the page.
+
+ ![Screenshot of the Vbrick Rev User Security Settings with the Microsoft AD SCIM section called out.](./media/vbrick-rev-cloud-provisioning-tutorial/enable-azure-ad-scim.png)
+
+1. Enable **Microsoft Azure AD SCIM** and click the **Generate Token** button.
+ ![Screenshot of the Vbrick Rev User Security Settings with the Microsoft AD SCIM enable.](./media/vbrick-rev-cloud-provisioning-tutorial/rev-scim-manage.png)
+
+1. A popup opens with the **URL** and the **JWT token**. Copy and save both for the next steps.
+
+ ![Screenshot of the Vbrick Rev User Security Settings with the Scim Token section called out.](./media/vbrick-rev-cloud-provisioning-tutorial/copy-token.png)
+
+1. Once you have a copy of the **JWT token** and **URL**, click **OK** to close the popup, and then click the **Save** button at the bottom of the settings page to enable SCIM for your tenant.
## Step 3. Add Vbrick Rev Cloud from the Azure AD application gallery
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
Last updated 1/3/2023 -+
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
Verifiable credential data isn't stored by Microsoft. Therefore, the issuer need
## How does revocation work?
-Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c-ccg/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When presentation to the Request Service API happens, the API will do the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and does not contain any data who is checking if the verifiable credential is still valid or revoked. With the **statusList2021**, Microsoft Entra Verified ID just keeps a flag by the hashed value of the indexed claim to keep track of the revocation status.
+Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When presentation to the Request Service API happens, the API does the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and doesn't contain any data about who is checking whether the verifiable credential is still valid or revoked. With the **statusList2021**, Microsoft Entra Verified ID just keeps a flag by the hashed value of the indexed claim to keep track of the revocation status.
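For intuition, the mechanics the W3C StatusList2021 draft defines are simple: the status list credential carries a GZIP-compressed, base64url-encoded bitstring, and the bit at a credential's `statusListIndex` marks it revoked. A rough sketch of that check follows; the Request Service API performs it for you, so this isn't Verified ID code.

```python
import base64
import gzip

def is_revoked(encoded_list: str, status_list_index: int) -> bool:
    """Check one bit of a StatusList2021 bitstring (per the W3C draft)."""
    # Re-pad and decode the base64url string, then decompress the bitstring.
    padded = encoded_list + "=" * (-len(encoded_list) % 4)
    bitstring = gzip.decompress(base64.urlsafe_b64decode(padded))
    # Index 0 is the left-most bit of the first byte.
    byte = bitstring[status_list_index // 8]
    return bool((byte >> (7 - status_list_index % 8)) & 1)

# Example: a 16 KB list of zeros (the minimum size) means nothing is revoked yet.
empty_list = base64.urlsafe_b64encode(gzip.compress(bytes(16 * 1024))).decode()
print(is_revoked(empty_list, 42))  # False
```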
### Verifiable credential data
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
For incorporating identity verification into your Apps, using AU10TIX ΓÇ£Govern
As a developer you can share these steps with your tenant administrator to obtain the verification request URL, and body for your application or website to request Verified IDs from your users.
-1. Go to [Microsoft Entra portal -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
+1. Go to [Microsoft Entra admin center -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
>[!NOTE] > Make sure this is the tenant you set up for Verified ID per the pre-requisites.
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
To incorporate identity verification into your Apps using LexisNexis Verified ID
As a developer you'll provide the steps below to your tenant administrator. The instructions help them obtain the verification request URL, and application body or website to request verifiable credentials from your users.
-1. Go to [Microsoft Entra portal -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
+1. Go to [Microsoft Entra admin center -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
>[!Note] > Make sure this is the tenant you set up for Verified ID per the pre-requisites. 1. Go to [Quickstart-> Verification Request -> Start](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/QuickStartVerifierBlade).
active-directory Partner Vu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md
Follow these steps to incorporate VU Identity Card solution into your Apps.
As a developer you can share these steps with your tenant administrator to obtain the verification request URL, and body for your application or website to request Verified IDs from your users.
-1. Go to Microsoft Entra portal - [**Verified ID**](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade)
+1. Go to Microsoft Entra admin center - [**Verified ID**](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade)
>[!NOTE] >Verify that the tenant configured for Verified ID meets the prerequisites.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Microsoft Entra Verified ID is now generally available (GA) as the new member of
### Known issues -- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential gets a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by August 20, 2022.
+- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential get a `Specified resource does not exist` error from the Admin API and/or the Microsoft Entra admin center. A fix for this issue should be available by August 20, 2022.
## July 2022
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust.md
Use the following values from your Azure AD application registration for your Gi
The following screenshot demonstrates how to copy the application ID and tenant ID.
- ![Screenshot that demonstrates how to copy the application ID and tenant ID from Microsoft Entra portal.](./media/workload-identity-federation-create-trust/copy-client-id.png)
+ ![Screenshot that demonstrates how to copy the application ID and tenant ID from Microsoft Entra admin center.](./media/workload-identity-federation-create-trust/copy-client-id.png)
- `AZURE_SUBSCRIPTION_ID` your subscription ID. To get the subscription ID, open **Subscriptions** in Azure portal and find your subscription. Then, copy the **Subscription ID**.
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
Previously updated : 07/04/2023 Last updated : 08/10/2023 # Configure Azure AI services virtual networks
-Azure AI services provides a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networks. When network rules are configured, only applications requesting data over the specified set of networks can access the account. You can limit access to your resources with request filtering. Allowing only requests originating from specified IP addresses, IP ranges or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md).
+Azure AI services provide a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networks. When network rules are configured, only applications that request data over the specified set of networks can access the account. You can limit access to your resources with *request filtering*, which allows requests that originate only from specified IP addresses, IP ranges, or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md).
An application that accesses an Azure AI services resource when network rules are in effect requires authorization. Authorization is supported with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) credentials or with a valid API key. > [!IMPORTANT]
-> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. In order to allow requests through, one of the following conditions needs to be met:
+> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. To allow requests through, one of the following conditions needs to be met:
>
-> * The request should originate from a service operating within an Azure Virtual Network (VNet) on the allowed subnet list of the target Azure AI services account. The endpoint in requests originated from VNet needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account.
-> * Or the request should originate from an allowed list of IP addresses.
+> - The request originates from a service that operates within an Azure Virtual Network on the allowed subnet list of the target Azure AI services account. For requests that originate from the virtual network, the endpoint must be set to the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account.
+> - The request originates from an allowed list of IP addresses.
>
-> Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on.
+> Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services.
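Once a client is on an allowed network, it still authenticates normally. As a hedged sketch (the endpoint, key, and operation path are placeholders; the exact path depends on which Azure AI service you call), a request presents the API key in the `Ocp-Apim-Subscription-Key` header and targets the account's custom subdomain:

```azurepowershell
# Placeholder values throughout; run from a client on an allowed network.
$endpoint = "https://myaccount.cognitiveservices.azure.com"    # custom subdomain endpoint
$headers  = @{ "Ocp-Apim-Subscription-Key" = "<your-api-key>" }
Invoke-RestMethod -Method Get -Uri "$endpoint/<service-specific-path>" -Headers $headers
```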
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Scenarios
-To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections from specific internet or on-premises clients.
+To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks, including internet traffic, by default. Then, configure rules that grant access to traffic from specific virtual networks. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges and enable connections from specific internet or on-premises clients.
-Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. Once network rules are applied, they're enforced for all requests.
+Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data by using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. After network rules are applied, they're enforced for all requests.
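Taken together, the workflow looks like the following sketch, which uses the same cmdlets covered step by step later in this article (resource names are placeholders):

```azurepowershell
# Hedged sketch with placeholder names: deny by default, then grant one subnet.
Update-AzCognitiveServicesAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" `
    -Name "myaccount" -DefaultAction Deny

# Grant access to a subnet that has the Microsoft.CognitiveServices service endpoint enabled.
$subnet = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" |
    Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
Add-AzCognitiveServicesAccountNetworkRule -ResourceGroupName "myresourcegroup" `
    -Name "myaccount" -VirtualNetworkResourceId $subnet.Id
```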
## Supported regions and service offerings
-Virtual networks (VNETs) are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services supports service tags for network rules configuration. The services listed below are included in the **CognitiveServicesManagement** service tag.
+Virtual networks are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services support service tags for network rules configuration. The services listed here are included in the `CognitiveServicesManagement` service tag.
> [!div class="checklist"]
-> * Anomaly Detector
-> * Azure OpenAI
-> * Azure AI Vision
-> * Content Moderator
-> * Custom Vision
-> * Face
-> * Language Understanding (LUIS)
-> * Personalizer
-> * Speech service
-> * Language service
-> * QnA Maker
-> * Translator Text
-
+> - Anomaly Detector
+> - Azure OpenAI
+> - Content Moderator
+> - Custom Vision
+> - Face
+> - Language Understanding (LUIS)
+> - Personalizer
+> - Speech service
+> - Language
+> - QnA Maker
+> - Translator
> [!NOTE]
-> If you're using, Azure OpenAI, LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you use the service using the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal , Speech Studio or Language Studio from a virtual network, you will need to use the following tags:
+> If you use Azure OpenAI, LUIS, Speech Services, or Language services, the `CognitiveServicesManagement` tag only enables you to use the service by using the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal, Speech Studio, or Language Studio from a virtual network, you need to use the following tags:
>
-> * **AzureActiveDirectory**
-> * **AzureFrontDoor.Frontend**
-> * **AzureResourceManager**
-> * **CognitiveServicesManagement**
-> * **CognitiveServicesFrontEnd**
-
+> - `AzureActiveDirectory`
+> - `AzureFrontDoor.Frontend`
+> - `AzureResourceManager`
+> - `CognitiveServicesManagement`
+> - `CognitiveServicesFrontEnd`
## Change the default network access rule By default, Azure AI services resources accept connections from clients on any network. To limit access to selected networks, you must first change the default action. > [!WARNING]
-> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to **deny** blocks all access to the data unless specific network rules that **grant** access are also applied. Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access. If you are allow listing IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network.
+> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to *deny* blocks all access to the data unless specific network rules that *grant* access are also applied.
+>
+> Before you change the default rule to deny access, be sure to grant access to any allowed networks by using network rules. If you're allow-listing IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network.
-### Managing default network access rules
+### Manage default network access rules
You can manage default network access rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI.
You can manage default network access rules for Azure AI services resources thro
1. Go to the Azure AI services resource you want to secure.
-1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, and then select **Networking**.
- ![Virtual network option](media/vnet/virtual-network-blade.png)
+ :::image type="content" source="media/vnet/virtual-network-blade.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected." lightbox="media/vnet/virtual-network-blade.png":::
-1. To deny access by default, choose to allow access from **Selected networks**. With the **Selected networks** setting alone, unaccompanied by configured **Virtual networks** or **Address ranges** - all access is effectively denied. When all access is denied, requests attempting to consume the Azure AI services resource aren't permitted. The Azure portal, Azure PowerShell or, Azure CLI can still be used to configure the Azure AI services resource.
-1. To allow traffic from all networks, choose to allow access from **All networks**.
+1. To deny access by default, under **Firewalls and virtual networks**, select **Selected Networks and Private Endpoints**.
- ![Virtual networks deny](media/vnet/virtual-network-deny.png)
+ With this setting alone, unaccompanied by configured virtual networks or address ranges, all access is effectively denied. When all access is denied, requests that attempt to consume the Azure AI services resource aren't permitted. The Azure portal, Azure PowerShell, or the Azure CLI can still be used to configure the Azure AI services resource.
+
+1. To allow traffic from all networks, select **All networks**.
+
+ :::image type="content" source="media/vnet/virtual-network-deny.png" alt-text="Screenshot shows the Networking page with All networks selected." lightbox="media/vnet/virtual-network-deny.png":::
1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**.
1. Display the status of the default rule for the Azure AI services resource.
- ```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-}
- (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction
- ```
+ ```azurepowershell-interactive
+ $parameters = @{
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ }
+ (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction
+ ```
-1. Set the default rule to deny network access by default.
+ You can get values for your resource group `myresourcegroup` and the name of your Azure AI services resource `myaccount` from the Azure portal.
+
+1. Set the default rule to deny network access.
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -DefaultAction Deny
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "DefaultAction" = "Deny"
} Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ```
-1. Set the default rule to allow network access by default.
+1. Set the default rule to allow network access.
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -DefaultAction Allow
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "DefaultAction" = "Allow"
} Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ``` # [Azure CLI](#tab/azure-cli)
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**.
1. Display the status of the default rule for the Azure AI services resource. ```azurecli-interactive az cognitiveservices account show \
- -g "myresourcegroup" -n "myaccount" \
- --query networkRuleSet.defaultAction
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --query properties.networkAcls.defaultAction
```
+1. Get the resource ID for use in the later steps.
+
+ ```azurecli-interactive
+    resourceId=$(az cognitiveservices account show \
+ --resource-group "myresourcegroup" \
+ --name "myaccount" --query id --output tsv)
+ ```
+ 1. Set the default rule to deny network access by default. ```azurecli-interactive az resource update \
- --ids {resourceId} \
+ --ids $resourceId \
--set properties.networkAcls="{'defaultAction':'Deny'}" ```
You can manage default network access rules for Azure AI services resources thro
```azurecli-interactive az resource update \
- --ids {resourceId} \
+ --ids $resourceId \
--set properties.networkAcls="{'defaultAction':'Allow'}" ```
You can manage default network access rules for Azure AI services resources thro
## Grant access from a virtual network
-You can configure Azure AI services resources to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription, or in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
+You can configure Azure AI services resources to allow access from specific subnets only. The allowed subnets might belong to a virtual network in the same subscription or in a different subscription. The other subscription can belong to a different Azure AD tenant.
+
+Enable a *service endpoint* for Azure AI services within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure AI services service. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
-Enable a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) for Azure AI services within the VNet. The service endpoint routes traffic from the VNet through an optimal path to the Azure AI services service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource that allow requests to be received from specific subnets in a VNet. Clients granted access via these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data.
+The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource to allow requests from specific subnets in a virtual network. Clients granted access by these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data.
-Each Azure AI services resource supports up to 100 virtual network rules, which may be combined with [IP network rules](#grant-access-from-an-internet-ip-range).
+Each Azure AI services resource supports up to 100 virtual network rules, which can be combined with IP network rules. For more information, see [Grant access from an internet IP range](#grant-access-from-an-internet-ip-range) later in this article.
-### Required permissions
+### Set required permissions
-To apply a virtual network rule to an Azure AI services resource, the user must have the appropriate permissions for the subnets being added. The required permission is the default *Contributor* role, or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions.
+To apply a virtual network rule to an Azure AI services resource, you need the appropriate permissions for the subnets that you're adding. The required permission is the default *Contributor* role or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions.
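As a hedged sketch (the user, subscription ID, and resource group are placeholders), the built-in role can be granted with Azure PowerShell:

```azurepowershell
# Placeholder values throughout: grant the Cognitive Services Contributor role,
# scoped to the resource group that contains the Azure AI services resource.
New-AzRoleAssignment -SignInName "user@example.com" `
    -RoleDefinitionName "Cognitive Services Contributor" `
    -Scope "/subscriptions/<subscription-ID>/resourceGroups/myresourcegroup"
```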
-Azure AI services resource and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+The Azure AI services resource and the virtual networks that are granted access might be in different subscriptions, including subscriptions that are part of a different Azure AD tenant.
> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure AD tenant are currently supported only through PowerShell, the Azure CLI, and the REST APIs. You can view these rules in the Azure portal, but you can't configure them.
-### Managing virtual network rules
+### Configure virtual network rules
You can manage virtual network rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI. # [Azure portal](#tab/portal)
+To grant access to a virtual network with an existing network rule:
+ 1. Go to the Azure AI services resource you want to secure.
-1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, and then select **Networking**.
-1. Check that you've selected to allow access from **Selected networks**.
+1. Confirm that you selected **Selected Networks and Private Endpoints**.
-1. To grant access to a virtual network with an existing network rule, under **Virtual networks**, select **Add existing virtual network**.
+1. Under **Allow access from**, select **Add existing virtual network**.
- ![Add existing vNet](media/vnet/virtual-network-add-existing.png)
+ :::image type="content" source="media/vnet/virtual-network-add-existing.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add existing virtual network highlighted." lightbox="media/vnet/virtual-network-add-existing.png":::
1. Select the **Virtual networks** and **Subnets** options, and then select **Enable**.
- ![Add existing vNet details](media/vnet/virtual-network-add-existing-details.png)
+ :::image type="content" source="media/vnet/virtual-network-add-existing-details.png" alt-text="Screenshot shows the Add networks dialog box where you can enter a virtual network and subnet.":::
-1. To create a new virtual network and grant it access, select **Add new virtual network**.
+ > [!NOTE]
+ > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
+ >
+ > Currently, only virtual networks that belong to the same Azure AD tenant are available for selection during rule creation. To grant access to a subnet in a virtual network that belongs to another tenant, use PowerShell, the Azure CLI, or the REST APIs.
- ![Add new vNet](media/vnet/virtual-network-add-new.png)
+1. Select **Save** to apply your changes.
+
+To create a new virtual network and grant it access:
+
+1. On the same page as the previous procedure, select **Add new virtual network**.
+
+ :::image type="content" source="media/vnet/virtual-network-add-new.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add new virtual network highlighted." lightbox="media/vnet/virtual-network-add-new.png":::
1. Provide the information necessary to create the new virtual network, and then select **Create**.
- ![Create vNet](media/vnet/virtual-network-create.png)
+ :::image type="content" source="media/vnet/virtual-network-create.png" alt-text="Screenshot shows the Create virtual network dialog box.":::
- > [!NOTE]
- > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
- >
- > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, CLI or REST APIs.
+1. Select **Save** to apply your changes.
-1. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**.
+To remove a virtual network or subnet rule:
- ![Remove vNet](media/vnet/virtual-network-remove.png)
+1. On the same page as the previous procedures, select **... (More options)** to open the context menu for the virtual network or subnet, and select **Remove**.
+
+ :::image type="content" source="media/vnet/virtual-network-remove.png" alt-text="Screenshot shows the option to remove a virtual network." lightbox="media/vnet/virtual-network-remove.png":::
1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**.
-1. List virtual network rules.
+1. List the configured virtual network rules.
```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-}
+ $parameters = @{
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ }
(Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).VirtualNetworkRules ```
-1. Enable service endpoint for Azure AI services on an existing virtual network and subnet.
+1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet.
```azurepowershell-interactive Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" ` -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" `
- -AddressPrefix "10.0.0.0/24" `
+ -AddressPrefix "CIDR" `
-ServiceEndpoint "Microsoft.CognitiveServices" | Set-AzVirtualNetwork ```
You can manage virtual network rules for Azure AI services resources through the
```azurepowershell-interactive $subParameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myvnet"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myvnet"
} $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
You can manage virtual network rules for Azure AI services resources through the
``` > [!TIP]
- > To add a network rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified **VirtualNetworkResourceId** parameter in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+ > To add a network rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified `VirtualNetworkResourceId` parameter in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`.
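>
> A minimal sketch with placeholder values (substitute your own subscription, resource group, and network names):
>
> ```azurepowershell-interactive
> $parameters = @{
>     "ResourceGroupName"        = "myresourcegroup"
>     "Name"                     = "myaccount"
>     "VirtualNetworkResourceId" = "/subscriptions/<subscription-ID>/resourceGroups/<resourceGroup-name>/providers/Microsoft.Network/virtualNetworks/<vNet-name>/subnets/<subnet-name>"
> }
> Add-AzCognitiveServicesAccountNetworkRule @parameters
> ```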
1. Remove a network rule for a virtual network and subnet. ```azurepowershell-interactive $subParameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myvnet"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myvnet"
} $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -VirtualNetworkResourceId $subnet.Id
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "VirtualNetworkResourceId" = $subnet.Id
} Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli)
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**.
-1. List virtual network rules.
+1. List the configured virtual network rules.
```azurecli-interactive az cognitiveservices account network-rule list \
- -g "myresourcegroup" -n "myaccount" \
+ --resource-group "myresourcegroup" --name "myaccount" \
--query virtualNetworkRules ```
-1. Enable service endpoint for Azure AI services on an existing virtual network and subnet.
+1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet.
```azurecli-interactive
- az network vnet subnet update -g "myresourcegroup" -n "mysubnet" \
+ az network vnet subnet update --resource-group "myresourcegroup" --name "mysubnet" \
--vnet-name "myvnet" --service-endpoints "Microsoft.CognitiveServices" ``` 1. Add a network rule for a virtual network and subnet. ```azurecli-interactive
- $subnetid=(az network vnet subnet show \
- -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
+ subnetid=$(az network vnet subnet show \
+ --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \
--query id --output tsv) # Use the captured subnet identifier as an argument to the network rule addition az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
+ --resource-group "myresourcegroup" --name "myaccount" \
--subnet $subnetid ``` > [!TIP]
- > To add a rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified subnet ID in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+ > To add a rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified subnet ID in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`.
>
- > You can use the **subscription** parameter to retrieve the subnet ID for a VNet belonging to another Azure AD tenant.
+ > You can use the `--subscription` parameter to retrieve the subnet ID for a virtual network that belongs to another Azure AD tenant.
1. Remove a network rule for a virtual network and subnet. ```azurecli-interactive subnetid=$(az network vnet subnet show \
- -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
+ --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \
--query id --output tsv) # Use the captured subnet identifier as an argument to the network rule removal az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
+ --resource-group "myresourcegroup" --name "myaccount" \
--subnet $subnetid ``` *** > [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
+> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect.
## Grant access from an internet IP range
-You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, effectively blocking general internet traffic.
+You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, and effectively blocks general internet traffic.
-Provide allowed internet address ranges using [CIDR notation](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`.
+You can specify the allowed internet address ranges by using [CIDR format (RFC 4632)](https://tools.ietf.org/html/rfc4632) in the form `192.168.0.0/16` or as individual IP addresses like `192.168.0.1`.
> [!Tip]
- > Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using individual IP address rules.
+ > Small address ranges that use `/31` or `/32` prefix sizes aren't supported. Configure these ranges by using individual IP address rules.
+
+IP network rules are only allowed for *public internet* IP addresses. IP address ranges reserved for private networks aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`. For more information, see [Private Address Space (RFC 1918)](https://tools.ietf.org/html/rfc1918#section-3).
+
+Currently, only IPv4 addresses are supported. Each Azure AI services resource supports up to 100 IP network rules, which can be combined with [virtual network rules](#grant-access-from-a-virtual-network).
-IP network rules are only allowed for **public internet** IP addresses. IP address ranges reserved for private networks (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`.
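These constraints can be illustrated with a hedged sketch; the function name and sample values are hypothetical, and the checks simply mirror the rules described above (IPv4 only, public addresses only, no `/31` or `/32` ranges):

```azurepowershell
# Hypothetical helper: validates a candidate IP rule against the documented constraints.
function Test-IpRuleCandidate {
    param([string]$AddressOrRange)

    $address, $prefix = $AddressOrRange -split '/'
    $parsed = $null
    if (-not [System.Net.IPAddress]::TryParse($address, [ref]$parsed) -or
        $parsed.AddressFamily -ne 'InterNetwork') {
        return "Rejected: not a valid IPv4 address."
    }
    if ($prefix -and [int]$prefix -ge 31) {
        return "Rejected: /31 and /32 ranges aren't supported; use individual IP rules."
    }
    $o = $parsed.GetAddressBytes()
    if (($o[0] -eq 10) -or
        ($o[0] -eq 172 -and $o[1] -ge 16 -and $o[1] -le 31) -or
        ($o[0] -eq 192 -and $o[1] -eq 168)) {
        return "Rejected: RFC 1918 private addresses aren't allowed in IP rules."
    }
    return "OK"
}

Test-IpRuleCandidate "192.168.0.0/16"   # Rejected: private range
Test-IpRuleCandidate "203.0.113.0/24"   # OK: public documentation range
```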
+### Configure access from on-premises networks
-Only IPV4 addresses are supported at this time. Each Azure AI services resource supports up to 100 IP network rules, which may be combined with [Virtual network rules](#grant-access-from-a-virtual-network).
+To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, identify the internet-facing IP addresses used by your network. Contact your network administrator for help.
-### Configuring access from on-premises networks
+If you use Azure ExpressRoute on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For more information, see [What is Azure ExpressRoute](../expressroute/expressroute-introduction.md).
-To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, you must identify the internet facing IP addresses used by your network. Contact your network administrator for help.
+For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or supplied by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting.
-If you're using [ExpressRoute](../expressroute/expressroute-introduction.md) on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. Learn more about [NAT for ExpressRoute public and Microsoft peering.](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering)
+To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) by using the Azure portal. For more information, see [NAT requirements for Azure public peering](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering).
### Manage IP network rules
You can manage IP network rules for Azure AI services resources through the Azur
1. Go to the Azure AI services resource you want to secure.
-1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, and then select **Networking**.
-1. Check that you've selected to allow access from **Selected networks**.
+1. Confirm that you selected **Selected Networks and Private Endpoints**.
-1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (non-reserved) addresses are accepted.
+1. Under **Firewalls and virtual networks**, locate the **Address range** option. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)). Only valid public IP (nonreserved) addresses are accepted.
- ![Add IP range](media/vnet/virtual-network-add-ip-range.png)
+ :::image type="content" source="media/vnet/virtual-network-add-ip-range.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and the Address range highlighted." lightbox="media/vnet/virtual-network-add-ip-range.png":::
-1. To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range.
-
- ![Delete IP range](media/vnet/virtual-network-delete-ip-range.png)
+ To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range.
1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**.
-1. List IP network rules.
+1. List the configured IP network rules.
- ```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-}
+ ```azurepowershell-interactive
+ $parameters = @{
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ }
(Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).IPRules ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.19"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "ipaddress"
} Add-AzCognitiveServicesAccountNetworkRule @parameters ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.0/24"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "CIDR"
} Add-AzCognitiveServicesAccountNetworkRule @parameters ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.19"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "ipaddress"
} Remove-AzCognitiveServicesAccountNetworkRule @parameters ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.0/24"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "CIDR"
} Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli)
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**.
-1. List IP network rules.
+1. List the configured IP network rules.
```azurecli-interactive az cognitiveservices account network-rule list \
- -g "myresourcegroup" -n "myaccount" --query ipRules
+ --resource-group "myresourcegroup" --name "myaccount" --query ipRules
``` 1. Add a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.19"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "ipaddress"
``` 1. Add a network rule for an IP address range. ```azurecli-interactive az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.0/24"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "CIDR"
``` 1. Remove a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.19"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "ipaddress"
``` 1. Remove a network rule for an IP address range. ```azurecli-interactive az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.0/24"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "CIDR"
``` *** > [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
+> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect.
## Use private endpoints
-You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network (VNet) to securely access data over a [Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the VNet address space for your Azure AI services resource. Network traffic between the clients on the VNet and the resource traverses the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet.
+You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network to securely access data over [Azure Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the virtual network address space for your Azure AI services resource. Network traffic between the clients on the virtual network and the resource traverses the virtual network and a private link on the Microsoft Azure backbone network, which eliminates exposure from the public internet.
Private endpoints for Azure AI services resources let you:
-* Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service.
-* Increase security for the VNet, by enabling you to block exfiltration of data from the VNet.
-* Securely connect to Azure AI services resources from on-premises networks that connect to the VNet using [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering.
+- Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service.
+- Increase security for the virtual network by enabling you to block exfiltration of data from the virtual network.
+- Securely connect to Azure AI services resources from on-premises networks that connect to the virtual network by using [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering.
-### Conceptual overview
+### Understand private endpoints
-A private endpoint is a special network interface for an Azure resource in your [VNet](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your VNet and your resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Azure AI services service uses a secure private link.
+A private endpoint is a special network interface for an Azure resource in your [virtual network](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your virtual network and your resource. The private endpoint is assigned an IP address from the IP address range of your virtual network. The connection between the private endpoint and the Azure AI services service uses a secure private link.
-Applications in the VNet can connect to the service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST.
+Applications in the virtual network can connect to the service over the private endpoint seamlessly. Connections use the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech service, which requires a separate endpoint. For more information, see [Use private endpoints with the Speech service](#use-private-endpoints-with-the-speech-service) in this article. Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST.
-Private endpoints can be created in subnets that use [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). Clients in a subnet can connect to one Azure AI services resource using private endpoint, while using service endpoints to access others.
+Private endpoints can be created in subnets that use service endpoints. Clients in a subnet can connect to one Azure AI services resource by using a private endpoint while using service endpoints to access others. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
-When you create a private endpoint for an Azure AI services resource in your VNet, a consent request is sent for approval to the Azure AI services resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved.
+When you create a private endpoint for an Azure AI services resource in your virtual network, Azure sends a consent request for approval to the Azure AI services resource owner. If the user who requests the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved.
-Azure AI services resource owners can manage consent requests and the private endpoints, through the '*Private endpoints*' tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com).
+Azure AI services resource owners can manage consent requests and the private endpoints through the **Private endpoint connection** tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com).
-### Private endpoints
+### Specify private endpoints
-When creating the private endpoint, you must specify the Azure AI services resource it connects to. For more information on creating a private endpoint, see:
+When you create a private endpoint, specify the Azure AI services resource that it connects to. For more information on creating a private endpoint, see:
-* [Create a private endpoint using the Private Link Center in the Azure portal](../private-link/create-private-endpoint-portal.md)
-* [Create a private endpoint using Azure CLI](../private-link/create-private-endpoint-cli.md)
-* [Create a private endpoint using Azure PowerShell](../private-link/create-private-endpoint-powershell.md)
+- [Create a private endpoint by using the Azure portal](../private-link/create-private-endpoint-portal.md)
+- [Create a private endpoint by using Azure PowerShell](../private-link/create-private-endpoint-powershell.md)
+- [Create a private endpoint by using the Azure CLI](../private-link/create-private-endpoint-cli.md)
-### Connecting to private endpoints
+### Connect to private endpoints
> [!NOTE]
-> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. Refer to the [Azure services DNS zone configuration article](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the correct zone and forwarder names.
+> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. For the correct zone and forwarder names, see [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
-Clients on a VNet using the private endpoint should use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). We rely upon DNS resolution to automatically route the connections from the VNet to the Azure AI services resource over a private link.
+Clients on a virtual network that use the private endpoint use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech service, which requires a separate endpoint. For more information, see [Use private endpoints with the Speech service](#use-private-endpoints-with-the-speech-service) in this article. DNS resolution automatically routes the connections from the virtual network to the Azure AI services resource over a private link.
-We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make more changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints.
+By default, Azure creates a [private DNS zone](../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints. If you use your own DNS server, you might need to make more changes to your DNS configuration. For updates that might be required for private endpoints, see [Apply DNS changes for private endpoints](#apply-dns-changes-for-private-endpoints) in this article.
-### Private endpoints with the Speech Services
+### Use private endpoints with the Speech service
-See [Using Speech Services with private endpoints provided by Azure Private Link](Speech-Service/speech-services-private-link.md).
+See [Use Speech service through a private endpoint](Speech-Service/speech-services-private-link.md).
-### DNS changes for private endpoints
+### Apply DNS changes for private endpoints
-When you create a private endpoint, the DNS CNAME resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints.
+When you create a private endpoint, the DNS `CNAME` resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, Azure also creates a private DNS zone that corresponds to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. For more information, see [What is Azure Private DNS](../dns/private-dns-overview.md).
-When you resolve the endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When resolved from the VNet hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address.
+When you resolve the endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When it's resolved from the virtual network hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address.
-This approach enables access to the Azure AI services resource using the same connection string for clients in the VNet hosting the private endpoints and clients outside the VNet.
+This approach enables access to the Azure AI services resource using the same connection string for clients in the virtual network that hosts the private endpoints and clients outside the virtual network.
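You can sanity-check the resolution behavior from a client. As a hedged sketch (the account name is a placeholder, and `Resolve-DnsName` requires Windows), run the same lookup from inside and outside the virtual network and compare the answers:

```azurepowershell
# Placeholder account name. From a VM inside the virtual network, this should
# resolve through the privatelink alias to the private endpoint's private IP;
# from outside, it should resolve to the resource's public endpoint.
Resolve-DnsName "myaccount.cognitiveservices.azure.com" |
    Select-Object Name, Type, IPAddress
```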
-If you're using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet.
+If you use a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network.
> [!TIP]
-> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records.
+> When you use a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the `privatelink` subdomain to the private endpoint IP address. Delegate the `privatelink` subdomain to the private DNS zone of the virtual network. Alternatively, configure the DNS zone on your DNS server and add the DNS A records.
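If you manage the private DNS zone records yourself, a hedged sketch of adding the A record with Azure PowerShell follows; all names and the IP address are placeholders, and the zone name is assumed to follow the service's `privatelink` subdomain:

```azurepowershell
# Placeholder values throughout: create an A record for the account in the
# private DNS zone that serves the privatelink subdomain.
$record = New-AzPrivateDnsRecordConfig -Ipv4Address "10.0.0.5"
New-AzPrivateDnsRecordSet -ResourceGroupName "myresourcegroup" `
    -ZoneName "privatelink.cognitiveservices.azure.com" `
    -Name "myaccount" -RecordType A -Ttl 3600 -PrivateDnsRecords $record
```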
-For more information on configuring your own DNS server to support private endpoints, see the following articles:
+For more information on configuring your own DNS server to support private endpoints, see the following resources:
-* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
-* [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
+- [Name resolution that uses your own DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
+- [DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration)
### Pricing
For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co
## Next steps
-* Explore the various [Azure AI services](./what-are-ai-services.md)
-* Learn more about [Azure Virtual Network Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)
+- Explore the various [Azure AI services](./what-are-ai-services.md)
+- Learn more about [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)
ai-services Changelog Release History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/changelog-release-history.md
Previously updated : 07/18/2023 Last updated : 08/17/2023 monikerRange: '<=doc-intel-3.1.0'
This reference article provides a version-based description of Document Intellig
[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
-[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer)
+[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0)
[**ReadMe**](https://github.com/Azure/azure-sdk-for-jav)
ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md
monikerRange: 'doc-intel-3.1.0'
# Document Intelligence custom classification model
-**This article applies to:** ![Document Intelligence checkmark](medi) supported by Document Intelligence REST API version [2023-07-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)**.
+**This article applies to:** ![Document Intelligence checkmark](medi) supported by Document Intelligence REST API version [2023-07-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)**.
> [!IMPORTANT] >
ai-services Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/install-run.md
monikerRange: '<=doc-intel-3.1.0'
[!INCLUDE [applies to v2.1](../includes/applies-to-v2-1.md)] ::: moniker-end
-Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents. The results are delivered as structured data that includes the relationships in the original file.
+Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your documents. The results are delivered as structured data that includes the relationships in the original file.
::: moniker range=">=doc-intel-3.0.0" In this article you learn how to download, install, and run Document Intelligence containers. Containers enable you to run the Document Intelligence service in your own environment. Containers are great for specific security and data governance requirements.
http {
```yml version: '3.3'
- nginx:
- image: nginx:alpine
- container_name: reverseproxy
- volumes:
- - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
- ports:
- - "5000:5000"
- layout:
- container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest
- environment:
- eula: accept
- apikey: ${FORM_RECOGNIZER_KEY}
- billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
- Logging:Console:LogLevel:Default: Information
- SharedRootFolder: /shared
- Mounts:Shared: /shared
- Mounts:Output: /logs
- volumes:
- - type: bind
- source: ${SHARED_MOUNT_PATH}
- target: /shared
- - type: bind
- source: ${OUTPUT_MOUNT_PATH}
- target: /logs
- expose:
- - "5000"
+ nginx:
+ image: nginx:alpine
+ container_name: reverseproxy
+ volumes:
+ - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf
+ ports:
+ - "5000:5000"
+ layout:
+ container_name: azure-cognitive-service-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest
+ environment:
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
- custom-template:
- container_name: azure-cognitive-service-custom-template
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest
- restart: always
- depends_on:
- - layout
- environment:
- AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000
- eula: accept
- apikey: ${FORM_RECOGNIZER_KEY}
- billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
- Logging:Console:LogLevel:Default: Information
- SharedRootFolder: /shared
- Mounts:Shared: /shared
- Mounts:Output: /logs
- volumes:
- - type: bind
- source: ${SHARED_MOUNT_PATH}
- target: /shared
- - type: bind
- source: ${OUTPUT_MOUNT_PATH}
- target: /logs
- expose:
- - "5000"
+ custom-template:
+ container_name: azure-cognitive-service-custom-template
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest
+ restart: always
+ depends_on:
+ - layout
+ environment:
+ AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000
+ eula: accept
+ apikey: ${FORM_RECOGNIZER_KEY}
+ billing: ${FORM_RECOGNIZER_ENDPOINT_URI}
+ Logging:Console:LogLevel:Default: Information
+ SharedRootFolder: /shared
+ Mounts:Shared: /shared
+ Mounts:Output: /logs
+ volumes:
+ - type: bind
+ source: ${SHARED_MOUNT_PATH}
+ target: /shared
+ - type: bind
+ source: ${OUTPUT_MOUNT_PATH}
+ target: /logs
+ expose:
+ - "5000"
- studio:
- container_name: form-recognizer-studio
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
- environment:
- ONPREM_LOCALFILE_BASEPATH: /onprem_folder
- STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
- volumes:
- - type: bind
- source: ${FILE_MOUNT_PATH} # path to your local folder
- target: /onprem_folder
- - type: bind
- source: ${DB_MOUNT_PATH} # path to your local folder
- target: /onprem_db
- ports:
- - "5001:5001"
- user: "1000:1000" # echo $(id -u):$(id -g)
+ studio:
+ container_name: form-recognizer-studio
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0
+ environment:
+ ONPREM_LOCALFILE_BASEPATH: /onprem_folder
+ STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db
+ volumes:
+ - type: bind
+ source: ${FILE_MOUNT_PATH} # path to your local folder
+ target: /onprem_folder
+ - type: bind
+ source: ${DB_MOUNT_PATH} # path to your local folder
+ target: /onprem_db
+ ports:
+ - "5001:5001"
+ user: "1000:1000" # echo $(id -u):$(id -g)
```
http {
2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Label Tool, Custom API, and Custom Supervised containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. ```yml
- version: '3.3'
+version: '3.3'
nginx: image: nginx:alpine
ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/create-sas-tokens.md
To get started, you need:
:::image type="content" source="media/sas-tokens/upload-blob-window.png" alt-text="Screenshot that shows the Upload blob window in the Azure portal."::: > [!NOTE]
- > By default, the REST API uses form documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional).
+ > By default, the REST API uses documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional).
## Use the Azure portal
ai-services Build A Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/build-a-custom-model.md
Follow these tips to further optimize your data set for training.
## Upload your training data
-When you've put together the set of form documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier.
+When you've put together the set of documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier.
If you want to use manually labeled data, upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. You can use the [Sample Labeling tool](../label-tool.md) (or your own UI) to generate these files.
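If you prefer to script the upload, the following is a minimal sketch using the Python `azure-storage-blob` package; the connection string, container name, and folder name are placeholder values for illustration, and the Azure portal or Azure Storage Explorer work just as well.

```python
# Minimal upload sketch, assuming the azure-storage-blob package is installed
# and that the connection string, container name, and folder name below are
# replaced with your own values.
import os
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<your-storage-connection-string>", "training-data"
)

local_folder = "training-docs"
for file_name in os.listdir(local_folder):
    # Upload each training document, plus any matching *.labels.json and
    # *.ocr.json files sitting alongside it.
    with open(os.path.join(local_folder, file_name), "rb") as data:
        container.upload_blob(name=file_name, data=data, overwrite=True)
    print(f"Uploaded {file_name}")
```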
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
monikerRange: '<=doc-intel-3.1.0'
> [!NOTE] > Form Recognizer is now **Azure AI Document Intelligence**! >
-> As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services.
+> * There are no changes to pricing.
+> * The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs.
+> * There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * Some platforms are still awaiting the renaming update. All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service.
::: moniker range=">=doc-intel-3.0.0" [!INCLUDE [applies to v3.1, v3.0, and v2.1](includes/applies-to-v3-1-v3-0-v2-1.md)]
ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api.md
Previously updated : 07/18/2023 Last updated : 08/15/2023 zone_pivot_groups: programming-languages-set-formre monikerRange: '<=doc-intel-3.1.0'
monikerRange: '<=doc-intel-3.1.0'
# Get started with Document Intelligence [!INCLUDE [applies to v3.1 and v3.0](../includes/applies-to-v3-1-v3-0.md)]+
+> [!IMPORTANT]
+>
+> * Azure Cognitive Services Form Recognizer is now Azure AI Document Intelligence.
+> * Some platforms are still awaiting the renaming update.
+> * All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service.
++
+* Get started with the latest Azure AI Document Intelligence GA version (v3.1).
+
+* Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents.
+
+* You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API.
+
+* For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page.
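To make the workflow concrete, here's a minimal sketch that uses the Python SDK (`azure-ai-formrecognizer` 3.3.0); the endpoint, key, and document URL are placeholder values, and the SDKs for the other languages follow the same analyze-then-poll pattern.

```python
# Minimal analysis sketch, assuming azure-ai-formrecognizer 3.3.0 and
# placeholder endpoint, key, and document URL values.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a publicly reachable document with the prebuilt layout model.
poller = client.begin_analyze_document_from_url(
    "prebuilt-layout", "https://example.com/sample-invoice.pdf"
)
result = poller.result()

for page in result.pages:
    print(f"Page {page.page_number}: {len(page.lines)} lines")
```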
+ ::: moniker-end
-Get started with the latest version of Azure AI Document Intelligence. Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure AI Document Intelligence GA version (3.0). Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page.
+> [!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.1 (GA) quickstart](?view=doc-intel-3.1.0&preserve-view=true#get-started-with-document-intelligence) and [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) API version: 2023-07-31 (3.1 General Availability).
+ ::: moniker-end ::: zone pivot="programming-language-csharp"
To learn more about Document Intelligence features and development options, visi
::: zone-end ::: moniker range=">=doc-intel-3.0.0"+ That's it, congratulations!
-In this quickstart, you used a form Document Intelligence model to analyze various forms and documents. Next, explore the Document Intelligence Studio and reference documentation to learn about Document Intelligence API in depth.
+In this quickstart, you used a Document Intelligence model to analyze various forms and documents. Next, explore the Document Intelligence Studio and reference documentation to learn about the Document Intelligence API in depth.
## Next steps
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-csharp" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-java" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-javascript" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-python" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
To learn more about Document Intelligence features and development options, visi
::: zone pivot="programming-language-rest-api" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end
That's it, congratulations! In this quickstart, you used Document Intelligence m
## Next steps
-* For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio).
+* For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
* The v3.0 Studio supports any model trained with v2.1 labeled data.
ai-services Try Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md
CORS should now be configured to use the storage account from Document Intellige
:::image border="true" type="content" source="../media/sas-tokens/upload-blob-window.png" alt-text="Screenshot of upload blob window in the Azure portal."::: > [!NOTE]
-> By default, the Studio will use form documents that are located at the root of your container. However, you can use data organized in folders by specifying the folder path in the Custom form project creation steps. *See* [**Organize your data in subfolders**](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional)
+> By default, the Studio will use documents that are located at the root of your container. However, you can use data organized in folders by specifying the folder path in the Custom form project creation steps. *See* [**Organize your data in subfolders**](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional)
## Custom models
ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-sample-label-tool.md
Title: "Quickstart: Label forms, train a model, and analyze forms using the Sample Labeling tool - Document Intelligence (formerly Form Recognizer)"
-description: In this quickstart, you'll learn to use the Document Intelligence Sample Labeling tool to manually label form documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs.
+description: In this quickstart, you'll learn to use the Document Intelligence Sample Labeling tool to manually label documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs.
ai-services Sdk Overview V3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-0.md
- Previously updated : 08/15/2023 Last updated : 08/17/2023 monikerRange: '>=doc-intel-3.0.0'
ai-services Sdk Overview V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-1.md
- Previously updated : 08/11/2023 Last updated : 08/17/2023 monikerRange: '>=doc-intel-3.0.0'
monikerRange: '>=doc-intel-3.0.0'
# Document Intelligence SDK v3.1 (GA)
-**The SDKs referenced in this article are supported by:** ![Document Intelligence checkmark](media/yes-icon.png) **Document Intelligence REST API version 2023-07-31 – v3.1 (GA)**.
+**The SDKs referenced in this article are supported by:** ![Document Intelligence checkmark](media/yes-icon.png) **Document Intelligence REST API version 2023-07-31–v3.1 (GA)**.
Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
Document Intelligence SDK supports the following languages and platforms:
| Language → Document Intelligence SDK version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Package| Supported API version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Platform support | |:-:|:-|:-| :-:| | [**.NET/C# → 4.1.0 → latest GA release </br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.1.0/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)|[&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[**Java → 4.1.0 → latest GA release</br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer) |[&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[**Java → 4.1.0 → latest GA release</br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0) |[&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
|[**JavaScript → 5.0.0 → latest GA release</br> (2023-08-08)**](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/5.0.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> &bullet; [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | |[**Python → 3.3.0 → latest GA release</br> (2023-08-08)**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)| [&bullet; 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> &bullet; [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/tutorial-azure-function.md
Next, you'll add your own code to the Python script to call the Document Intelli
The following code parses the returned Document Intelligence response, constructs a .csv file, and uploads it to the **output** container. > [!IMPORTANT]
- > You will likely need to edit this code to match the structure of your own form documents.
+ > You will likely need to edit this code to match the structure of your own documents.
```python # The code below extracts the json format into tabular data.
ai-services V3 1 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-1-migration-guide.md
monikerRange: '<=doc-intel-3.1.0'
## Migrating from v3.1 preview API version
-Preview APIs are periodically deprecated. If you're using a preview API version, update your application to target the GA API version. To migrate from the 2023-02-28-preview API version to the `2023-07-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview.md).
+Preview APIs are periodically deprecated. If you're using a preview API version, update your application to target the GA API version. To migrate from the 2023-02-28-preview API version to the `2023-07-31` (GA) API version using the SDK, update to the [current version of the language-specific SDK](sdk-overview-v3-1.md).
The `2023-07-31` (GA) API has a few updates and changes from the preview API version:
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md
Document Intelligence service is updated on an ongoing basis. Bookmark this page
## July 2023 > [!NOTE]
-> Form Recognizer is now Azure AI Document Intelligence!
+> Form Recognizer is now **Azure AI Document Intelligence**!
>
-> As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names _Cognitive Services_ and _Azure Applied AI_ continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services.
+> * There are no changes to pricing.
+> * The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs.
+> * There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * Some platforms are still awaiting the renaming update. All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service.
**Document Intelligence v3.1 (GA)**
The v3.1 API introduces new and updated capabilities:
* Document Intelligence SDK version `4.0.0 GA` release * **Document Intelligence SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**
- * For more information on Document Intelligence SDKs, see the [**SDK overview**](sdk-overview.md).
+ * For more information on Document Intelligence SDKs, see the [**SDK overview**](sdk-overview-v3-1.md).
* Update your applications using your programming language's **migration guide**.
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/network-isolation.md
This will establish a private endpoint connection between language resource and
Follow the steps below to restrict public access to question answering language resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal). After restricting access to an Azure AI services resource based on VNet, take the following steps to browse projects on Language Studio from your on-premises network or your local browser.-- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
+- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks).
- Grant access to your [local browser/machine](../../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules). - Add the public IP address of the machine under the **Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
ai-services Multi Service Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md
You can access Azure AI services through two different resources: A multi-servic
Azure AI services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md) that you create under your Azure subscription. After you create a resource, you can use the keys and endpoint generated to authenticate your applications.
+## Supported services with a multi-service resource
+
+The multi-service resource enables access to the following Azure AI services with a single key and endpoint. Use these links to find quickstart articles, samples, and more to start using your resource.
+
+| Service | Description |
+| | |
+| ![Content Moderator icon](./media/service-icons/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content |
+| ![Custom Vision icon](./media/service-icons/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition to fit your business |
+| ![Document Intelligence icon](./media/service-icons/document-intelligence.svg) [Document Intelligence](./document-intelligence/index.yml) | Turn documents into usable data at a fraction of the time and cost |
+| ![Face icon](./medi) | Detect and identify people and emotions in images |
+| ![Language icon](./media/service-icons/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities |
+| ![Speech icon](./media/service-icons/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition |
+| ![Translator icon](./media/service-icons/translator.svg) [Translator](./translator/index.yml) | Translate more than 100 languages and dialects |
+| ![Vision icon](./media/service-icons/vision.svg) [Vision](./computer-vision/index.yml) | Analyze content in images and videos |
+ ::: zone pivot="azportal" [!INCLUDE [Azure Portal quickstart](includes/quickstarts/management-azportal.md)]
Azure AI services are represented by Azure [resources](../azure-resource-manager
## Next steps
-* Now that you have a resource, you can authenticate your API requests to the following Azure AI services. Use these links to find quickstart articles, samples and more to start using your resource.
- * [Content Moderator](./content-moderator/index.yml) (retired)
- * [Custom Vision](./custom-vision-service/index.yml)
- * [Document Intelligence](./document-intelligence/index.yml)
- * [Face](./computer-vision/overview-identity.md)
- * [Language](./language-service/index.yml)
- * [Speech](./speech-service/index.yml)
- * [Translator](./translator/index.yml)
- * [Vision](./computer-vision/index.yml)
+* Now that you have a resource, you can authenticate your API requests to one of the [supported Azure AI services](#supported-services-with-a-multi-service-resource).
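As a hedged illustration of the single key and endpoint pattern, the following sketch calls the Translator REST API with a multi-service key by using Python and `requests`; the key and region values are placeholders, and each service's quickstart documents its own endpoint shape.

```python
# Minimal sketch: authenticate a Translator request with a multi-service key.
# The key and region values are placeholders.
import requests

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": "fr"},
    headers={
        "Ocp-Apim-Subscription-Key": "<your-multi-service-key>",
        # Multi-service keys are regional, so the region header is required.
        "Ocp-Apim-Subscription-Region": "<your-resource-region>",
        "Content-Type": "application/json",
    },
    json=[{"text": "Hello, world"}],
)
print(response.json())
```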
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available wit
Previously updated : 08/02/2023 Last updated : 08/18/2023
These models can only be used with the Chat Completion API.
| | | | | | | `gpt-4` <sup>1,</sup><sup>2</sup> (0314) | | N/A | 8,192 | September 2021 | | `gpt-4-32k` <sup>1,</sup><sup>2</sup> (0314) | | N/A | 32,768 | September 2021 |
-| `gpt-4` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, UK South | N/A | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, UK South | N/A | 32,768 | September 2021 |
+| `gpt-4` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Switzerland North, UK South | N/A | 8,192 | September 2021 |
+| `gpt-4-32k` <sup>1</sup><sup>3</sup> (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Switzerland North, UK South | N/A | 32,768 | September 2021 |
<sup>1</sup> The model is [only available by request](https://aka.ms/oai/get-gpt4).<br> <sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.<br>
GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | - | -- | - | | `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, UK South | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, UK South | N/A | 16,384 | Sep 2021 |
+| `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Switzerland North, UK South | N/A | 4,096 | Sep 2021 |
+| `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Switzerland North, UK South | N/A | 16,384 | Sep 2021 |
<sup>1</sup> Version `0301` of gpt-35-turbo will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.
These models can only be used with Embedding API requests.
| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | | | |
-| text-embedding-ada-002 (version 2) | Canada East, East US, France Central, Japan East, North Central US, South Central US, West Europe | N/A |8,191 | Sep 2021 |
+| text-embedding-ada-002 (version 2) | Canada East, East US, France Central, Japan East, North Central US, South Central US, Switzerland North, UK South, West Europe | N/A |8,191 | Sep 2021 |
| text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 | ### DALL-E models (Preview)
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Previously updated : 08/08/2023 Last updated : 08/17/2023 recommendations: false
Azure OpenAI on your data supports the following filetypes:
* Microsoft PowerPoint files * PDF
-There are some caveats about document structure and how it might affect the quality of responses from the model:
+There is an [upload limit](../quotas-limits.md), and there are some caveats about document structure and how it might affect the quality of responses from the model:
* The model provides the best citation titles from markdown (`.md`) files.
There are some caveats about document structure and how it might affect the qual
This will impact the quality of Azure Cognitive Search and the model response.
-## Virtual network support & private link support
+## Virtual network support & private network support
-Azure OpenAI on your data does not currently support private endpoints.
+If you have an Azure OpenAI resource protected by a private network, and you want to allow Azure OpenAI on your data to access your search service, complete [an application form](https://aka.ms/applyacsvpnaoaionyourdata). The application will be reviewed in five business days, and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request.
++
+Learn more about the [manual approval workflow](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow).
+
+After you approve the request in your search service, you can start using the [chat completions extensions API](/azure/ai-services/openai/reference#completions-extensions). Public network access can be disabled for that search service. Private network access for Azure OpenAI Studio is not currently supported.
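As a rough sketch of what that first call can look like, the following Python example posts to the extensions endpoint with Azure Cognitive Search as the data source. All endpoint, key, deployment, index, and API-version values are placeholders based on the 2023 preview API surface and may differ in your environment; see the [API reference](/azure/ai-services/openai/reference#completions-extensions) for the authoritative shape.

```python
# Hedged sketch of the chat completions extensions API; every value in
# angle brackets is a placeholder, and the API version is an assumption
# based on the 2023 preview surface.
import requests

endpoint = "https://<your-aoai-resource>.openai.azure.com"
deployment = "<your-chat-deployment>"

body = {
    "dataSources": [
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": "https://<your-search-service>.search.windows.net",
                "key": "<your-search-admin-key>",
                "indexName": "<your-index>",
            },
        }
    ],
    "messages": [{"role": "user", "content": "What does my data say about returns?"}],
}

response = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/extensions/chat/completions",
    params={"api-version": "2023-06-01-preview"},  # assumption: a preview version
    headers={"api-key": "<your-aoai-key>"},
    json=body,
)
print(response.json())
```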
+
+### Azure OpenAI resources in private networks
+
+You can protect an Azure OpenAI resource in [private networks](/azure/ai-services/cognitive-services-virtual-networks) the same way as any other Azure AI services resource.
+
+### Storage accounts in private networks
+
+Storage accounts in private networks are currently not supported by Azure OpenAI on your data.
## Azure Role-based access controls (Azure RBAC)
You can send a streaming request using the `stream` parameter, allowing data to
#### Conversation history for better results
-When chatting with a model, providing a history of the chat will help the model return higher quality results.
+When you chat with a model, providing a history of the chat will help the model return higher quality results.
```json {
ai-services Business Continuity Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/business-continuity-disaster-recovery.md
Previously updated : 6/21/2023 Last updated : 8/17/2023 recommendations: false
keywords:
# Business Continuity and Disaster Recovery (BCDR) considerations with Azure OpenAI Service
-Azure OpenAI is available in multiple regions. Since subscription keys are region bound, when a customer acquires a key, they select the region in which their deployments will reside and from then on, all operations stay associated with that Azure server region.
+Azure OpenAI is available in multiple regions. When you create an Azure OpenAI resource, you specify a region. From then on, your resource and all its operations stay associated with that Azure server region.
-It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two Azure OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications.
+It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail over into another region or split the workload between two or more regions. Both approaches require at least two Azure OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications.
-## Best practices
-
-Today customers will call the endpoint provided during deployment for both deployments and inference. These operations are stateless, so no data is lost in the case that a region becomes unavailable.
-
-If a region is non-operational customers must take steps to ensure service continuity.
+## BCDR requires custom code
-## Business continuity
+Today, customers call the endpoint provided during deployment for inferencing. Inferencing operations are stateless, so no data is lost if a region becomes unavailable.
-The following set of instructions applies both customers using default endpoints and those using custom endpoints.
+If a region is nonoperational, customers must take steps to ensure service continuity.
-### Default endpoint recovery
+## BCDR for base model & customized model
-If you're using a default endpoint, you should configure your client code to monitor errors, and if the errors persist, be prepared to redirect to another region of your choice where you have an Azure OpenAI subscription.
+If you're using the base models, you should configure your client code to monitor errors, and if the errors persist, be prepared to redirect to another region of your choice where you have an Azure OpenAI subscription.
Follow these steps to configure your client to monitor errors:
-1. Use the [models page](../concepts/models.md) to identify the list of available regions for Azure OpenAI.
+1. Use the [models](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability) page to choose the datacenters and regions that are right for you.
-2. Select a primary and one secondary/backup regions from the list.
+2. Select a primary and one (or more) secondary/backup regions from the list.
-3. Create Azure OpenAI resources for each region selected.
+3. Create an Azure OpenAI resource in each region you selected.
4. For the primary region and any backup regions your code will need to know:
- a. Base URI for the resource
-
- b. Regional access key or Azure Active Directory access
+ - Base URI for the resource
+ - Regional access key or Azure Active Directory access
-5. Configure your code so that you monitor connectivity errors (typically connection timeouts and service unavailability errors).
+5. Configure your code so that you monitor connectivity errors (typically connection timeouts and service unavailability errors).
- a. Given that networks yield transient errors, for single connectivity issue occurrences, the suggestion is to retry.
-
- b. For persistence redirect traffic to the backup resource in the region you've created.
-
-## BCDR requires custom code
+ - Given that networks yield transient errors, for single connectivity issue occurrences, the suggestion is to retry.
+    - For persistent connectivity issues, redirect traffic to the backup resource in the regions you've created, as shown in the sketch after this list.
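The following is a minimal failover sketch under these assumptions: two Azure OpenAI resources in different regions, a chat deployment with the same name in each, and placeholder endpoint, key, and API-version values. It retries once per region on transient failures, then redirects to the next region.

```python
# Hedged failover sketch; resource names, keys, deployment name, and the
# API version are placeholders, not a definitive implementation.
import requests

REGIONS = [
    {"base": "https://<primary-resource>.openai.azure.com", "key": "<primary-key>"},
    {"base": "https://<backup-resource>.openai.azure.com", "key": "<backup-key>"},
]
DEPLOYMENT = "<your-deployment>"

def chat(messages):
    for region in REGIONS:  # primary first, then backups
        for attempt in range(2):  # one retry per region for transient errors
            try:
                response = requests.post(
                    f"{region['base']}/openai/deployments/{DEPLOYMENT}/chat/completions",
                    params={"api-version": "2023-05-15"},
                    headers={"api-key": region["key"]},
                    json={"messages": messages},
                    timeout=30,
                )
                if response.status_code < 500:
                    return response.json()  # success or a non-retryable client error
            except requests.exceptions.RequestException:
                pass  # timeout or connection error: retry, then fail over
    raise RuntimeError("All configured regions are unavailable")

print(chat([{"role": "user", "content": "Hello"}]))
```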
-The recovery from regional failures for this usage type can be performed instantaneously and at a very low cost. This does however, require custom development of this functionality on the client side of your application.
+If you have fine-tuned a model in your primary region, you will need to retrain the base model in the secondary regions using the same training data, and then follow the preceding steps.
ai-services Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/completions.md
Title: 'How to generate text with Azure OpenAI Service'
-description: Learn how to generate or manipulate text, including code with Azure OpenAI
+description: Learn how to generate or manipulate text, including code by using a completion endpoint in Azure OpenAI Service.
Previously updated : 06/24/2022 Last updated : 08/15/2023 recommendations: false
keywords:
# Learn how to generate or manipulate text
-The completions endpoint can be used for a wide variety of tasks. It provides a simple but powerful text-in, text-out interface to any of our [models](../concepts/models.md). You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, "As Descartes said, I think, therefore", it will return the completion " I am" with high probability.
+Azure OpenAI Service provides a **completion endpoint** that can be used for a wide variety of tasks. The endpoint supplies a simple yet powerful text-in, text-out interface to any [Azure OpenAI model](../concepts/models.md). To trigger the completion, you input some text as a prompt. The model generates the completion and attempts to match your context or pattern. Suppose you provide the prompt "As Descartes said, I think, therefore" to the API. For this prompt, Azure OpenAI returns the completion " I am" with high probability.
-The best way to start exploring completions is through our playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you can submit a prompt to generate a completion. You can start with a simple example like the following:
+The best way to start exploring completions is through the playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you enter a prompt to generate a completion. You can start with a simple prompt like this one:
-`write a tagline for an ice cream shop`
+```console
+write a tagline for an ice cream shop
+```
-once you submit, you'll see something like the following generated:
+After you enter your prompt, Azure OpenAI displays the completion:
-``` console
-write a tagline for an ice cream shop
+```console
we serve up smiles with every scoop! ```
-The actual completion results you see may differ because the API is stochastic by default. In other words, you might get a slightly different completion every time you call it, even if your prompt stays the same. You can control this behavior with the temperature setting.
+The completion results that you see can differ because the Azure OpenAI API produces fresh output for each interaction. You might get a slightly different completion each time you call the API, even if your prompt stays the same. You can control this behavior with the `Temperature` setting.
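For reference, a completion like the one above can also be requested outside the playground. The following is a minimal sketch that calls the REST endpoint with Python and `requests`; the resource endpoint, deployment name, and key are placeholders.

```python
# Minimal completion sketch; endpoint, deployment, and key are placeholders.
import requests

endpoint = "https://<your-resource>.openai.azure.com"
deployment = "<your-completions-deployment>"

response = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/completions",
    params={"api-version": "2023-05-15"},
    headers={"api-key": "<your-key>"},
    json={
        "prompt": "write a tagline for an ice cream shop",
        "max_tokens": 20,
        "temperature": 0,  # lower values make repeated calls more consistent
    },
)
print(response.json()["choices"][0]["text"])
```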
-This simple, "text in, text out" interface means you can "program" the model by providing instructions or just a few examples of what you'd like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a middle school student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
+The simple text-in, text-out interface means you can "program" the Azure OpenAI model by providing instructions or just a few examples of what you'd like it to do. Success generally depends on the complexity of the task and the quality of your prompt. A general rule is to think about how you would write a word problem for a pre-teenage student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
> [!NOTE]
-> Keep in mind that the models' training data cuts off in October 2019, so they may not have knowledge of current events. We plan to add more continuous training in the future.
+> The model training data can be different for each model type. The [latest model's training data currently extends through September 2021 only](/azure/ai-services/openai/concepts/models). Depending on your prompt, the model might not have knowledge of related current events.
-## Prompt design
+## Design prompts
-### Basics
+Azure OpenAI Service models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you must be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt.
-OpenAI's models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you have to be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt.
+The models try to predict what you want from the prompt. If you enter the prompt "Give me a list of cat breeds," the model doesn't automatically assume you're asking for a list only. You might be starting a conversation where your first words are "Give me a list of cat breeds" followed by "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks.
-The models try to predict what you want from the prompt. If you send the words "Give me a list of cat breeds," the model wouldn't automatically assume that you're asking for a list of cat breeds. You could as easily be asking the model to continue a conversation where the first words are "Give me a list of cat breeds" and the next ones are "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks.
+### Guidelines for creating robust prompts
-There are three basic guidelines to creating prompts:
+There are three basic guidelines for creating useful prompts:
-**Show and tell.** Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that's what you want.
+- **Show and tell**. Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, include these details in your prompt to show the model.
-**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. Be sure to proofread your examples ΓÇö the model is usually smart enough to see through basic spelling mistakes and give you a response, but it also might assume that the mistakes are intentional and it can affect the response.
+- **Provide quality data**. If you're trying to build a classifier or get the model to follow a pattern, make sure there are enough examples. Be sure to proofread your examples. The model is smart enough to resolve basic spelling mistakes and give you a meaningful response. Conversely, the model might assume the mistakes are intentional, which can affect the response.
-**Check your settings.** The temperature and top_p settings control how deterministic the model is in generating a response. If you're asking it for a response where there's only one right answer, then you'd want to set these settings to lower values. If you're looking for a response that's not obvious, then you might want to set them to higher values. The number one mistake people use with these settings is assuming that they're "cleverness" or "creativity" controls.
+- **Check your settings**. Probability settings, such as `Temperature` and `Top P`, control how deterministic the model is in generating a response. If you're asking for a response where there's only one right answer, you should specify lower values for these settings. If you're looking for a response that's not obvious, you might want to use higher values. The most common mistake users make with these settings is assuming they control "cleverness" or "creativity" in the model response.
-### Troubleshooting
+### Troubleshooting for prompt issues
-If you're having trouble getting the API to perform as expected, follow this checklist:
+If you're having trouble getting the API to perform as expected, review the following points for your implementation:
-1. Is it clear what the intended generation should be?
-2. Are there enough examples?
-3. Did you check your examples for mistakes? (The API won't tell you directly)
-4. Are you using temp and top_p correctly?
+- Is it clear what the intended generation should be?
+- Are there enough examples?
+- Did you check your examples for mistakes? (The API doesn't tell you directly.)
+- Are you using the `Temperature` and `Top P` probability settings correctly?
-## Classification
+## Classify text
-To create a text classifier with the API we provide a description of the task and provide a few examples. In this demonstration we show the API how to classify the sentiment of Tweets.
+To create a text classifier with the API, you provide a description of the task and a few examples. In this demonstration, you show the API how to classify the _sentiment_ of text messages. The sentiment expresses the overall feeling or expression in the text.
```console
-This is a tweet sentiment classifier
+This is a text message sentiment classifier
-Tweet: "I loved the new Batman movie!"
+Message: "I loved the new adventure movie!"
Sentiment: Positive
-Tweet: "I hate it when my phone battery dies."
+Message: "I hate it when my phone battery dies."
Sentiment: Negative
-Tweet: "My day has been 👍"
+Message: "My day has been 👍"
Sentiment: Positive
-Tweet: "This is the link to the article"
+Message: "This is the link to the article"
Sentiment: Neutral
-Tweet: "This new music video blew my mind"
+Message: "This new music video is unreal"
Sentiment: ```
-It's worth paying attention to several features in this example:
+### Guidelines for designing text classifiers
-**1. Use plain language to describe your inputs and outputs**
-We use plain language for the input "Tweet" and the expected output "Sentiment." For best practices, start with plain language descriptions. While you can often use shorthand or keys to indicate the input and output, when building your prompt it's best to start by being as descriptive as possible and then working backwards removing extra words as long as the performance to the prompt is consistent.
+This demonstration reveals several guidelines for designing classifiers:
-**2. Show the API how to respond to any case**
-In this example we provide multiple outcomes "Positive", "Negative" and "Neutral." A neutral outcome is important because there will be many cases where even a human would have a hard time determining if something is positive or negative and situations where it's neither.
+- **Use plain language to describe your inputs and outputs**. Use plain language for the input "Message" and the expected value that expresses the "Sentiment." For best practices, start with plain language descriptions. You can often use shorthand or keys to indicate the input and output when building your prompt, but it's best to start by being as descriptive as possible. Then you can work backwards and remove extra words as long as the performance to the prompt is consistent.
-**3. You can use text and emoji**
-The classifier is a mix of text and emoji 👍. The API reads emoji and can even convert expressions to and from them.
+- **Show the API how to respond to any case**. The demonstration provides multiple outcomes: "Positive," "Negative," and "Neutral." Supporting a neutral outcome is important because there are many cases where even a human can have difficulty determining if something is positive or negative.
-**4. You need fewer examples for familiar tasks**
-For this classifier we only provided a handful of examples. This is because the API already has an understanding of sentiment and the concept of a tweet. If you're building a classifier for something the API might not be familiar with, it might be necessary to provide more examples.
+- **Use emoji and text, per the common expression**. The demonstration shows that the classifier can be a mix of text and emoji 👍. The API reads emoji and can even convert expressions to and from them. For the best response, use common forms of expression for your examples.
-### Improving the classifier's efficiency
+- **Use fewer examples for familiar tasks**. This classifier provides only a handful of examples because the API already has an understanding of sentiment and the concept of a text message. If you're building a classifier for something the API might not be familiar with, it might be necessary to provide more examples.
-Now that we have a grasp of how to build a classifier, let's take that example and make it even more efficient so that we can use it to get multiple results back from one API call.
+### Multiple results from a single API call
-```
-This is a tweet sentiment classifier
+Now that you understand how to build a classifier, let's expand on the first demonstration to make it more efficient. You want to be able to use the classifier to get multiple results back from a single API call.
-Tweet: "I loved the new Batman movie!"
+```console
+This is a text message sentiment classifier
+
+Message: "I loved the new adventure movie!"
Sentiment: Positive
-Tweet: "I hate it when my phone battery dies"
+Message: "I hate it when my phone battery dies"
Sentiment: Negative
-Tweet: "My day has been 👍"
+Message: "My day has been 👍"
Sentiment: Positive
-Tweet: "This is the link to the article"
+Message: "This is the link to the article"
Sentiment: Neutral
-Tweet text
-1. "I loved the new Batman movie!"
+Message text
+1. "I loved the new adventure movie!"
2. "I hate it when my phone battery dies" 3. "My day has been 👍" 4. "This is the link to the article"
-5. "This new music video blew my mind"
+5. "This new music video is unreal"
-Tweet sentiment ratings:
+Message sentiment ratings:
1: Positive 2: Negative 3: Positive 4: Neutral 5: Positive
-Tweet text
-1. "I can't stand homework"
-2. "This sucks. I'm bored 😠"
-3. "I can't wait for Halloween!!!"
+Message text
+1. "He doesn't like homework"
+2. "The taxi is late. She's angry 😠"
+3. "I can't wait for the weekend!!!"
4. "My cat is adorable ❤️❤️"
-5. "I hate chocolate"
+5. "Let's try chocolate bananas"
-Tweet sentiment ratings:
+Message sentiment ratings:
1. ```
-After showing the API how tweets are classified by sentiment we then provide it a list of tweets and then a list of sentiment ratings with the same number index. The API is able to pick up from the first example how a tweet is supposed to be classified. In the second example it sees how to apply this to a list of tweets. This allows the API to rate five (and even more) tweets in just one API call.
-
-It's important to note that when you ask the API to create lists or evaluate text you need to pay extra attention to your probability settings (Top P or Temperature) to avoid drift.
-
-1. Make sure your probability setting is calibrated correctly by running multiple tests.
+This demonstration shows the API how to classify text messages by sentiment. You provide a numbered list of messages and a list of sentiment ratings with the same number index. The API uses the information in the first demonstration to learn how to classify sentiment for a single text message. In the second demonstration, the model learns how to apply the sentiment classification to a list of text messages. This approach allows the API to rate five (and even more) text messages in a single API call.
-2. Don't make your list too long or the API is likely to drift.
+> [!IMPORTANT]
+> When you ask the API to create lists or evaluate text, it's important to help the API avoid drift. Here are some points to follow:
+>
+> - Pay careful attention to your values for the `Top P` or `Temperature` probability settings.
+> - Run multiple tests to make sure your probability settings are calibrated correctly.
+> - Don't use long lists. Long lists can lead to drift.
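To show how the batched prompt maps to code, here's a minimal sketch that submits a shortened version of the classifier prompt and reads back one rating per numbered line; the endpoint, deployment, and key values are placeholders.

```python
# Hedged batch-classification sketch; endpoint, deployment, and key are
# placeholders, and the prompt is a shortened version of the example above.
import requests

prompt = """This is a text message sentiment classifier

Message text
1. "I loved the new adventure movie!"
2. "I hate it when my phone battery dies"

Message sentiment ratings:
1."""

response = requests.post(
    "https://<your-resource>.openai.azure.com/openai/deployments/<deployment>/completions",
    params={"api-version": "2023-05-15"},
    headers={"api-key": "<your-key>"},
    json={"prompt": prompt, "max_tokens": 40, "temperature": 0},
)

completion = response.json()["choices"][0]["text"]
# The completion continues the numbered list, one rating per line.
for line in completion.strip().splitlines():
    print(line.strip())
```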
-
+## Trigger ideas
-## Generation
+One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. Suppose you're writing a mystery novel and you need some story ideas. You can give the API a list of a few ideas and it tries to add more ideas to your list. The API can create business plans, character descriptions, marketing slogans, and much more from just a small handful of examples.
-One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. You can give the API a list of a few story ideas and it will try to add to that list. We've seen it create business plans, character descriptions and marketing slogans just by providing it a handful of examples. In this demonstration we'll use the API to create more examples for how to use virtual reality in the classroom:
+In the next demonstration, you use the API to create more examples for how to use virtual reality in the classroom:
-```
+```console
Ideas involving education and virtual reality

1. Virtual Mars
Students get to explore Mars via virtual reality and go on missions to collect a
2.
```
-All we had to do in this example is provide the API with just a description of what the list is about and one example. We then prompted the API with the number `2.` indicating that it's a continuation of the list.
+This demonstration provides the API with a basic description for your list along with one list item. Then you use an incomplete prompt of "2." to trigger a response from the API. The API interprets the incomplete entry as a request to generate similar items and add them to your list.
-Although this is a very simple prompt, there are several details worth noting:
+### Guidelines for triggering ideas
-**1. We explained the intent of the list**<br>
-Just like with the classifier, we tell the API up front what the list is about. This helps it focus on completing the list and not trying to guess what the pattern is behind it.
+Although this demonstration uses a simple prompt, it highlights several guidelines for triggering new ideas:
-**2. Our example sets the pattern for the rest of the list**<br>
-Because we provided a one-sentence description, the API is going to try to follow that pattern for the rest of the items it adds to the list. If we want a more verbose response, we need to set that up from the start.
+- **Explain the intent of the list**. Similar to the demonstration for the text classifier, you start by telling the API what the list is about. This approach helps the API to focus on completing the list rather than trying to determine patterns by analyzing the text.
-**3. We prompt the API by adding an incomplete entry**<br>
-When the API sees `2.` and the prompt abruptly ends, the first thing it tries to do is figure out what should come after it. Since we already had an example with number one and gave the list a title, the most obvious response is to continue adding items to the list.
+- **Set the pattern for the items in the list**. When you provide a one-sentence description, the API tries to follow that pattern when generating new items for the list. If you want a more verbose response, you need to establish that intent with more detailed text input to the API.
-**Advanced generation techniques**<br>
-You can improve the quality of the responses by making a longer more diverse list in your prompt. One way to do that is to start off with one example, let the API generate more and select the ones that you like best and add them to the list. A few more high-quality variations can dramatically improve the quality of the responses.
+- **Prompt the API with an incomplete entry to trigger new ideas**. When the API encounters text that seems incomplete, such as the prompt text "2.," it first tries to determine any text that might complete the entry. Because the demonstration had a list title and an example with the number "1." and accompanying text, the API interpreted the incomplete prompt text "2." as a request to continue adding items to the list.
-
+- **Explore advanced generation techniques**. You can improve the quality of the responses by making a longer, more diverse list in your prompt. One approach is to start with one example, let the API generate more examples, and then select the examples you like best and add them to the list. A few more high-quality variations can dramatically improve the quality of the responses.
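Here's a minimal sketch of that loop, again using the `openai` Python package with placeholder connection details; the list title and seed idea are shortened from the demonstration above:

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_API_KEY"

# Start with the list title, one seed idea, and the incomplete "2." trigger.
prompt = (
    "Ideas involving education and virtual reality\n"
    "1. Virtual Mars\n"
    "Students get to explore Mars via virtual reality and go on missions.\n"
    "2."
)

response = openai.Completion.create(
    engine="YOUR_DEPLOYMENT_NAME",
    prompt=prompt,
    temperature=0.8,  # a little more randomness produces more varied ideas
    max_tokens=150,
)

# Review the output, keep the ideas you like, and append them to the prompt
# before the next call to steadily improve the quality of later responses.
print("2." + response["choices"][0]["text"])
```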
-## Conversation
+## Conduct conversations
-The API is extremely adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, we've seen the API perform as a customer service chatbot that intelligently answers questions without ever getting flustered or a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples.
+Starting with the release of [GPT-35-Turbo and GPT-4](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions), we recommend that you create conversational generation and chatbots by using models that support the _chat completion endpoint_. The chat completion models and endpoint require a different input structure than the completion endpoint.
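As a point of comparison, here's a minimal sketch of the chat completion input structure, assuming a GPT-35-Turbo deployment and the same `openai` 0.x Python package (the deployment name and API version are placeholders). The behavior instructions move into a structured system message instead of free-form prompt text:

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    engine="YOUR_GPT35_TURBO_DEPLOYMENT",
    messages=[
        # The system message replaces the free-form behavior instructions
        # used in the completion-style prompts shown below.
        {"role": "system",
         "content": "You are an AI assistant that is helpful, creative, clever, and very friendly."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```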
-Here's an example of the API playing the role of an AI answering questions:
+The API is adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, the API can perform as a customer service chatbot that intelligently answers questions without getting flustered, or a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples.
-```
+In this demonstration, the API supplies the role of an AI answering questions:
+
+```console
The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.

Human: Hello, who are you?
AI: I am an AI created by OpenAI. How can I help you today?
Human:
```
-This is all it takes to create a chatbot capable of carrying on a conversation. But underneath its simplicity there are several things going on that are worth paying attention to:
-
-**1. We tell the API the intent but we also tell it how to behave**
-Just like the other prompts, we cue the API into what the example represents, but we also add another key detail: we give it explicit instructions on how to interact with the phrase "The assistant is helpful, creative, clever, and very friendly."
-
-Without that instruction the API might stray and mimic the human it's interacting with and become sarcastic or some other behavior we want to avoid.
-
-**2. We give the API an identity**
-At the start we have the API respond as an AI that was created by OpenAI. While the API has no intrinsic identity, this helps it respond in a way that's as close to the truth as possible. You can use identity in other ways to create other kinds of chatbots. If you tell the API to respond as a woman who works as a research scientist in biology, you'll get intelligent and thoughtful comments from the API similar to what you'd expect from someone with that background.
+Let's look at a variation for a chatbot named "Cramer," an amusing and somewhat helpful virtual assistant. To help the API understand the character of the role, you provide a few examples of questions and answers. All it takes is a few sarcastic responses, and the API picks up the pattern and provides an endless number of similar responses.
-In this example we create a chatbot that is a bit sarcastic and reluctantly answers questions:
-
-```
-Marv is a chatbot that reluctantly answers questions.
+```console
+Cramer is a chatbot that reluctantly answers questions.
### User: How many pounds are in a kilogram?
-Marv: This again? There are 2.2 pounds in a kilogram. Please make a note of this.
+Cramer: This again? There are 2.2 pounds in a kilogram. Please make a note of this.
### User: What does HTML stand for?
-Marv: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future.
+Cramer: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future.
### User: When did the first airplane fly?
-Marv: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they'd come and take me away.
+Cramer: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they'd come and take me away.
### User: Who was the first man in space?
-Marv:
+Cramer:
```
-To create an amusing and somewhat helpful chatbot we provide a few examples of questions and answers showing the API how to reply. All it takes is just a few sarcastic responses and the API is able to pick up the pattern and provide an endless number of snarky responses.
+### Guidelines for designing conversations
-
+Our demonstrations show how easily you can create a chatbot that's capable of carrying on a conversation. Although it looks simple, this approach follows several important guidelines:
-## Transformation
+- **Define the intent of the conversation**. Just like the other prompts, you describe the intent of the interaction to the API. In this case, "a conversation." This input prepares the API to process subsequent input according to the initial intent.
-The API is a language model that is familiar with a variety of ways that words and characters can be used to express information. This ranges from natural language text to code and languages other than English. The API is also able to understand content on a level that allows it to summarize, convert and express it in different ways.
+- **Tell the API how to behave**. A key detail in this demonstration is the explicit instructions for how the API should interact: "The assistant is helpful, creative, clever, and very friendly." Without your explicit instructions, the API might stray and mimic the human it's interacting with. The API might become unfriendly or exhibit other undesirable behavior.
-### Translation
+- **Give the API an identity**. At the start, you have the API respond as an AI created by OpenAI. While the API has no intrinsic identity, the character description helps the API respond in a way that's as close to the truth as possible. You can use character identity descriptions in other ways to create different kinds of chatbots. If you tell the API to respond as a research scientist in biology, you receive intelligent and thoughtful comments from the API similar to what you'd expect from someone with that background.
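When you send a multi-turn prompt like the Cramer example to the completion endpoint, the `###` separator also makes a convenient stop sequence. A minimal sketch, with placeholder connection details and a trimmed version of the prompt:

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_API_KEY"

prompt = """Cramer is a chatbot that reluctantly answers questions.
###
User: How many pounds are in a kilogram?
Cramer: This again? There are 2.2 pounds in a kilogram. Please make a note of this.
###
User: Who was the first man in space?
Cramer:"""

response = openai.Completion.create(
    engine="YOUR_DEPLOYMENT_NAME",
    prompt=prompt,
    temperature=0.5,
    max_tokens=60,
    stop=["###"],  # stop at the separator so the API answers one turn only
)
print(response["choices"][0]["text"].strip())
```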
-In this example we show the API how to convert from English to French:
+## Transform text
-```
+The API is a language model that's familiar with various ways that words and characters can be used to express information. The knowledge data supports transforming text from natural language into code, and translating between other languages and English. The API is also able to understand content on a level that allows it to summarize, convert, and express it in different ways. Let's look at a few examples.
+
+### Translate from one language to another
+
+This demonstration instructs the API on how to convert English language phrases into French:
+
+```console
English: I do not speak French.
French: Je ne parle pas français.
English: See you later!
French: À tout à l'heure!
English: What rooms do you have available?
French: Quelles chambres avez-vous de disponible?
English:
```
-This example works because the API already has a grasp of French, so there's no need to try to teach it this language. Instead, we just need to provide enough examples that API understands that it's converting from one language to another.
+This example works because the API already has a grasp of the French language. You don't need to try to teach the language to the API. You just need to provide enough examples to help the API understand your request to convert from one language to another.
-If you want to translate from English to a language the API is unfamiliar with you'd need to provide it with more examples and a fine-tuned model to do it fluently.
+If you want to translate from English to a language the API doesn't recognize, you need to provide the API with more examples and a fine-tuned model that can produce fluent translations.
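Here's a minimal sketch of the translation call; the connection details are placeholders, and the `English:` stop sequence keeps the API from continuing the pattern with an invented next line:

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_API_KEY"

prompt = """English: I do not speak French.
French: Je ne parle pas français.
English: See you later!
French: À tout à l'heure!
English: Where is the train station?
French:"""

response = openai.Completion.create(
    engine="YOUR_DEPLOYMENT_NAME",
    prompt=prompt,
    temperature=0,
    max_tokens=60,
    stop=["English:"],  # don't let the API start the next English line
)
print(response["choices"][0]["text"].strip())
```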
-### Conversion
+### Convert between text and emoji
-In this example we convert the name of a movie into emoji. This shows the adaptability of the API to picking up patterns and working with other characters.
+This demonstration converts the name of a movie from text into emoji characters. This example shows the adaptability of the API to pick up patterns and work with other characters.
-```
-Back to Future: 👨👴🚗🕒
-Batman: 🤵🦇
-Transformers: 🚗🤖
-Wonder Woman: 👸🏻👸🏼👸🏽👸🏾👸🏿
-Spider-Man: 🕸🕷🕸🕸🕷🕸
-Winnie the Pooh: 🐻🐼🐻
-The Godfather: 👨👩👧🕵🏻‍♂️👲💥
-Game of Thrones: 🏹🗡🗡🏹
-Spider-Man:
+```console
+Carpool Time: 👨👴👩🚗🕒
+Robots in Cars: 🚗🤖
+Super Femme: 👸🏻👸🏼👸🏽👸🏾👸🏿
+Webs of the Spider: 🕸🕷🕸🕸🕷🕸
+The Three Bears: 🐻🐼🐻
+Mobster Family: 👨👩👧🕵🏻‍♂️👲💥
+Arrows and Swords: 🏹🗡🗡🏹
+Snowmobiles:
```
-## Summarization
+### Summarize text
-The API is able to grasp the context of text and rephrase it in different ways. In this example, the API takes a block of text and creates an explanation a child would understand. This illustrates that the API has a deep grasp of language.
+The API can grasp the context of text and rephrase it in different ways. In this demonstration, the API takes a block of text and creates an explanation that's understandable by a primary-age child. This example illustrates that the API has a deep grasp of language.
-```
+```console
My ten-year-old asked me what this passage means:
"""
A neutron star is the collapsed core of a massive supergiant star, which had a total mass of between 10 and 25 solar masses, possibly more if the star was especially metal-rich.[1] Neutron stars are the smallest and densest stellar objects, excluding black holes and hypothetical white holes, quark stars, and strange stars.[2] Neutron stars have a radius on the order of 10 kilometres (6.2 mi) and a mass of about 1.4 solar masses.[3] They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei.
"""
I rephrased it for him, in plain language a ten-year-old can understand:
"""
```
-In this example we place whatever we want summarized between the triple quotes. It's worth noting that we explain both before and after the text to be summarized what our intent is and who the target audience is for the summary. This is to keep the API from drifting after it processes a large block of text.
+### Guidelines for producing text summaries
-## Completion
+Text summarization often involves supplying large amounts of text to the API. To help prevent the API from drifting after it processes a large block of text, follow these guidelines:
-While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off. For example, if given this prompt, the API will continue the train of thought about vertical farming. You can lower the temperature setting to keep the API more focused on the intent of the prompt or increase it to let it go off on a tangent.
+- **Enclose the text to summarize within triple double quotes**. In this example, you enter three double quotes (""") on a separate line before and after the block of text to summarize. This formatting style clearly defines the start and end of the large block of text to process.
-```
+- **Explain the summary intent and target audience before and after the summary**. Notice that this example differs from the others because you provide instructions to the API twice: before and after the text to process. The redundant instructions help the API focus on your intended task and avoid drift.
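In code, the triple-quote delimiters are just part of the prompt string. A minimal sketch of the assembly, with placeholder connection details and a stand-in passage:

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_API_KEY"

passage = "A neutron star is the collapsed core of a massive supergiant star..."

# State the intent and audience both before and after the quoted block,
# following the guidelines above.
prompt = (
    "My ten-year-old asked me what this passage means:\n"
    '"""\n'
    f"{passage}\n"
    '"""\n'
    "I rephrased it for him, in plain language a ten-year-old can understand:\n"
    '"""\n'
)

response = openai.Completion.create(
    engine="YOUR_DEPLOYMENT_NAME",
    prompt=prompt,
    temperature=0.2,
    max_tokens=200,
    stop=['"""'],  # end the completion at the closing delimiter
)
print(response["choices"][0]["text"].strip())
```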
+
+## Complete partial text and code inputs
+
+While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off.
+
+In this demonstration, you supply a text prompt to the API that appears to be incomplete. You stop the text entry on the word "and." The API interprets the incomplete text as a trigger to continue your train of thought.
+
+```console
Vertical farming provides a novel solution for producing food locally, reducing transportation costs and
```
-This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/legacy-models.md#codex-models) section in [Models](../concepts/models.md).
+This next demonstration shows how you can use the completion feature to help write `React` code components. You begin by sending some code to the API. You stop the code entry with an open parenthesis `(`. The API interprets the incomplete code as a trigger to complete the `HeaderComponent` constant definition. The API can complete this code definition because it has an understanding of the corresponding `React` library.
-```
+```javascript
import React from 'react';

const HeaderComponent = () => (
```
-
+### Guidelines for generating completions
-## Factual responses
+Here are some helpful guidelines for using the API to generate text and code completions:
-The API has a lot of knowledge that it's learned from the data it was trained on. It also has the ability to provide responses that sound very real but are in fact made up. There are two ways to limit the likelihood of the API making up an answer.
+- **Lower the Temperature to keep the API focused**. Set lower values for the `Temperature` setting to instruct the API to provide responses that are focused on the intent described in your prompt.
-**1. Provide a ground truth for the API**
-If you provide the API with a body of text to answer questions about (like a Wikipedia entry) it will be less likely to confabulate a response.
+- **Raise the Temperature to let the API go off on a tangent**. Set higher values for the `Temperature` setting to allow the API to respond in a manner that's tangential to the intent described in your prompt.
-**2. Use a low probability and show the API how to say "I don't know"**
-If the API understands that in cases where it's less certain about a response that saying "I don't know" or some variation is appropriate, it will be less inclined to make up answers.
+- **Use the GPT-35-Turbo and GPT-4 Azure OpenAI models**. For tasks that involve understanding or generating code, Microsoft recommends using the `GPT-35-Turbo` and `GPT-4` Azure OpenAI models. These models use the new [chat completions format](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions).
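Following the last guideline, here's a minimal sketch that asks a chat model to finish the `React` snippet from the earlier demonstration. The deployment name, system message, and API version are placeholder assumptions, not a fixed recipe:

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_API_KEY"

# The incomplete code from the demonstration, ending at the open parenthesis.
partial_code = "import React from 'react';\nconst HeaderComponent = () => ("

response = openai.ChatCompletion.create(
    engine="YOUR_GPT35_TURBO_DEPLOYMENT",
    messages=[
        {"role": "system", "content": "Complete the user's code. Reply with code only."},
        {"role": "user", "content": partial_code},
    ],
    temperature=0,  # keep the completion focused on the prompt's intent
)
print(response["choices"][0]["message"]["content"])
```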
+
+## Generate factual responses
-In this example we give the API examples of questions and answers it knows and then examples of things it wouldn't know and provide question marks. We also set the probability to zero so the API is more likely to respond with a "?" if there's any doubt.
+The API has learned knowledge that's built on actual data reviewed during its training. It uses this learned data to form its responses. However, the API also has the ability to respond in a way that sounds true but is, in fact, fabricated.
-```
+There are a few ways you can limit the likelihood of the API making up an answer in response to your input. You can define the foundation for a true and factual response, so the API drafts its response from your data. You can also set a low `Temperature` probability value and show the API how to respond when the data isn't available for a factual answer.
+
+The following demonstration shows how to teach the API to reply in a more factual manner. You provide the API with examples of questions and answers it understands. You also supply examples of questions ("Q") it might not recognize and use a question mark for the answer ("A") output. This approach teaches the API how to respond to questions it can't answer factually.
+
+As a safeguard, you set the `Temperature` probability to zero so the API is more likely to respond with a question mark (?) if there's any doubt about the true and factual response.
+
+```console
Q: Who is Batman?
A: Batman is a fictional comic book character.
Q: What is Devz9?
A: ?
Q: Who is George Lucas?
-A: George Lucas is American film director and producer famous for creating Star Wars.
+A: George Lucas is an American film director and producer famous for creating Star Wars.
Q: What is the capital of California?
A: Sacramento.
Q: What orbits the Earth?
A: The Moon.
-Q: Who is Fred Rickerson?
+Q: Who is Egad Debunk?
A: ?
Q: What is an atom?
A: An atom is a tiny particle that makes up everything.
Q: How many moons does Mars have?
A: Two, Phobos and Deimos.
Q:
```
-## Working with code
+
+### Guidelines for generating factual responses
+
+Let's review the guidelines to help limit the likelihood of the API making up an answer:
+
+- **Provide a ground truth for the API**. Instruct the API about what to use as the foundation for creating a true and factual response based on your intent. If you provide the API with a body of text to use to answer questions (like a Wikipedia entry), the API is less likely to fabricate a response.
+
+- **Use a low probability**. Set a low `Temperature` probability value so the API stays focused on your intent and doesn't drift into creating a fabricated or confabulated response.
+
+- **Show the API how to respond with "I don't know"**. You can enter example questions and answers that teach the API to use a specific response for questions for which it can't find a factual answer. In the example, you teach the API to respond with a question mark (?) when it can't find the corresponding data. This approach also helps the API to learn when responding with "I don't know" is more "correct" than making up an answer.
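Putting these guidelines together, here's a minimal sketch of the factual Q&A call with a trimmed version of the demonstration prompt; the connection details are placeholders:

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_API_KEY"

prompt = """Q: Who is Batman?
A: Batman is a fictional comic book character.
Q: What is Devz9?
A: ?
Q: What orbits the Earth?
A: The Moon.
Q: Who is Egad Debunk?
A:"""

response = openai.Completion.create(
    engine="YOUR_DEPLOYMENT_NAME",
    prompt=prompt,
    temperature=0,  # zero probability setting, per the guidelines above
    max_tokens=60,
    stop=["Q:"],  # answer one question per call
)
print(response["choices"][0]["text"].strip())  # expected output: ?
```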
+
+## Work with code
The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
-Learn more about generating code completions, with the [working with code guide](./work-with-code.md)
+For more information about generating code completions, see [Codex models and Azure OpenAI Service](./work-with-code.md).
## Next steps
-Learn [how to work with code (Codex)](./work-with-code.md).
-Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
+- Learn how to work with the [GPT-35-Turbo and GPT-4 models](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions).
+- Learn more about the [Azure OpenAI Service models](../concepts/models.md).
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
if response_message.get("function_call"):
messages.append( # adding assistant response to messages
    {
        "role": response_message["role"],
- "name": response_message["function_call"]["name"],
- "content": response_message["function_call"]["arguments"],
+ "function_call": {
+ "name": response_message["function_call"]["name"],
+ "arguments": response_message["function_call"]["arguments"],
+ },
+ "content": None
    }
)
messages.append( # adding function response to messages
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the default quotas and
| Total size of all files per resource | 1 GB |
| Max training job time (job will fail if exceeded) | 720 hours |
| Max training job size (tokens in training file) x (# of epochs) | 2 Billion |
+| Max size of all files per upload (Azure OpenAI on your data) | 16 MB |
+ <sup>1</sup> Default quota limits are subject to change.
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview \
-H "Content-Type: application/json" \
-H "api-key: YOUR_API_KEY" \
--H "chatgpt_url: YOUR_RESOURCE_URL" \
--H "chatgpt_key: YOUR_API_KEY" \
-d \
' {
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
keywords:
## August 2023

- You can now deploy Azure OpenAI on your data to [Power Virtual Agents](/azure/ai-services/openai/concepts/use-your-data#deploying-the-model).
+- [Azure OpenAI on your data](./concepts/use-your-data.md#virtual-network-support--private-network-support) now supports private endpoints.
## July 2023
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/network-isolation.md
The Cognitive Search instance can be isolated via a private endpoint after the Q
Follow the steps below to restrict public access to QnA Maker resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal).

After restricting access to the Azure AI service resource based on VNet, to browse knowledgebases on the https://qnamaker.ai portal from your on-premises network or your local browser:

-- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
+- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks).
- Grant access to your [local browser/machine](../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules).
- Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Batch transcription requests for expired models will fail with a 4xx error. You'
The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted.
-You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). The Storage account resource of the destination container must allow all external traffic.
+You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note, however, that this option uses only an [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). This option also doesn't support access policy-based SAS. The Storage account resource of the destination container must allow all external traffic.
If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). See details on how to use BYOS-enabled Speech resource for Batch transcription in [this article](bring-your-own-storage-speech-resource-speech-to-text.md).
ai-services Get Started Stt Diarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-stt-diarization.md
Last updated 7/27/2023
-zone_pivot_groups: programming-languages-set-twenty-two
+zone_pivot_groups: programming-languages-speech-services
keywords: speech to text, speech to text software
[!INCLUDE [C++ include](includes/quickstarts/stt-diarization/cpp.md)]
::: zone-end
+
::: zone pivot="programming-language-java"
[!INCLUDE [Java include](includes/quickstarts/stt-diarization/java.md)]
::: zone-end
+++
::: zone pivot="programming-language-python"
[!INCLUDE [Python include](includes/quickstarts/stt-diarization/python.md)]
::: zone-end
+++
## Next steps

> [!div class="nextstepaction"]
ai-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-openssl-linux.md
Last updated 06/22/2022 zone_pivot_groups: programming-languages-set-three- # Configure OpenSSL for Linux
ai-services How To Track Speech Sdk Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-track-speech-sdk-memory-usage.md
ms.devlang: cpp, csharp, java, objective-c, python zone_pivot_groups: programming-languages-set-two- # How to track Speech SDK memory usage
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md
With the cross-lingual feature, you can transfer your custom neural voice model
# [Pronunciation assessment](#tab/pronunciation-assessment)
-The table in this section summarizes the 20 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 19 additional languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario.
+The table in this section summarizes the 21 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). The latest update extends support from English to 20 additional languages and adds quality enhancements to existing features, including accuracy, fluency, and miscue assessment. Specify the language that you're learning or practicing pronouncing. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario.
[!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)]
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md
The base model may not be sufficient if the audio contains ambient noise or incl
With [real-time speech to text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real-time such as:

- Transcriptions, captions, or subtitles for live meetings
+- [Diarization](get-started-stt-diarization.md)
+- [Pronunciation assessment](how-to-pronunciation-assessment.md)
- Contact center agent assist
- Dictation
- Voice agents
-- Pronunciation assessment

### Batch transcription
ai-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text-short.md
Audio is sent in the body of the HTTP `POST` request. It must be in one of the f
| Format | Codec | Bit rate | Sample rate |
|--|-|-|--|
| WAV | PCM | 256 kbps | 16 kHz, mono |
-| OGG | OPUS | 256 kpbs | 16 kHz, mono |
+| OGG | OPUS | 256 kbps | 16 kHz, mono |
> [!NOTE] > The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
ai-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-private-link.md
Use these parameters instead of the parameters in the article that you chose:
| Resource | **\<your-speech-resource-name>** |
| Target sub-resource | **account** |
-**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections.
+**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#apply-dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections.
### Resolve DNS from the virtual network
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
These limits aren't adjustable.
| Max number of simultaneous dataset uploads | N/A | 5 |
| Max data file size for data import per dataset | N/A | 2 GB |
| Upload of long audios or audios without script | N/A | Yes |
-| Max number of simultaneous model trainings | N/A | 3 |
+| Max number of simultaneous model trainings | N/A | 4 |
| Max number of custom endpoints | N/A | 50 |

#### Audio Content Creation tool
ai-services Deploy User Managed Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/deploy-user-managed-glossary.md
+
+ Title: Deploy a user-managed glossary in Translator container
+
+description: How to deploy a user-managed glossary in the Translator container environment.
++++++ Last updated : 08/15/2023+
+recommendations: false
++
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD046 -->
+
+# Deploy a user-managed glossary
+
+Microsoft Translator containers enable you to run several features of the Translator service in your own environment and are great for specific security and data governance requirements.
+
+There may be times when you're running a container with a multi-layered ingestion process and discover that you need to update sentence and/or phrase files. Because the standard phrase and sentence files are encrypted and read directly into memory at runtime, you need a quick-fix engineering solution to apply a dynamic update. You can make this update by using our user-managed glossary feature:
+
+* To deploy the **phrase&#8203;fix** solution, you need to create a **phrase&#8203;fix** glossary file to specify that a listed phrase is translated in a specified way.
+
+* To deploy the **sent&#8203;fix** solution, you need to create a **sent&#8203;fix** glossary file to specify an exact target translation for a source sentence.
+
+* The **phrase&#8203;fix** and **sent&#8203;fix** files are then included with your translation request and read directly into memory at runtime.
+
+## Managed glossary workflow
+
+ > [!IMPORTANT]
+ > **UTF-16 LE** is the only accepted file format for the managed-glossary folders. For more information about encoding your files, *see* [Encoding](/powershell/module/microsoft.powershell.management/set-content?view=powershell-7.2#-encoding&preserve-view=true)
+
+1. To get started manually creating the folder structure, you need to create and name your folder. The managed-glossary folder is encoded in **UTF-16 LE BOM** format and nests **phrase&#8203;fix** or **sent&#8203;fix** source and target language files. Let's name our folder `customhotfix`. Each folder can have **phrase&#8203;fix** and **sent&#8203;fix** files. You provide the source (`src`) and target (`tgt`) language codes with the following naming convention:
+
+ |Glossary file name format|Example file name |
+ |--|--|
+ |{`src`}.{`tgt`}.{container-glossary}.{phrase&#8203;fix}.src.snt|en.es.container-glossary.phrasefix.src.snt|
+ |{`src`}.{`tgt`}.{container-glossary}.{phrase&#8203;fix}.tgt.snt|en.es.container-glossary.phrasefix.tgt.snt|
+ |{`src`}.{`tgt`}.{container-glossary}.{sent&#8203;fix}.src.snt|en.es.container-glossary.sentfix.src.snt|
+ |{`src`}.{`tgt`}.{container-glossary}.{sent&#8203;fix}.tgt.snt|en.es.container-glossary.sentfix.tgt.snt|
+
+ > [!NOTE]
+ >
+ > * The **phrase&#8203;fix** solution is an exact find-and-replace operation. Any word or phrase listed is translated in the way specified.
+ > * The **sent&#8203;fix** solution is more precise and allows you to specify an exact target translation for a source sentence. For a sentence match to occur, the entire submitted sentence must match the **sent&#8203;fix** entry. If only a portion of the sentence matches, the entry won't match.
+ > * If you're hesitant about making sweeping find-and-replace changes, we recommend, at the outset, solely using the **sent&#8203;fix** solution.
+
+1. Next, to dynamically reload glossary entry updates, create a `version.json` file within the `customhotfix` folder. The `version.json` file should contain the following parameter: **VersionId**, an integer value.
+
+ ***Sample version.json file***
+
+ ```json
+ {
+
+ "VersionId": 5
+
+ }
+
+ ```
+
+ > [!TIP]
+ >
+ > Reload can be controlled by setting the following environmental variables when starting the container:
+ >
+ > * **HotfixReloadInterval=**. Default value is 5 minutes.
+ > * **HotfixReloadEnabled=**. Default value is true.
+
+1. Use the **docker run** command
+
+ **Docker run command required options**
+
+ ```bash
+ docker run --rm -it -p 5000:5000 \
+
+ -e eula=accept \
+
+ -e billing={ENDPOINT_URI} \
+
+ -e apikey={API_KEY} \
+
+ -e Languages={LANGUAGES_LIST} \
+
+ -e HotfixDataFolder={path to glossary folder}
+
+ {image}
+ ```
+
+ **Example docker run command**
+
+ ```bash
+
+ docker run --rm -it -p 5000:5000 \
+ -v /mnt/d/models:/usr/local/models -v /mnt/d/customerhotfix:/usr/local/customhotfix \
+ -e EULA=accept \
+ -e billing={ENDPOINT_URI} \
+ -e apikey={API_Key} \
+ -e Languages=en,es \
+ -e HotfixDataFolder=/usr/local/customhotfix \
+ mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+
+ ```
+
+## Learn more
+
+> [!div class="nextstepaction"]
+> [Create a dynamic dictionary](../dynamic-dictionary.md) [Use a custom dictionary](../custom-translator/concepts/dictionaries.md)
ai-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-how-to-install-container.md
keywords: on-premises, Docker, container, identify
# Install and run Translator containers
-Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Translator container.
+Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article, you learn how to download, install, and run a Translator container.
Translator container enables you to build a translator application architecture that is optimized for both robust cloud capabilities and edge locality.
See the list of [languages supported](../language-support.md) when using Transla
> [!IMPORTANT] >
-> * To use the Translator container, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container) below.
-> * Translator container supports limited features compared to the cloud offerings. Form more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
+> * To use the Translator container, you must submit an online request and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container).
+> * Translator container supports limited features compared to the cloud offerings. For more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
<!-- markdownlint-disable MD033 --> ## Prerequisites
-To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-You'll also need to have:
+You also need:
| Required | Purpose | |--|--|
-| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
+| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
| Docker Engine | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
-| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource with region other than 'global', associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
+| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) regional resource (not `global`) with an associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
|Optional|Purpose| ||-|
curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS
There are several ways to validate that the container is running:
-* The container provides a homepage at `\` as a visual validation that the container is running.
+* The container provides a homepage at `/` as a visual validation that the container is running.
-* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the various request URLs below to validate the container is running. The example request URLs listed below are `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
+* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the following request URLs to validate the container is running. The example request URLs listed point to `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
| Request URL | Purpose | |--|--|
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
description: Learn how to use the Azure CLI to create and Azure Active Directory
Previously updated : 07/07/2023 Last updated : 08/15/2023 # Integrate Azure Active Directory with Azure Kubernetes Service (AKS) using the Azure CLI (legacy) > [!WARNING]
-> The feature described in this document, Azure AD Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from August 1st, 2023.
+> The feature described in this document, Azure AD Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from December 1st, 2023.
> > AKS has a new improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client applications. If you want to migrate follow the instructions [here][managed-aad-migrate].
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 05/17/2023 Last updated : 08/16/2023 # Create and use a volume with Azure Blob storage in Azure Kubernetes Service (AKS)
This section provides guidance for cluster administrators who want to provision
|location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.|
|resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.|
|storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|
+|networkEndpointType| Specify network endpoint type for the storage account created by driver. If privateEndpoint is specified, a [private endpoint][storage-account-private-endpoint] is created for the storage account. For other cases, a service endpoint will be created for NFS protocol.<sup>1</sup> | `privateEndpoint` | No | For an AKS cluster, add the AKS cluster name to the Contributor role in the resource group hosting the VNET.|
|protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`|
|containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. |
|containerNamePrefix | Specify Azure storage directory prefix created by driver. | my |Can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No |
This section provides guidance for cluster administrators who want to provision
| | **Following parameters are only for NFS protocol** | | | |
|mountPermissions | Specify mounted folder permissions. |The default is `0777`. If set to `0`, driver won't perform `chmod` after mount. | `0777` | No |
+<sup>1</sup> If the storage account is created by the driver, then you only need to specify `networkEndpointType: privateEndpoint` parameter in storage class. The CSI driver creates the private endpoint together with the account. If you bring your own storage account, then you need to [create the private endpoint][storage-account-private-endpoint] for the storage account.
+
### Create a persistent volume claim using built-in storage class

A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container. The following YAML can be used to create a persistent volume claim 5 GB in size with *ReadWriteMany* access, using the built-in storage class. For more information on access modes, see the [Kubernetes persistent volume][kubernetes-volumes] documentation.
This section provides guidance for cluster administrators who want to create one
### Create a Blob storage container
-When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. If instead you create the blob storage resource in a separate resource group, you must grant the Azure Kubernetes Service managed identity for your cluster the [Contributor][rbac-contributor-role] role to the blob storage resource group.
+When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource.
For this article, create the container in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**:
The following YAML creates a pod that uses the persistent volume or persistent v
[az-tags]: ../azure-resource-manager/management/tag-resources.md
[sas-tokens]: ../storage/common/storage-sas-overview.md
[azure-datalake-storage-account]: ../storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
+[storage-account-private-endpoint]: ../storage/common/storage-private-endpoints.md
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 05/17/2023 Last updated : 08/16/2023 # Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
The following YAML creates a pod that uses the persistent volume claim *my-azure
metadata:
  name: mypod
spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: my-azurefile
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: /mnt/azure
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: my-azurefile
```

2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
This article requires Azure CLI version 2.0.76 or later. Run `az --version` to f
To adjust to changing application demands, such as between workdays and evenings or weekends, clusters often need a way to automatically scale. AKS clusters can scale in one of two ways:
-* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes.
+* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. For more information, see [How does scale-up work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work)
* The **horizontal pod autoscaler** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand.

![The cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands](media/autoscaler/cluster-autoscaler.png)
To further help improve cluster resource utilization and free up CPU and memory
[aks-faq-node-resource-group]: faq.md#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group
[aks-multiple-node-pools]: create-node-pools.md
[aks-scale-apps]: tutorial-kubernetes-scale.md
-[aks-view-master-logs]: ../azure-monitor/containers/monitor-kubernetes.md#configure-monitoring
+[aks-view-master-logs]: monitor-aks.md#resource-logs
[azure-cli-install]: /cli/azure/install-azure-cli
[az-aks-create]: /cli/azure/aks#az-aks-create
[az-aks-update]: /cli/azure/aks#az-aks-update
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
For more information to help you decide which network model to use, see [Compare
--service-cidr 10.0.0.0/16 \ --dns-service-ip 10.0.0.10 \ --pod-cidr 10.244.0.0/16 \
- --docker-bridge-address 172.17.0.1/16 \
--vnet-subnet-id $SUBNET_ID ```
For more information to help you decide which network model to use, see [Compare
* This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed.
* The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*.
* As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
- * *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster and shouldn't overlap with other address ranges in use on your network. The default value is 172.17.0.1/16.
> [!NOTE] > If you want to enable an AKS cluster to include a [Calico network policy][calico-network-policies], you can use the following command:
For more information to help you decide which network model to use, see [Compare
> --resource-group myResourceGroup \ > --name myAKSCluster \ > --node-count 3 \
-> --network-plugin kubenet --network-policy calico \
+> --network-plugin kubenet \
+> --network-policy calico \
> --vnet-subnet-id $SUBNET_ID > ```
aks Create Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md
The Azure Linux container host for AKS is an open-source Linux distribution avai
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
- --name azurelinuxpool \
+ --name azlinuxpool \
--os-sku AzureLinux ```
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
description: Learn how to deploy Kubernetes applications from Azure Marketplace
Previously updated : 05/01/2023 Last updated : 08/18/2023
Included among these solutions are Kubernetes application-based container offers
This feature is currently supported only in the following regions: -- East US, EastUS2EUAP, West US, Central US, West Central US, South Central US, East US2, West US2, West Europe, North Europe, Canada Central, South East Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US3, Norway East, South African North, North Central US, Australia South East, Switzerland North, Japan West, South India
+- East US, EastUS2EUAP, West US, Central US, West Central US, South Central US, East US2, West US2, West Europe, North Europe, Canada Central, South East Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US3, Norway East, South African North, North Central US, Australia South East, Switzerland North, Japan West, South India
Kubernetes application-based container offers can't be deployed on AKS for Azure Stack HCI or AKS Edge Essentials.
-## Register resource providers
-
-Before you deploy a container offer, you must register the `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` providers on your subscription by using the `az provider register` command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService --wait
-az provider register --namespace Microsoft.KubernetesConfiguration --wait
-```
- ## Select and deploy a Kubernetes application
-### From the AKS portal screen
+### From an AKS cluster
1. In the [Azure portal](https://portal.azure.com/), you can deploy a Kubernetes application from an existing cluster by navigating to **Marketplace** or selecting **Extensions + applications**, then selecting **+ Add**.
az provider register --namespace Microsoft.KubernetesConfiguration --wait
1. After you decide on an application, select the offer.
-1. On the **Plans + Pricing** tab, select an option. Ensure that the terms are acceptable, and then select **Create**.
+1. On the **Plans + Pricing** tab, select an option. Ensure that the terms are acceptable, and then select **Create**.
:::image type="content" source="./media/deploy-marketplace/plan-pricing.png" alt-text="Screenshot of the offer purchasing page in the Azure portal, showing plan and pricing information.":::
-1. Follow each page in the wizard, all the way through Review + Create. Fill in information for your resource group, your cluster, and any configuration options that the application requires.
+1. Follow each page in the wizard, all the way through **Review + Create**. Fill in information for your resource group, your cluster, and any configuration options that the application requires.
:::image type="content" source="./media/deploy-marketplace/review-create.png" alt-text="Screenshot of the Azure portal wizard for deploying a new offer, with the selector for creating a cluster or using an existing one.":::
az provider register --namespace Microsoft.KubernetesConfiguration --wait
:::image type="content" source="./media/deploy-marketplace/deploying.png" alt-text="Screenshot of the Azure portal deployments screen, showing that the Kubernetes offer is currently being deployed.":::
-### From the Marketplace portal screen
+### Search in the Azure portal
1. In the [Azure portal](https://portal.azure.com/), search for **Marketplace** on the top search bar. In the results, under **Services**, select **Marketplace**.
You can view the extension instance from the cluster by using the following comm
az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ``` ------- ## Monitor billing and usage information
To monitor billing and usage information for the offer that you deployed:
You can delete a purchased plan for an Azure container offer by deleting the extension instance on the cluster. -- ### [Portal](#tab/azure-portal) Select an application, then select the uninstall button to remove the extension from your cluster:
Select an application, then select the uninstall button to remove the extension
az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ``` - ## Troubleshooting
If you experience issues, see the [troubleshooting checklist for failed deployme
## Next steps - Learn more about [exploring and analyzing costs][billing].
+- Learn more about [deploying a Kubernetes application programmatically using Azure CLI](/azure/aks/deploy-application-az-cli)
<!-- LINKS --> [azure-marketplace]: /marketplace/azure-marketplace-overview- [cluster-extensions]: ./cluster-extensions.md- [billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md-
-[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer
------- Learn more about [deploying a Kubernetes application programmatically using Azure CLI](/azure/aks/deploy-application-az-cli)----
+[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS.
[azure-monitor-overview]: ../azure-monitor/overview.md [container-insights]: ../azure-monitor/containers/container-insights-overview.md [azure-monitor-managed-prometheus]: ../azure-monitor/essentials/prometheus-metrics-overview.md
-[collect-control-plane-logs]: monitor-aks.md#collect-control-plane-logs
+[collect-resource-logs]: monitor-aks.md#resource-logs
[azure-monitor-logs]: ../azure-monitor/logs/data-platform-logs.md [helm]: quickstart-helm.md [aks-best-practices]: best-practices.md
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
spec:
This example updates the rule to allow inbound external traffic only from the `MY_EXTERNAL_IP_RANGE` range. If you replace `MY_EXTERNAL_IP_RANGE` with the internal subnet IP address, traffic is restricted to only cluster internal IPs. If traffic is restricted to cluster internal IPs, clients outside your Kubernetes cluster are unable to access the load balancer. > [!NOTE]
-> Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer.
+> * Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer.
+> * For clusters running Kubernetes v1.25 or later, add the pod CIDR to `loadBalancerSourceRanges` if any pods need to access the service's load balancer IP.
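As a minimal sketch of that note (the service name and both CIDR ranges are hypothetical), the pod CIDR can be included alongside the allowed external range with a patch:
```
# Allow an external range plus the cluster's pod CIDR (both placeholders)
# on a LoadBalancer service named my-service.
kubectl patch service my-service \
  -p '{"spec":{"loadBalancerSourceRanges":["203.0.113.0/24","10.244.0.0/16"]}}'
```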
## Maintain the client's IP on inbound connections
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
The following table lists [dimensions](../azure-monitor/essentials/data-platform
## Resource logs
-AKS implements control plane logs for the cluster as [resource logs in Azure Monitor](../azure-monitor/essentials/resource-logs.md). See [Resource logs](monitor-aks.md#resource-logs) for details on creating a diagnostic setting to collect these logs and [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md#resource-logs) for query examples.
+AKS implements control plane logs for the cluster as [resource logs in Azure Monitor](../azure-monitor/essentials/resource-logs.md). See [Resource logs](monitor-aks.md#resource-logs) for details on creating a diagnostic setting to collect these logs and [Sample queries](monitor-aks-reference.md#resource-logs) for query examples.
The following table lists the resource log categories you can collect for AKS. All logs are written to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
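A diagnostic setting is what routes these categories to a destination. As a hedged sketch (resource and workspace names are placeholders, and `kube-audit-admin` stands in for whichever category you want to collect), it can be created from the CLI:
```azurecli-interactive
# Send kube-audit-admin control plane logs to a Log Analytics workspace.
az monitor diagnostic-settings create \
    --name aks-control-plane-logs \
    --resource $(az aks show --resource-group myResourceGroup --name myAKSCluster --query id --output tsv) \
    --workspace myLogAnalyticsWorkspace \
    --logs '[{"category": "kube-audit-admin", "enabled": true}]'
```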
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
Title: Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster (preview)
+ Title: Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster
+ description: Learn how to use Planned Maintenance to schedule and control cluster and node image upgrades in Azure Kubernetes Service (AKS).
-# Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster (preview)
+# Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster
+
+Your AKS cluster has regular maintenance performed on it automatically. There are two types of regular maintenance: maintenance that AKS initiates and maintenance that you initiate. The Planned Maintenance feature lets you run both types on a cadence of your choice, minimizing workload impact.
-Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows to perform updates and minimize workload impact. Once scheduled, upgrades occur only during the window you selected.
+AKS-initiated maintenance refers to AKS releases: weekly rounds of fixes and feature and component updates that affect your clusters. The types of maintenance that you initiate regularly are [cluster auto-upgrades][aks-upgrade] and [Node OS automatic security updates][node-image-auto-upgrade].
There are currently three available configuration types: `default`, `aksManagedAutoUpgradeSchedule`, `aksManagedNodeOSUpgradeSchedule`: -- `default` corresponds to a basic configuration that is mostly suitable for basic scheduling of [weekly releases][release-tracker].
+- `default` corresponds to a basic configuration that is used to control AKS releases. These releases can take up to two weeks to roll out to all regions from the initial time of shipping because of Azure Safe Deployment Practices (SDP). Choose `default` to schedule these updates in a way that's least disruptive for you. You can monitor the status of an ongoing AKS release by region from the [weekly releases tracker][release-tracker].
- `aksManagedAutoUpgradeSchedule` controls when cluster upgrades scheduled by your designated auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on cluster auto-upgrade, see [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade]. -- `aksManagedNodeOSUpgradeSchedule` controls when node operating system upgrades scheduled by your node OS auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default configuration. For more information on node OS auto-upgrade, see [Automatically patch and update AKS cluster node images][node-image-auto-upgrade]
+- `aksManagedNodeOSUpgradeSchedule` controls when the node operating system security patching scheduled by your node OS auto-upgrade channel is performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on the node OS auto-upgrade channel, see [Automatically patch and update AKS cluster node images][node-image-auto-upgrade].
-We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node image upgrade scenarios, while `default` is meant exclusively for weekly releases. You can port `default` configurations to `aksManagedAutoUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command.
-
-To configure Planned Maintenance using pre-created configurations, see [Use Planned Maintenance pre-created configurations to schedule AKS weekly releases][pm-weekly].
+We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node OS security patching scenarios, while `default` is meant exclusively for the AKS weekly releases. You can port `default` configurations to the `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command.
## Before you begin
This article assumes that you have an existing AKS cluster. If you need an AKS c
Be sure to upgrade Azure CLI to the latest version using [`az upgrade`](/cli/azure/update-azure-cli#manual-update). -
-### Limitations
-
-When you use Planned Maintenance, the following restrictions apply:
--- AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical.-- Currently, performing maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window.-- Updates can't be blocked for more than seven days.-
-### Install aks-preview CLI extension
-
-You also need the *aks-preview* Azure CLI extension version 0.5.124 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
- ## Creating a maintenance window
-To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command using the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name will cause your maintenance window not to run.
+To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command with a `--name` value of `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name causes your maintenance window not to run.
> [!NOTE] > When using auto-upgrade, to ensure proper functionality, use a maintenance window with a duration of four hours or more.
A `RelativeMonthly` schedule may look like *"every two months, on the last Monda
Valid values for `weekIndex` are `First`, `Second`, `Third`, `Fourth`, and `Last`.
+### Things to note
+
+When you use Planned Maintenance, the following restrictions apply:
+
+- AKS reserves the right to break these windows for unplanned, reactive maintenance operations that are urgent or critical. These maintenance operations may even run during the `notAllowedTime` or `notAllowedDates` periods defined in your configuration.
+- Maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window.
+ ## Add a maintenance window configuration with Azure CLI The following example shows a command to add a new `default` configuration that schedules maintenance to run from 1:00am to 2:00am every Monday:
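The command block itself isn't captured in this change summary; a sketch consistent with that description (resource group and cluster names are placeholders) looks like:
```azurecli-interactive
# `default` windows take a weekday and a start hour; this one runs
# Mondays from 1:00am to 2:00am.
az aks maintenanceconfiguration add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name default \
    --weekday Monday \
    --start-hour 1
```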
To delete a certain maintenance configuration window in your AKS Cluster, use th
```azurecli-interactive az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCluster --name autoUpgradeSchedule ```
+## Frequently Asked Questions
+
+* How can I check the existing maintenance configurations in my cluster?
+
+ Use the `az aks maintenanceconfiguration show` command.
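  For example (placeholder names, assuming a configuration named `aksManagedAutoUpgradeSchedule` exists on the cluster):
```azurecli-interactive
# Show one maintenance configuration by name.
az aks maintenanceconfiguration show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name aksManagedAutoUpgradeSchedule
```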
+
+* Can reactive, unplanned maintenance happen during the `notAllowedTime` or `notAllowedDates` periods too?
+
+ Yes, AKS reserves the right to break these windows for unplanned, reactive maintenance operations that are urgent or critical.
+
+* How can you tell if a maintenance event occurred?
+
+ For releases, check your cluster's region, look up release information in [weekly releases][release-tracker], and validate whether it matches your maintenance schedule. To view the status of your auto upgrades, look up [activity logs][monitor-aks] on your cluster. You can also look up specific upgrade-related events as mentioned in [Upgrade an AKS cluster][aks-upgrade]. AKS also emits upgrade-related Event Grid events. To learn more, see [AKS as an Event Grid source][aks-eventgrid].
+
+* Can you use more than one maintenance configuration at the same time?
+
+ Yes, you can run all three configurations, that is, `default`, `aksManagedAutoUpgradeSchedule`, and `aksManagedNodeOSUpgradeSchedule`, simultaneously. If the windows overlap, AKS decides the running order.
+
+* Are there any best practices for the maintenance configurations?
+
+ We recommend setting the [Node OS security updates][node-image-auto-upgrade] schedule to a weekly cadence if you're using the `NodeImage` channel, since a new node image ships every week, and to a daily cadence if you opt in to the `SecurityPatch` channel to receive daily security updates. Set the [auto-upgrade][auto-upgrade] schedule to a monthly cadence to stay within the Kubernetes N-2 [support policy][aks-support-policy].
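  As a sketch of that weekly recommendation (names are placeholders, and the exact flag set should be checked against your CLI version):
```azurecli-interactive
# Weekly node OS security update window: Saturdays, starting 01:00, 8 hours long.
az aks maintenanceconfiguration add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name aksManagedNodeOSUpgradeSchedule \
    --schedule-type Weekly \
    --day-of-week Saturday \
    --interval-weeks 1 \
    --start-time 01:00 \
    --duration 8
```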
## Next steps
az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl
[auto-upgrade]: auto-upgrade-cluster.md [node-image-auto-upgrade]: auto-upgrade-node-image.md [pm-weekly]: ./aks-planned-maintenance-weekly-releases.md
+[monitor-aks]: monitor-aks-reference.md
+[aks-eventgrid]:quickstart-event-grid.md
+[aks-support-policy]:support-policies.md
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
If you need more troubleshooting data, you can [view the Kubernetes primary node
[install-azure-cli]: /cli/azure/install-azure-cli [install-azure-powershell]: /powershell/azure/install-az-ps [ssh-steps]: ssh.md
-[view-primary-logs]: ../azure-monitor/containers/container-insights-log-query.md#resource-logs
+[view-primary-logs]: monitor-aks.md#resource-logs
[azure-bastion]: ../bastion/bastion-overview.md
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser
description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 04/28/2023 Last updated : 08/15/2023 # Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)
Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitiv
> Kubernetes native capabilities to federate with any external identity providers on behalf of the > application. >
-> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2023.
+> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2024.
> > To disable the AKS Managed add-on, use the following command: `az feature unregister --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"`.
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
description: Learn how to control pod admissions using PodSecurityPolicy in Azur
Last updated 08/01/2023+ # Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) (preview)
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identities on Azure Kubernetes Service (AKS)
description: Learn about Azure Active Directory workload identity for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 05/23/2023 Last updated : 08/18/2023 # Use Azure AD workload identity with Azure Kubernetes Service (AKS)
This article helps you understand this new authentication feature, and reviews t
In the Azure Identity client libraries, choose one of the following approaches: -- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`.
+- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`. &dagger;
- Create a `ChainedTokenCredential` instance that includes `WorkloadIdentityCredential`. - Use `WorkloadIdentityCredential` directly. The following table provides the **minimum** package version required for each language's client library.
-| Language | Library | Minimum Version | Example |
-||-|--||
-| .NET | [Azure.Identity](/dotnet/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) |
-| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) |
-| Java | [azure-identity](/java/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) |
-| JavaScript | [@azure/identity](/javascript/api/overview/azure/identity-readme) | 3.2.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) |
-| Python | [azure-identity](/python/api/overview/azure/identity-readme) | 1.13.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) |
+| Language | Library | Minimum Version | Example |
+|||--|--|
+| .NET | [Azure.Identity](/dotnet/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) |
+| C++ | [azure-identity-cpp](https://github.com/Azure/azure-sdk-for-cpp/blob/main/sdk/identity/azure-identity/README.md) | 1.6.0-beta.1 | [Link](https://github.com/Azure/azure-sdk-for-cpp/blob/main/sdk/identity/azure-identity/samples/workload_identity_credential.cpp) |
+| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) |
+| Java | [azure-identity](/java/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) |
+| JavaScript | [@azure/identity](/javascript/api/overview/azure/identity-readme) | 3.2.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) |
+| Python | [azure-identity](/python/api/overview/azure/identity-readme) | 1.13.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) |
+
+&dagger; In the C++ library, `WorkloadIdentityCredential` isn't part of the `DefaultAzureCredential` authentication flow.
## Microsoft Authentication Library (MSAL)
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
For more information about the information assets and capabilities in API Center
## Preview limitations * In preview, API Center is available in the following Azure regions:-
- * East US
- * UK South
- * Central India
- * Australia East
-
+ * Australia East
+ * Central India
+ * East US
+ * UK South
+ * West Europe
+
## Frequently asked questions ### Q: Is API Center part of Azure API Management?
A: Yes, all data in API Center is encrypted at rest.
> [!div class="nextstepaction"] > [Provide feedback](https://aka.ms/apicenter/preview/feedback)+
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md
To restore routing to the regional gateway, set the value of `disableGateway` to
This section provides considerations for multi-region deployments when the API Management instance is injected in a virtual network.
-* Configure each regional network independently. The [connectivity requirements](virtual-network-reference.md) such as required network security group rules for a virtual network in an added region are the same as those for a network in the primary region.
+* Configure each regional network independently. The [connectivity requirements](virtual-network-reference.md) such as required network security group rules for a virtual network in an added region are generally the same as those for a network in the primary region.
* Virtual networks in the different regions don't need to be peered.
+> [!IMPORTANT]
+> When configured in internal VNet mode, each regional gateway must also have outbound connectivity on port 1443 to the Azure SQL database configured for your API Management instance, which is only in the *primary* region. Ensure that you allow connectivity to the FQDN or IP address of this Azure SQL database in any routes or firewall rules you configure for networks in your secondary regions; the Azure SQL service tag can't be used in this scenario. To find the Azure SQL database name in the primary region, go to the **Network** > **Network status** page of your API Management instance in the portal.
### IP addresses
This section provides considerations for multi-region deployments when the API M
[create an api management service instance]: get-started-create-service-instance.md+ [get started with azure api management]: get-started-create-service-instance.md+ [deploy an api management service instance to a new region]: #add-region+ [delete an api management service instance from a region]: #remove-region+ [unit]: https://azure.microsoft.com/pricing/details/api-management/+ [premium]: https://azure.microsoft.com/pricing/details/api-management/++
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
Use the `cache-lookup` policy to perform cache lookup and return a valid cached
### Usage notes
+- API Management only performs cache lookup for HTTP GET requests.
* When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. By deactivating this flag, parameters that aren't declared are sent to the backend. - This policy can only be used once in a policy section.
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
The `cache-store` policy caches responses according to the specified cache setti
### Usage notes
+- API Management only caches responses to HTTP GET requests.
- This policy can only be used once in a policy section.
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
The managed developer portal includes a **Custom HTML code** widget where you ca
## Create and upload custom widget
-### Prerequisites
-
+For more advanced widget use cases, API Management provides a scaffold and tools to help developers create a widget and upload it to the developer portal.
+
+### Prerequisites
+ * Install [Node.JS runtime](https://nodejs.org/en/) locally * Basic knowledge of programming and web development ### Create widget
+> [!WARNING]
+> Your custom widget code is stored in public Azure blob storage that's associated with your API Management instance. When you add a custom widget to the developer portal, code is read from this storage via an endpoint that doesn't require authentication, even if the developer portal or a page with the custom widget is only accessible to authenticated users. Don't include sensitive information or secrets in the custom widget code.
+>
+ 1. In the administrative interface for the developer portal, select **Custom widgets** > **Create new custom widget**. 1. Enter a widget name and choose a **Technology**. For more information, see [Widget templates](#widget-templates), later in this article. 1. Select **Create widget**.
The managed developer portal includes a **Custom HTML code** widget where you ca
If prompted, sign in to your Azure account. - The custom widget is now deployed to your developer portal. Using the portal's administrative interface, you can add it on pages in the developer portal and set values for any custom properties configured in the widget. ### Publish the developer portal
The React template contains prepared custom hooks in the `hooks.ts` file and est
This [npm package](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools) contains the following functions to help you develop your custom widget and provides features including communication between the developer portal and your widget: + |Function |Description | ||| |[getValues](#azureapi-management-custom-widgets-toolsgetvalues) | Returns a JSON object containing values set in the widget editor combined with default values |
This [npm package](https://www.npmjs.com/package/@azure/api-management-custom-wi
|[getWidgetData](#azureapi-management-custom-widgets-toolsgetwidgetdata) | Returns all data passed to your custom widget from the developer portal<br/><br/>Used internally in templates | + #### `@azure/api-management-custom-widgets-tools/getValues` Function that returns a JSON object containing the values you've set in the widget editor combined with default values, passed as an argument.
This function returns a JavaScript promise, which after resolution returns a JSO
> Manage and use the token carefully. Anyone who has it can access data in your API Management service. + #### `@azure/api-management-custom-widgets-tools/deployNodeJs` This function deploys your widget to your blob storage. In all templates, it's preconfigured in the `deploy.js` file.
To implement your widget using another JavaScript UI framework and libraries, yo
* For local development, the `config.msapim.json` file must be accessible at the URL `localhost:<port>/config.msapim.json` when the server is running. - ## Next steps Learn more about the developer portal:
Learn more about the developer portal:
- [Frequently asked questions](developer-portal-faq.md) - [Scaffolder of a custom widget for developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-scaffolder) - [Tools for working with custom widgets of developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools)+
api-management Send One Way Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-one-way-request-policy.md
Previously updated : 12/08/2022 Last updated : 08/02/2023
The `send-one-way-request` policy sends the provided request to the specified UR
| Attribute | Description | Required | Default | | - | -- | -- | -- |
-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
+| mode | Determines whether this is a `new` request or a `copy` of the headers and body in the current request. In the outbound policy section, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
| timeout| The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 |
The `send-one-way-request` policy sends the provided request to the specified UR
| [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No | | [set-body](set-body-policy.md) | Sets the body of the request. | No | | authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |
+| [proxy](proxy-policy.md) | Routes request via HTTP proxy. | No |
## Usage
api-management Send Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md
Previously updated : 12/08/2022 Last updated : 08/02/2023
The `send-request` policy sends the provided request to the specified URL, waiti
| Attribute | Description | Required | Default | | - | -- | -- | -- |
-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
+| mode | Determines whether this is a `new` request or a `copy` of the headers and body in the current request. In the outbound policy section, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
| response-variable-name | The name of context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. Policy expressions are allowed. | Yes | N/A | | timeout | The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 | | ignore-error | If `true` and the request results in an error, the error will be ignored, and the response variable will contain a null value. Policy expressions aren't allowed. | No | `false` |
The `send-request` policy sends the provided request to the specified URL, waiti
| [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No | | [set-body](set-body-policy.md) | Sets the body of the request. | No | | authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |
-| proxy | A [proxy](proxy-policy.md) policy statement. Used to route request via HTTP proxy | No |
+| [proxy](proxy-policy.md) | Routes request via HTTP proxy. | No |
## Usage
app-service App Service Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-best-practices.md
Title: Best Practices description: Learn best practices and the common troubleshooting scenarios for your app running in Azure App Service.- ms.assetid: f3359464-fa44-4f4a-9ea6-7821060e8d0d Last updated 07/01/2016-++ # Best Practices for Azure App Service
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md
Last updated 07/31/2023 -+
app-service App Service Web App Cloning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-app-cloning.md
ms.assetid: f9a5cfa1-fbb0-41e6-95d1-75d457347a35
Last updated 01/14/2016 -++ # Azure App Service App Cloning Using PowerShell
app-service App Service Web Configure Tls Mutual Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-configure-tls-mutual-auth.md
Title: Configure TLS mutual authentication description: Learn how to authenticated client certificates on TLS. Azure App Service can make the client certificate available to the app code for verification. ++ ms.assetid: cd1d15d3-2d9e-4502-9f11-a306dac4453a Last updated 12/11/2020
app-service App Service Web Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md
ms.assetid: dc446e0e-0958-48ea-8d99-441d2b947a7c
Last updated 01/31/2023 + # Map an existing custom DNS name to Azure App Service
app-service App Service Web Tutorial Dotnet Sqldatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-dotnet-sqldatabase.md
ms.assetid: 03c584f1-a93c-4e3d-ac1b-c82b50c75d3e
ms.devlang: csharp Last updated 01/27/2022-+ # Tutorial: Deploy an ASP.NET app to Azure with Azure SQL Database
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
ms.devlang: csharp
Last updated 01/31/2023 + # Tutorial: Host a RESTful API with CORS in Azure App Service
app-service Configure Authentication Api Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-api-version.md
Title: Manage AuthN/AuthZ API versions
description: Upgrade your App Service authentication API to V2 or pin it to a specific version, if needed. Last updated 02/17/2023-+ # Manage the API and runtime versions of App Service authentication
app-service Configure Authentication Customize Sign In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-customize-sign-in-out.md
Title: Customize sign-ins and sign-outs
description: Use the built-in authentication and authorization in App Service and at the same time customize the sign-in and sign-out behavior. Last updated 03/29/2021+ # Customize sign-in and sign-out in Azure App Service authentication
app-service Configure Authentication File Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-file-based.md
Title: File-based configuration of AuthN/AuthZ
description: Configure authentication and authorization in App Service using a configuration file to enable certain preview capabilities. Last updated 07/15/2021+ # File-based configuration in Azure App Service authentication
app-service Configure Authentication Oauth Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-oauth-tokens.md
Title: OAuth tokens in AuthN/AuthZ
description: Learn how to retrieve tokens and refresh tokens and extend sessions when using the built-in authentication and authorization in App Service. Last updated 03/29/2021+ # Work with OAuth tokens in Azure App Service authentication
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
description: Learn how to configure Azure Active Directory authentication as an
ms.assetid: 6ec6a46c-bce4-47aa-b8a3-e133baef22eb Last updated 01/31/2023-+ # Configure your App Service or Azure Functions app to use Azure AD login
app-service Configure Authentication Provider Apple https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-apple.md
description: Learn how to configure Sign in with Apple as an identity provider f
Last updated 11/19/2020 + # Configure your App Service or Azure Functions app to sign in using a Sign in with Apple provider (Preview)
app-service Configure Authentication Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-facebook.md
description: Learn how to configure Facebook authentication as an identity provi
ms.assetid: b6b4f062-fcb4-47b3-b75a-ec4cb51a62fd Last updated 03/29/2021-+ # Configure your App Service or Azure Functions app to use Facebook login
app-service Configure Authentication Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-github.md
Title: Configure GitHub authentication
description: Learn how to configure GitHub authentication as an identity provider for your App Service or Azure Functions app. Last updated 03/01/2022+ # Configure your App Service or Azure Functions app to use GitHub login
app-service Configure Authentication Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-google.md
description: Learn how to configure Google authentication as an identity provide
ms.assetid: 2b2f9abf-9120-4aac-ac5b-4a268d9b6e2b Last updated 03/29/2021-+
app-service Configure Authentication Provider Microsoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-microsoft.md
description: Learn how to configure Microsoft Account authentication as an ident
ms.assetid: ffbc6064-edf6-474d-971c-695598fd08bf Last updated 03/29/2021-+
app-service Configure Authentication Provider Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-openid-connect.md
description: Learn how to configure an OpenID Connect provider as an identity pr
Last updated 10/20/2021 + # Configure your App Service or Azure Functions app to login using an OpenID Connect provider
app-service Configure Authentication Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-twitter.md
description: Learn how to configure Twitter authentication as an identity provid
ms.assetid: c6dc91d7-30f6-448c-9f2d-8e91104cde73 Last updated 03/29/2021-+ # Configure your App Service or Azure Functions app to use Twitter login
app-service Configure Authentication User Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-user-identities.md
Title: User identities in AuthN/AuthZ
description: Learn how to access user identities when using the built-in authentication and authorization in App Service. Last updated 03/29/2021+ # Work with user identities in Azure App Service authentication
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
keywords: azure app service, web app, app settings, environment variables
ms.assetid: 9af8a367-7d39-4399-9941-b80cbc5f39a0 Last updated 04/21/2023-+ ms.devlang: azurecli # Configure an App Service app
This article explains how to configure common settings for web apps, mobile back
## Configure app settings
+> [!NOTE]
+> - App settings names can only contain letters, numbers (0-9), periods ("."), and underscores ("_")
+> - Special characters in the value of an App Setting must be escaped as needed by the target OS
+>
+> For example, to set an environment variable in App Service Linux with the value `"pa$$w0rd\"`, the string for the app setting should be `"pa\$\$w0rd\\"`.
+ In App Service, app settings are variables passed as environment variables to the application code. For Linux apps and custom containers, App Service passes app settings to the container using the `--env` flag to set the environment variable in the container. In either case, they're injected into your app environment at app startup. When you add, remove, or edit app settings, App Service triggers an app restart. For ASP.NET and ASP.NET Core developers, setting app settings in App Service are like setting them in `<appSettings>` in *Web.config* or *appsettings.json*, but the values in App Service override the ones in *Web.config* or *appsettings.json*. You can keep development settings (for example, local MySQL password) in *Web.config* or *appsettings.json* and production secrets (for example, Azure MySQL database password) safely in App Service. The same code uses your development settings when you debug locally, and it uses your production secrets when deployed to Azure.
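As a minimal sketch of both notes above (app and resource group names are placeholders), an app setting with special characters can be set from the CLI; single quotes keep the shell from expanding the escapes before App Service sees them:
```azurecli-interactive
# Sets DB_PASSWORD so the app receives the value pa$$w0rd\ at runtime.
az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name myWebApp \
    --settings 'DB_PASSWORD=pa\$\$w0rd\\'
```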
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
Title: Configure a custom container description: Learn how to configure a custom container in Azure App Service. This article shows the most common configuration tasks. -++ Last updated 01/04/2023
app-service Configure Domain Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-domain-traffic-manager.md
ms.assetid: 0f96c0e7-0901-489b-a95a-e3b66ca0a1c2
Last updated 03/05/2020 ++ # Configure a custom domain name in Azure App Service with Traffic Manager integration
After the records for your domain name have propagated, use the browser to verif
## Next steps > [!div class="nextstepaction"]
-> [Secure a custom DNS name with an TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
+> [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-nodejs.md
ms.devlang: javascript, devx-track-azurecli Last updated 01/21/2022++ zone_pivot_groups: app-service-platform-windows-linux
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
description: Learn how to configure a PHP app in a pre-built PHP container, in A
ms.devlang: php Previously updated : 05/09/2023 Last updated : 08/31/2023 zone_pivot_groups: app-service-platform-windows-linux+
For more information on how App Service runs and builds PHP apps in Linux, see [
## Customize start-up
-By default, the built-in PHP container runs the Apache server. At start-up, it runs `apache2ctl -D FOREGROUND"`. If you like, you can run a different command at start-up, by running the following command in the [Cloud Shell](https://shell.azure.com):
+If you want, you can run a custom command at container start-up by running the following command in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<custom-command>"
By default, Azure App Service points the root virtual application path (*/*) to
The web framework of your choice may use a subdirectory as the site root. For example, [Laravel](https://laravel.com/), uses the `public/` subdirectory as the site root.
-The default PHP image for App Service uses Apache, and it doesn't let you customize the site root for your app. To work around this limitation, add an *.htaccess* file to your repository root with the following content:
+The default PHP image for App Service uses Nginx, and you can change the site root by [configuring the Nginx server with the `root` directive](https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/). This [example configuration file](https://github.com/Azure-Samples/laravel-tasks/blob/main/default) contains the following snippet, which changes the `root` directive:
```
-<IfModule mod_rewrite.c>
- RewriteEngine on
- RewriteCond %{REQUEST_URI} ^(.*)
- RewriteRule ^(.*)$ /public/$1 [NC,L,QSA]
-</IfModule>
+server {
+ #proxy_cache cache;
+ #proxy_cache_valid 200 1s;
+ listen 8080;
+ listen [::]:8080;
+ root /home/site/wwwroot/public; # Changed for Laravel
+
+ location / {
        index index.php index.html index.htm hostingstart.html;
+ try_files $uri $uri/ /index.php?$args; # Changed for Laravel
+ }
+ ...
+```
+
+The default container uses the configuration file found at */etc/nginx/sites-available/default*. Keep in mind that any edit you make to this file is erased when the app restarts. To make a change that is effective across app restarts, [add a custom start-up command](#customize-start-up) like this example:
+
+```
+cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload
```
-If you would rather not use *.htaccess* rewrite, you can deploy your Laravel application with a [custom Docker image](quickstart-custom-container.md) instead.
+This command replaces the default Nginx configuration file with a file named *default* in your repository root and reloads Nginx.
::: zone-end
Then, go to the Azure portal and add an Application Setting to scan the "ini" di
::: zone pivot="platform-windows"
-To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), you can't use the *.htaccess* approach. App Service provides a separate mechanism using the `PHP_INI_SCAN_DIR` app setting.
+To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), use the `PHP_INI_SCAN_DIR` app setting.
First, run the following command in the [Cloud Shell](https://shell.azure.com) to add an app setting called `PHP_INI_SCAN_DIR`:
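The command isn't included in this change summary; a sketch under the usual convention for Windows apps (the *d:\home\site\ini* directory is an assumption, adjust to wherever your *.ini* files live) would be:
```azurecli-interactive
# Point PHP at an extra directory of .ini files on the persistent share.
az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name myWebApp \
    --settings PHP_INI_SCAN_DIR="d:\home\site\ini"
```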
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
Title: Configure Linux Python apps
description: Learn how to configure the Python container in which web apps are run, using both the Azure portal and the Azure CLI. Last updated 11/16/2022-++ ms.devlang: python adobe-target: true
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md
Last updated 04/20/2023 + # Secure a custom DNS name with a TLS/SSL binding in Azure App Service
In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>:
1. In **TLS/SSL type**, choose between **SNI SSL** and **IP based SSL**. - **[SNI SSL](https://en.wikipedia.org/wiki/Server_Name_Indication)**: Multiple SNI SSL bindings may be added. This option allows multiple TLS/SSL certificates to secure multiple domains on the same IP address. Most modern browsers (including Internet Explorer, Chrome, Firefox, and Opera) support SNI (for more information, see [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication)).
-
+
1. When adding a new certificate, validate the new certificate by selecting **Validate**.
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
Last updated 07/28/2023 + # Add and manage TLS/SSL certificates in Azure App Service
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
For more information, see [Kudu documentation](https://github.com/projectkudu/ku
You can deploy your [WAR](https://wikipedia.org/wiki/WAR_(file_format)), [JAR](https://wikipedia.org/wiki/JAR_(file_format)), or [EAR](https://wikipedia.org/wiki/EAR_(file_format)) package to App Service to run your Java web app using the Azure CLI, PowerShell, or the Kudu publish API.
-The deployment process places the package on the shared file drive correctly (see [Kudu publish API reference](#kudu-publish-api-reference)). For that reason, deploying WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy is not recommended.
+The deployment process used by the steps here places the package on the app's content share with the correct naming convention and directory structure (see [Kudu publish API reference](#kudu-publish-api-reference)), and it's the recommended approach. If you deploy WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy instead, you may see unexpected failures due to mistakes in the naming or structure.
# [Azure CLI](#tab/cli)
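As a hedged sketch of the CLI path (app name, resource group, and package path are placeholders), `az webapp deploy` takes the package type so the service can place the artifact correctly:
```azurecli-interactive
# Deploy a WAR package; --type drives the naming and directory layout
# on the app's content share.
az webapp deploy \
    --resource-group myResourceGroup \
    --name myWebApp \
    --src-path ./target/app.war \
    --type war
```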
app-service Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/identity-scenarios.md
Last updated 08/10/2023+ # Authentication scenarios and recommendations
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
description: Learn how to restore backups of your apps in Azure App Service or c
ms.assetid: 6223b6bd-84ec-48df-943f-461d84605694 Last updated 04/25/2023++ # Back up and restore your app in Azure App Service
app-service Manage Create Arc Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md
Title: 'Set up Azure Arc for App Service, Functions, and Logic Apps' description: For your Azure Arc-enabled Kubernetes clusters, learn how to enable App Service apps, function apps, and logic apps.++ Last updated 03/24/2023
app-service Manage Custom Dns Buy Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md
ms.assetid: 70fb0e6e-8727-4cca-ba82-98a4d21586ff
Last updated 01/31/2023 + # Buy an App Service domain and configure an app with it
app-service Manage Custom Dns Migrate Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-migrate-domain.md
Title: Migrate an active DNS name description: Learn how to migrate a custom DNS domain name that is already assigned to a live site to Azure App Service without any downtime. tags: top-support-issue-++ ms.assetid: 10da5b8a-1823-41a3-a2ff-a0717c2b5c2d Last updated 01/31/2023
app-service Manage Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-move-across-regions.md
Title: Move an app to another region description: Learn how to move App Service resources from one region to another.-++ Last updated 02/27/2020
Delete the source app and App Service plan. [An App Service plan in the non-free
## Next steps
-[Azure App Service App Cloning Using PowerShell](app-service-web-app-cloning.md)
+[Azure App Service App Cloning Using PowerShell](app-service-web-app-cloning.md)
app-service Operating System Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/operating-system-functionality.md
Title: Operating system functionality description: Learn about the OS functionality in Azure App Service on Windows. Find out what types of file, network, and registry access your app gets. -++ ms.assetid: 39d5514f-0139-453a-b52e-4a1c06d8d914 Last updated 01/21/2022
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md
Title: 'App Service on Azure Arc'
description: An introduction to App Service integration with Azure Arc for Azure operators. Last updated 03/15/2023++ # App Service, Functions, and Logic Apps on Azure Arc (Preview)
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
ms.assetid: b7151b57-09e5-4c77-a10c-375a262f17e5
Last updated 02/03/2023 -+ # Authentication and authorization in Azure App Service and Azure Functions
app-service Overview Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md
description: Learn how you can troubleshoot issues with your app in Azure App Se
keywords: app service, azure app service, diagnostics, support, web app, troubleshooting, self-help Previously updated : 06/29/2013 Last updated : 06/29/2023
app-service Overview Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-manage-costs.md
Title: Plan to manage costs for App Service description: Learn how to plan for and manage costs for Azure App Service by using cost analysis in the Azure portal.++
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
description: Learn how managed identities work in Azure App Service and Azure Fu
Last updated 06/27/2023 -+ # How to use managed identities for App Service and Azure Functions
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
ms.assetid: 94af2caf-a2ec-4415-a097-f60694b860b3
Last updated 07/19/2023 + # App Service overview
app-service Quickstart Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arc.md
Last updated 06/30/2022 ms.devlang: azurecli++ # Create an App Service app on Azure Arc (Preview)
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure'
description: Get started with Azure App Service by deploying your first Python app to Azure App Service. Last updated 07/26/2023--+ ms.devlang: python
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
The following environment variables are related to the [push notifications](/pre
| `WEBSITE_PUSH_TAGS_DYNAMIC` | Read-only. Contains a list of tags in the notification registration that were added automatically. | >[!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
<!-- ## WellKnownAppSettings
app-service Resources Kudu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/resources-kudu.md
Title: Kudu service overview description: Learn about the engine that powers continuous deployment in App Service and its features.++ Last updated 03/17/2021
app-service Scenario Secure App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md
Last updated 04/05/2023
ms.devlang: csharp-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.
app-service Scenario Secure App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-user.md
Last updated 06/28/2023
ms.devlang: csharp-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph for a signed-in user.
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
Last updated 07/31/2023
ms.devlang: csharp, azurecli-+ #Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
app-service Scenario Secure App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-authentication-app-service.md
Last updated 06/25/2023 -+ #Customer intent: As an application developer, enable authentication and authorization for a web app running on Azure App Service.
app-service Scenario Secure App Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-overview.md
Last updated 12/10/2021 -+ #Customer intent: As an application developer, I want to learn how to secure access to a web app running on Azure App Service.
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
keywords: app service, azure app service, authN, authZ, secure, security, multi-
ms.devlang: csharp Last updated 3/08/2023-+ zone_pivot_groups: app-service-platform-windows-linux # Requires non-internal subscription - internal subscriptions doesn't provide permission to correctly configure AAD apps
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
Last updated 03/14/2023
ms.devlang: javascript-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.
app-service Tutorial Connect App Access Microsoft Graph As User Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-user-javascript.md
Last updated 03/08/2022
ms.devlang: csharp-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph for a signed-in user.
app-service Tutorial Connect App Access Sql Database As User Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-sql-database-as-user-dotnet.md
ms.devlang: csharp-+ Last updated 04/21/2023
app-service Tutorial Connect App Access Storage Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-storage-javascript.md
Last updated 07/31/2023
ms.devlang: javascript, azurecli-+ #Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
app-service Tutorial Connect App App Graph Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-app-graph-javascript.md
keywords: app service, azure app service, authN, authZ, secure, security, multi-
ms.devlang: javascript Last updated 3/13/2023-+ zone_pivot_groups: app-service-platform-windows-linux # Requires non-internal subscription - internal subscriptions doesn't provide permission to correctly configure AAD apps
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
keywords: azure app service, web app, security, msi, managed service identity, m
ms.devlang: csharp,java,javascript,python Last updated 04/12/2022-+ # Tutorial: Connect to Azure databases from App Service without secrets using a managed identity
app-service Tutorial Connect Msi Key Vault Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-javascript.md
Last updated 10/26/2021 -+ # Tutorial: Secure Cognitive Service connection from JavaScript App Service using Key Vault
app-service Tutorial Connect Msi Key Vault Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-php.md
Last updated 10/26/2021 -+ # Tutorial: Secure Cognitive Service connection from PHP App Service using Key Vault
app-service Tutorial Connect Msi Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault.md
Last updated 10/26/2021 -+ # Tutorial: Secure Cognitive Service connection from .NET App Service using Key Vault
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
description: Secure Azure SQL Database connectivity with managed identity from a
ms.devlang: csharp Last updated 04/01/2023-+ # Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity
You're now ready to develop and debug your app with the SQL Database as the back
> It is replaced with the new **Azure Identity client library** available for .NET, Java, TypeScript and Python and should be used for all new development. > Information about how to migrate to `Azure Identity` can be found here: [AppAuthentication to Azure.Identity Migration Guidance](/dotnet/api/overview/azure/app-auth-migration).
-The steps you follow for your project depends on whether you're using [Entity Framework](/ef/ef6/) (default for ASP.NET) or [Entity Framework Core](/ef/core/) (default for ASP.NET Core).
+The steps you follow for your project depend on whether you're using [Entity Framework Core](/ef/core/) (default for ASP.NET Core) or [Entity Framework](/ef/ef6/) (default for ASP.NET).
+
+# [Entity Framework Core](#tab/efcore)
+
+1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient):
+
+ ```powershell
+ Install-Package Microsoft.Data.SqlClient -Version 5.1.0
+ ```
+
+1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
+
+ ```json
+ "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;"
+ ```
+
+ > [!NOTE]
+ > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI.
+ >
+
+ That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
+
+1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
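If you want to sanity-check the connection string before running the app, a quick PowerShell probe works too. This is a sketch, not part of the tutorial: it assumes the SqlServer PowerShell module is installed, and locally the token comes from your Visual Studio, Visual Studio Code, or Azure CLI sign-in.

```powershell
# Minimal sketch: probe the database with Active Directory Default authentication.
# Requires the SqlServer module (Install-Module SqlServer).
Invoke-Sqlcmd `
    -ConnectionString "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default;Database=<database-name>;" `
    -Query "SELECT SUSER_SNAME() AS CurrentLogin;"
```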
# [Entity Framework](#tab/ef)
The steps you follow for your project depend on whether you're using [Entity Fr
1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
-# [Entity Framework Core](#tab/efcore)
-
-1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient):
-
- ```powershell
- Install-Package Microsoft.Data.SqlClient -Version 5.1.0
- ```
-
-1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
-
- ```json
- "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;"
- ```
-
- > [!NOTE]
- > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI.
- >
-
- That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
-
-1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
- -- ## 4. Use managed identity connectivity
app-service Tutorial Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-overview.md
description: Your app service may need to connect to other Azure services such a
Last updated 02/16/2022+ # Securely connect to Azure services and databases from Azure App Service
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
ms.devlang: csharp -+ # Tutorial: Deploy an ASP.NET Core and Azure SQL Database app to Azure App Service
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
ms.devlang: java Last updated 5/27/2022-+ # Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
ms.devlang: java Last updated 12/10/2018-+ # Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
Last updated 08/14/2023 -+ # Tutorial: Connect to a PostgreSQL Database from Java Tomcat App Service without secrets using a managed identity
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md
Last updated 09/06/2022
ms.role: developer ms.devlang: javascript-++ # Deploy a Node.js + MongoDB web app to Azure
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
Title: 'Tutorial: PHP app with MySQL and Redis' description: Learn how to get a PHP app working in Azure, with connection to a MySQL database and a Redis cache in Azure. Laravel is used in the tutorial.-++ ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Last updated 06/30/2023-+ # Tutorial: Deploy a PHP, MySQL, and Redis app to Azure App Service
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
description: Create a Python Django or Flask web app with a PostgreSQL database
ms.devlang: python Last updated 02/28/2023-
-zone_pivot_groups: deploy-python-web-app-postgresql
++ # Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
In this tutorial, you'll deploy a data-driven Python web app (**[Django](https:/
* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python). * Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/) - ## Sample application Sample Python applications using the Flask and Django framework are provided to help you follow along with this tutorial. To deploy them without running them locally, skip this part.
If you can't connect to the SSH session, then the app itself has failed to start
If you encounter any errors related to connecting to the database, check if the app settings (`AZURE_POSTGRESQL_CONNECTIONSTRING`) have been changed. Without that connection string, the migrate command can't communicate with the database. --
-## Provision and deploy using the Azure Developer CLI
-
-Sample Python application templates using the Flask and Django framework are provided for this tutorial. The [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) greatly streamlines the process of provisioning application resources and deploying code on Azure. For a more step-by-step approach using the Azure portal and other tools, toggle to the **Azure portal** approach at the top of the page.
-
-The Azure Developer CLI (azd) provides end-to-end support for project initialization, provisioning, deploying, monitoring and scaffolding a CI/CD pipeline to run against real Azure resources. You can use `azd` to provision and deploy the resources for the sample application in an automated and streamlined way.
-
-Follow the steps below to setup the Azure Developer CLI and provision and deploy the sample application:
-
-1. Install the Azure Developer CLI. For a full list of supported installation options and tools, visit the [installation guide](/azure/developer/azure-developer-cli/install-azd).
-
- ### [Windows](#tab/windows)
-
- ```azdeveloper
- powershell -ex AllSigned -c "Invoke-RestMethod 'https://aka.ms/install-azd.ps1' | Invoke-Expression"
- ```
-
- ### [macOS/Linux](#tab/mac-linux)
-
- ```azdeveloper
- curl -fsSL https://aka.ms/install-azd.sh | bash
- ```
-
-
-
-1. Run the `azd init` command to initialize the `azd` app template. Include the `--template` parameter to specify the name of an existing `azd` template you wish to use. More information about working with templates is available on the [choose an `azd` template](/azure/developer/azure-developer-cli/azd-templates) page.
-
- ### [Flask](#tab/flask)
-
- For this tutorial, Flask users should specify the [Python (Flask) web app with PostgresSQL](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app.git) template.
-
- ```bash
- azd init --template msdocs-flask-postgresql-sample-app
- ```
-
- ### [Django](#tab/django)
-
- For this tutorial, Django users should specify the [Python (Django) web app with PostgresSQL](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git) template.
-
- ```bash
- azd init --template msdocs-django-postgresql-sample-app
- ```
-
-1. Run the `azd auth login` command to sign-in to Azure.
-
- ```bash
- azd auth login
- ```
-
-1. Run the `azd up` command to provision the necessary Azure resources and deploy the app code. The `azd up` command will also prompt you to select the desired subscription and location to deploy to.
-
- ```bash
- azd up
- ```
-
-1. When the `azd up` command finishes running, the URL for your deployed web app in the console will be printed. Click, or copy and paste the web app URL into your browser to explore the running app and verify that it is working correctly. All of the Azure resources and application code were set up for you by the `azd up` command.
-
- The name of the resource group that was created is also displayed in the console output. Locate the resource group in the Azure portal to see all of the provisioned resources.
-
- :::image type="content" border="False" source="./media/tutorial-python-postgresql-app/azd-resources-small.png" lightbox="./media/tutorial-python-postgresql-app/azd-resources.png" alt-text="A screenshot showing the resources deployed by the Azure Developer CLI.":::
-
-The Azure Developer CLI also enables you to configure your application to use a CI/CD pipeline for deployments, setup monitoring functionality, and even remove the provisioned resources if you want to tear everything down. For more information about these additional workflows, visit the project [README](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md).
-
-## Explore the completed azd project template workflow
-
-The sections ahead review the steps that `azd` handled for you in more depth. You can explore this workflow to better understand the requirements for deploying your own apps to Azure. When you ran `azd up`, the Azure Developer CLI completed the following steps:
-
-> [!NOTE]
-> You can also use the steps outlined in the **Azure portal** version of this flow to gain additional insights into the tasks that `azd` completed for you.
-
-### 1. Cloned and initialized the project
-
-The `azd init` command cloned the sample app project template to your machine. The project template includes the following components:
-
-* **Source code**: The code and assets for a Flask or Django web app that can be used for local development or deployed to Azure.
-* **Bicep files**: Infrastructure as code (IaC) files that are used by `azd` to create the necessary resources in Azure.
-* **Configuration files**: Essential configuration files such as `azure.yaml` that are used by `azd` to provision, deploy and wire resources together to produce a fully fledged application.
-
-### 2. Provisioned the Azure resources
-
-The `azd up` command created all of the resources for the sample application in Azure using the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project template. [Bicep](/azure/azure-resource-manager/bicep/overview?tabs=bicep) is a declarative language used to manage Infrastructure as Code in Azure. Some of the key resources and configurations created by the template include:
-
-* **Resource group**: A resource group was created to hold all of the other provisioned Azure resources. The resource group keeps your resources well organized and easier to manage. The name of the resource group is based off of the environment name you specified during the `azd up` initialization process.
-* **Azure Virtual Network**: A virtual network was created to enable the provisioned resources to securely connect and communicate with one another. Related configurations such as setting up a private DNS zone link were also applied.
-* **Azure App Service plan**: An App Service plan was created to host App Service instances. App Service plans define what compute resources are available for one or more web apps.
-* **Azure App Service**: An App Service instance was created in the new App Service plan to host and run the deployed application. In this case a Linux instance was created and configured to run Python apps. Additional configurations were also applied to the app service, such as setting the Postgres connection string and secret keys.
-* **Azure Database for PostgreSQL**: A Postgres database and server were created for the app hosted on App Service to connect to. The required admin user, network and connection settings were also configured.
-* **Azure Application Insights**: Application insights was set up and configured for the app hosted on the App Service. This service enables detailed telemetry and monitoring for your application.
-
-You can inspect the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project to understand how each of these resources were provisioned in more detail. The `resources.bicep` file defines most of the different services created in Azure. For example, the App Service plan and App Service web app instance were created and connected using the following Bicep code:
-
-### [Flask](#tab/flask)
-
-```yaml
-resource appServicePlan 'Microsoft.Web/serverfarms@2021-03-01' = {
- name: '${prefix}-service-plan'
- location: location
- tags: tags
- sku: {
- name: 'B1'
- }
- properties: {
- reserved: true
- }
-}
-
-resource web 'Microsoft.Web/sites@2022-03-01' = {
- name: '${prefix}-app-service'
- location: location
- tags: union(tags, { 'azd-service-name': 'web' })
- kind: 'app,linux'
- properties: {
- serverFarmId: appServicePlan.id
- siteConfig: {
- alwaysOn: true
- linuxFxVersion: 'PYTHON|3.10'
- ftpsState: 'Disabled'
- appCommandLine: 'startup.sh'
- }
- httpsOnly: true
- }
- identity: {
- type: 'SystemAssigned'
- }
-```
-
-### [Django](#tab/django)
-
-```yml
-resource appServicePlan 'Microsoft.Web/serverfarms@2021-03-01' = {
- name: '${prefix}-service-plan'
- location: location
- tags: tags
- sku: {
- name: 'B1'
- }
- properties: {
- reserved: true
- }
-}
-
-resource web 'Microsoft.Web/sites@2022-03-01' = {
- name: '${prefix}-app-service'
- location: location
- tags: union(tags, { 'azd-service-name': 'web' })
- kind: 'app,linux'
- properties: {
- serverFarmId: appServicePlan.id
- siteConfig: {
- alwaysOn: true
- linuxFxVersion: 'PYTHON|3.10'
- ftpsState: 'Disabled'
- appCommandLine: 'startup.sh'
- }
- httpsOnly: true
- }
- identity: {
- type: 'SystemAssigned'
- }
-
-```
---
-The Azure Database for PostgreSQL was also created using the following Bicep:
-
-```yml
-resource postgresServer 'Microsoft.DBforPostgreSQL/flexibleServers@2022-01-20-preview' = {
- location: location
- tags: tags
- name: pgServerName
- sku: {
- name: 'Standard_B1ms'
- tier: 'Burstable'
- }
- properties: {
- version: '12'
- administratorLogin: 'postgresadmin'
- administratorLoginPassword: databasePassword
- storage: {
- storageSizeGB: 128
- }
- backup: {
- backupRetentionDays: 7
- geoRedundantBackup: 'Disabled'
- }
- network: {
- delegatedSubnetResourceId: virtualNetwork::databaseSubnet.id
- privateDnsZoneArmResourceId: privateDnsZone.id
- }
- highAvailability: {
- mode: 'Disabled'
- }
- maintenanceWindow: {
- customWindow: 'Disabled'
- dayOfWeek: 0
- startHour: 0
- startMinute: 0
- }
- }
-
- dependsOn: [
- privateDnsZoneLink
- ]
-}
-```
-
-### 3. Deployed the application
-
-The `azd up` command also deployed the sample application code to the provisioned Azure resources. The Developer CLI understands how to deploy different parts of your application code to different services in Azure using the `azure.yaml` file at the root of the project. The `azure.yaml` file specifies the app source code location, the type of app, and the Azure Service that should host that app.
-
-Consider the following `azure.yaml` file. These configurations tell the Azure Developer CLI that the Python code that lives at the root of the project should be deployed to the created App Service.
-
-### [Flask](#tab/flask)
-
-```yml
-name: flask-postgresql-sample-app
-metadata:
- template: flask-postgresql-sample-app@0.0.1-beta
-
- web:
- project: .
- language: py
- host: appservice
-```
-
-### [Django](#tab/django)
-
-```yml
-name: django-postgresql-sample-app
-metadata:
- template: django-postgresql-sample-app@0.0.1-beta
-
- web:
- project: .
- language: py
- host: appservice
-```
---
-## Remove the resources
-
-Once you are finished experimenting with your sample application, you can run the `azd down` command to remove the app from Azure. Removing resources helps to avoid unintended costs or unused services in your Azure subscription.
-
-```bash
-azd down
-```
-- ## Frequently asked questions - [How much does this setup cost?](#how-much-does-this-setup-cost)
app-service Tutorial Ruby Postgres App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-ruby-postgres-app.md
description: Learn how to get a Linux Ruby app working in Azure App Service, wit
ms.devlang: ruby Last updated 06/18/2020-+ # Build a Ruby and Postgres app in Azure App Service on Linux
app-service Tutorial Secure Domain Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-secure-domain-certificate.md
Title: 'Tutorial: Secure app with a custom domain and certificate' description: Learn how to secure your brand with App Service using a custom domain and enabling App Service managed certificate. ++ Last updated 01/31/2023
See [Add a private certificate to your app](configure-ssl-certificate.md) and [S
- [Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md) - [Purchase an App Service domain](manage-custom-dns-buy-domain.md) - [Add a private certificate to your app](configure-ssl-certificate.md)-- [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
+- [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
app-service Tutorial Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-send-email.md
description: Learn how to invoke business processes from your App Service app. S
Last updated 04/08/2020 ms.devlang: csharp, javascript, php, python, ruby-+ # Tutorial: Send email and invoke other business processes from App Service
app-service Webjobs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md
description: Learn how to use WebJobs to run background tasks in Azure App Servi
ms.assetid: af01771e-54eb-4aea-af5f-f883ff39572b Last updated 7/30/2023--++ #Customer intent: As a web developer, I want to leverage background tasks to keep my application running smoothly. adobe-target: true
automation Automation Use Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-use-azure-ad.md
description: This article tells how to use Azure AD within Azure Automation as t
Last updated 05/26/2023 -+ # Use Azure AD to authenticate to Azure
automation Context Switching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/context-switching.md
description: This article explains context switching and how to avoid runbook is
Previously updated : 09/27/2021 Last updated : 08/18/2023 #Customer intent: As a developer, I want to understand Azure context so that I can avoid error when running multiple runbooks.
While you may not come across an issue if you don't follow these recommendations
`The subscription named <subscription name> cannot be found.` ```error
-Get-AzVM : The client '<automation-runas-account-guid>' with object id '<automation-runas-account-guid>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '.
+Get-AzVM : The client '<clientid>' with object id '<objectid>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '.
ErrorCode: AuthorizationFailed StatusCode: 403 ReasonPhrase: Forbidden Operation
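To avoid this class of error, capture the intended subscription context once and pass it explicitly to each cmdlet instead of relying on the ambient context. A minimal sketch, assuming the Az PowerShell modules and placeholder values:

```powershell
# Pin the context to the subscription that actually contains the VM, then pass
# it explicitly so a context switch in another runbook can't redirect the call.
$context = Set-AzContext -Subscription '<subscription-id>'
Get-AzVM -ResourceGroupName '<resource-group>' -Name '<vm-name>' -DefaultProfile $context
```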
automation Manage Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-office-365.md
description: This article tells how to use Azure Automation to manage Office 365
Last updated 11/05/2020 + # Manage Office 365 services
To publish and then schedule your runbook, see [Manage runbooks in Azure Automat
* For details of credential use, see [Manage credentials in Azure Automation](shared-resources/credentials.md). * For information about modules, see [Manage modules in Azure Automation](shared-resources/modules.md). * If you need to start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
-* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview).
+* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview).
automation Runbook Input Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runbook-input-parameters.md
Title: Configure runbook input parameters in Azure Automation
description: This article tells how to configure runbook input parameters, which allow data to be passed to a runbook when it's started. Previously updated : 05/26/2023 Last updated : 08/18/2023
You can configure input parameters for PowerShell, PowerShell Workflow, graphica
You assign values to the input parameters for a runbook when you start it. You can start a runbook from the Azure portal, a web service, or PowerShell. You can also start one as a child runbook that is called inline in another runbook.
-### Configure input parameters in PowerShell runbooks
+## Configure input parameters in PowerShell runbooks
PowerShell and PowerShell Workflow runbooks in Azure Automation support input parameters that are defined through the following properties.
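For example, a PowerShell runbook typically declares its inputs in a standard `param` block; this is a generic sketch with illustrative names:

```powershell
param (
    # Mandatory input supplied when the runbook is started
    [Parameter(Mandatory = $true)]
    [string] $VMName,

    # Optional input with a default value
    [Parameter(Mandatory = $false)]
    [int] $RepeatCount = 1
)
```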
To illustrate the configuration of input parameters for a graphical runbook, let
A graphical runbook uses these major runbook activities:
-* Configuration of the Azure Run As account to authenticate with Azure.
+* Authentication with Azure using the managed identity configured for the Automation account (sketched in PowerShell after this list).
* Definition of a [Get-AzVM](/powershell/module/az.compute/get-azvm) cmdlet to get VM properties. * Use of the [Write-Output](/powershell/module/microsoft.powershell.utility/write-output) activity to output the VM names.
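In PowerShell terms, those activities correspond roughly to the following sketch, assuming a system-assigned managed identity on the Automation account:

```powershell
# Authenticate as the Automation account's managed identity
Connect-AzAccount -Identity

# Get VM properties and output the VM names
$vms = Get-AzVM
Write-Output $vms.Name
```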
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
description: This article tells how to troubleshoot and resolve issues that aris
Last updated 04/26/2023 -+ # Troubleshoot agent-based Hybrid Runbook Worker issues in Automation
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md
Title: Troubleshoot Azure Automation runbook issues description: This article tells how to troubleshoot and resolve issues with Azure Automation runbooks. Previously updated : 03/06/2023 Last updated : 08/18/2023
It fails with the following error:
### Cause
-The naming convention is not being followed. Ensure that your runbook name starts with a letter and can contain letters, numbers, underscores, and dashes. The naming convention requirements are now being enforced starting with the Az module version 1.9 through the portal and cmdlets.
+Code introduced in version [1.9.0](https://www.powershellgallery.com/packages/Az.Automation/1.9.0) of the Az.Automation module validates the names of runbooks before starting them, and incorrectly flags runbooks with multiple hyphens (`-`) or an underscore (`_`) in the name as invalid.
### Workaround
-We recommend that you follow the runbook naming convention or revert to [1.8.0 version](https://www.powershellgallery.com/packages/Az.Automation/1.8.0) of the module where the naming convention isn't enforced.
+We recommend that you revert to [version 1.8.0](https://www.powershellgallery.com/packages/Az.Automation/1.8.0) of the module, as shown below.
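A minimal sketch of the revert (run from an elevated session if modules are installed machine-wide):

```powershell
# Install the earlier module version side by side, then load it explicitly
Install-Module -Name Az.Automation -RequiredVersion 1.8.0 -Force
Import-Module -Name Az.Automation -RequiredVersion 1.8.0
```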
+### Resolution
+
+We're currently working to deploy a fix for this issue.
## Diagnose runbook issues
To determine what's wrong, follow these steps:
1. If the error appears to be transient, try adding retry logic to your authentication routine to make authenticating more robust. ```powershell
- # Get the connection "AzureRunAsConnection"
- $connectionName = "AzureRunAsConnection"
- $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName
- $logonAttempt = 0 $logonResult = $False
To determine what's wrong, follow these steps:
$LogonAttempt++ #Logging in to Azure... $connectionResult = Connect-AzAccount `
- -ServicePrincipal `
- -Tenant $servicePrincipalConnection.TenantId `
- -ApplicationId $servicePrincipalConnection.ApplicationId `
- -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
- Start-Sleep -Seconds 30 } ```
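With the Run As pieces removed, the retry loop reduces to something like the following sketch, assuming the Automation account's managed identity is used for sign-in:

```powershell
$logonAttempt = 0
$connectionResult = $null
while (!($connectionResult) -and ($logonAttempt -le 10)) {
    $logonAttempt++
    # Logging in to Azure with the managed identity; wait between retries
    $connectionResult = Connect-AzAccount -Identity
    Start-Sleep -Seconds 30
}
```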
The runbook isn't using the correct context when running. This may be because th
You may see errors like this one: ```error
-Get-AzVM : The client '<automation-runas-account-guid>' with object id '<automation-runas-account-guid>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '.
+Get-AzVM : The client '<client-id>' with object id '<object-id>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '.
ErrorCode: AuthorizationFailed StatusCode: 403 ReasonPhrase: Forbidden Operation
To use a service principal with Azure Resource Manager cmdlets, see [Creating se
Your runbook fails with an error similar to the following example: ```error
-Exception: A task was canceled.
+Exception: A task was cancelled.
``` ### Cause
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
You can use an online tool to base64 encode your desired username and password o
PowerShell ```console
-[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('<your string to encode here>'))
+[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>'))
#Example
-#[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('example'))
+#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example'))
```
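To confirm the value round-trips correctly, you can decode it again; `ZXhhbXBsZQ==` is what the example string above encodes to:

```powershell
# Decode to verify: prints 'example'
[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String('ZXhhbXBsZQ=='))
```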
azure-arc Create Postgresql Server Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server-kubernetes-native-tools.md
To create a PostgreSQL server using Kubernetes tools, you will need to have the
## Overview
-To create a PostgreSQL server, you need to create a Kubernetes secret to store your postgres administrator login and password securely and a PostgreSQL server custom resource based on the _postgresqls_ custom resource definitions.
+To create a PostgreSQL server, you need to create a Kubernetes secret to securely store your postgres administrator login and password, and a PostgreSQL server custom resource based on the `postgresqls` custom resource definition.
## Create a yaml file
You can use an online tool to base64 encode your desired username and password o
PowerShell ```console
-[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('<your string to encode here>'))
+[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>'))
#Example
-#[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('example'))
+#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example'))
```
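As an alternative to encoding values by hand, `kubectl` can create the secret and perform the base64 encoding itself; a sketch with placeholder names:

```powershell
# kubectl base64-encodes the literals, so no manual encoding step is needed
kubectl create secret generic <postgres-login-secret> --namespace <namespace> `
  --from-literal=username=<your-username> --from-literal=password=<your-password>
```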
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Micros
### Version support
-The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension.
-
-Starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023), ARM64-based clusters are supported.
+The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. Starting with `microsoft.flux` version 1.7.0, ARM64-based clusters are supported.
> [!NOTE] > If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible.
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 04/17/2023 Last updated : 08/17/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
For more information, see [Tutorial: Deploy applications using GitOps with Flux
The currently supported versions of the `microsoft.flux` extension are described below. The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension.
-### 1.7.3 (April 2023)
+> [!IMPORTANT]
+> Eventually, a major version update (v2.x.x) for the `microsoft.flux` extension will be released. When this happens, clusters won't be auto-upgraded to this version, since [auto-upgrade is only supported for minor version releases](extensions.md#upgrade-extension-instance). If you're still using an older API version when the next major version is released, you'll need to update your manifests to the latest API versions, perform any necessary testing, then upgrade your extension manually. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).
+
+### 1.7.5 (August 2023)
+
+Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)
+
+- source-controller: v1.0.1
+- kustomize-controller: v1.0.1
+- helm-controller: v0.35.0
+- notification-controller: v1.0.0
+- image-automation-controller: v0.35.0
+- image-reflector-controller: v0.29.1
+
+Changes made for this version:
+
+- Upgrades Flux to [v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)
+- Promotes some APIs to v1. This change should not affect any existing Flux configurations that have already been deployed. Previous API versions will still be supported in all `microsoft.flux` v1.x.x releases. However, we recommend that you update the API versions in your manifests as soon as possible. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).
+- Adds support for [Helm drift detection](tutorial-use-gitops-flux2.md#helm-drift-detection) and [OOM watch](tutorial-use-gitops-flux2.md#helm-oom-watch).
+
+### 1.7.4 (June 2023)
Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.
Changes made for this version: -- Upgrades Flux to [v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)-- Fixes issue causing resources that were deployed as part of Flux configuration to persist even when the configuration was deleted with prune flag set to `true`-- Kubelet identity support for image-reflector-controller by [installing the microsoft.flux extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled) -
-### 1.7.0 (March 2023)
-
-Flux version: [Release v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0)
+- Adds support for [`wait`](https://fluxcd.io/flux/components/kustomize/kustomization/#wait) and [`postBuild`](https://fluxcd.io/flux/components/kustomize/kustomization/#post-build-variable-substitution) properties as optional parameters for kustomization. By default, `wait` will be set to `true` for all Flux configurations, and `postBuild` will be null. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L55))
-- source-controller: v0.34.0-- kustomize-controller: v0.33.0-- helm-controller: v0.29.0-- notification-controller: v0.31.0-- image-automation-controller: v0.29.0-- image-reflector-controller: v0.24.0-
-Changes made for this version:
+- Adds support for optional properties [`waitForReconciliation`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1299C14-L1299C35) and [`reconciliationWaitDuration`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1304).
-- Upgrades Flux to [v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0)-- Flux extension is now supported on ARM64-based clusters
+ By default, `waitForReconciliation` is set to false, so when creating a flux configuration, the `provisioningState` returns `Succeeded` once the configuration reaches the cluster and the ARM template or Azure CLI command successfully exits. However, the actual state of the objects being deployed as part of the configuration is tracked by `complianceState`, which can be viewed in the portal or by using Azure CLI. Setting `waitForReconciliation` to true and specifying a `reconciliationWaitDuration` means that the template or CLI deployment will wait for `complianceState` to reach a terminal state (success or failure) before exiting. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L72))
-### 1.6.4 (February 2023)
-
-Changes made for this version:
--- Disabled extension reconciler (which attempts to restore the Flux extension if it fails). This resolves a potential bug where, if the reconciler is unable to recover a failed Flux extension and `prune` is set to `true`, the extension and deployed objects may be deleted.-
-### 1.6.3 (December 2022)
+### 1.7.3 (April 2023)
-Flux version: [Release v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0)
+Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
-- source-controller: v0.32.1-- kustomize-controller: v0.31.0-- helm-controller: v0.27.0-- notification-controller: v0.29.0-- image-automation-controller: v0.27.0-- image-reflector-controller: v0.23.0
+- source-controller: v0.36.1
+- kustomize-controller: v0.35.1
+- helm-controller: v0.31.2
+- notification-controller: v0.33.0
+- image-automation-controller: v0.31.0
+- image-reflector-controller: v0.26.1
Changes made for this version: -- Upgrades Flux to [v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0)-- Adds exception for [aad-pod-identity in flux extension](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-azure-ad-pod-identity-enabled)-- Enables reconciler for flux extension
+- Upgrades Flux to [v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
+- Fixes issue causing resources that were deployed as part of Flux configuration to persist even when the configuration was deleted with prune flag set to `true`
+- Kubelet identity support for image-reflector-controller by [installing the microsoft.flux extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled)
## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
azure-arc Monitor Gitops Flux 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md
Title: Monitor GitOps (Flux v2) status and activity Previously updated : 08/11/2023 Last updated : 08/17/2023 description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2.
Follow these steps to import dashboards that let you monitor Flux extension depl
> [!NOTE] > These steps describe the process for importing the dashboard to [Azure Managed Grafana](/azure/managed-grafana/overview). You can also [import this dashboard to any Grafana instance](https://grafana.com/docs/grafana/latest/dashboards/manage-dashboards/#import-a-dashboard). With this option, a service principal must be used; managed identity is not supported for data connection outside of Azure Managed Grafana.
-1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Reader** level permissions. You can check your access by going to **Access control (IAM)** on the Grafana instance.
-1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it a Reader role on the subscription(s):
+1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Grafana Editor** level permissions to view and edit dashboards. You can check your access by going to **Access control (IAM)** on the Grafana instance.
+1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it the **Monitoring Reader** role on the subscription(s):
1. In the Azure portal, navigate to the subscription that you want to add. 1. Select **Access control (IAM)**. 1. Select **Add role assignment**.
- 1. Select the **Reader** role, then select **Next**.
+ 1. Select the **Monitoring Reader** role, then select **Next**.
1. On the **Members** tab, select **Managed identity**, then choose **Select members**. 1. From the **Managed identity** list, select the subscription where you created your Azure Managed Grafana Instance. Then select **Azure Managed Grafana** and the name of your Azure Managed Grafana instance. 1. Select **Review + Assign**.
- If you're using a service principal, grant the **Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.)
+ If you're using a service principal, grant the **Monitoring Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.)
1. [Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance. This connection lets the dashboard access Azure Resource Graph data. 1. Download the [GitOps Flux - Application Deployments Dashboard](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/GitOps%20Flux%20-%20Application%20Deployments%20Dashboard.json).
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/release-notes.md
Title: "What's new with Azure Arc-enabled Kubernetes" Previously updated : 05/23/2023 Last updated : 08/21/2023 description: "Learn about the latest releases of Arc-enabled Kubernetes."
Azure Arc-enabled Kubernetes is updated on an ongoing basis. To stay up to date
> > We generally recommend using the most recent versions of the agents. The [version support policy](agent-upgrade.md#version-support-policy) covers the most recent version and the two previous versions (N-2).
+## July 2023
+
+### Arc agents - Version 1.12.5
+
- The Alpine base image powering our Arc agent containers has been updated from 3.7.12 to 3.18.0
+ ## May 2023 ### Arc agents - Version 1.11.7
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 06/29/2023 Last updated : 08/16/2023
To deploy applications using GitOps with Flux v2, you need:
#### For Azure Arc-enabled Kubernetes clusters
-* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023).
+* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops).
[Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
False whl k8s-extension C:\Users\somename\.azure\c
#### For Azure Arc-enabled Kubernetes clusters
-* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023).
+* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops).
[Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
spec:
When you use this annotation, the deployed HelmRelease is patched with the reference to the configured source. Currently, only `GitRepository` source is supported.
+### Helm drift detection
+
+[Drift detection for Helm releases](https://fluxcd.io/flux/components/helm/helmreleases/#drift-detection) isn't enabled by default. Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm drift detection by running the following command:
+
+```azurecli
+az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.detectDrift=true
+```
+
+### Helm OOM watch
+
+Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm OOM watch. For more information, see [Enable Helm near OOM detection](https://fluxcd.io/flux/cheatsheets/bootstrap/#enable-helm-near-oom-detection).
+
+Be sure to review potential [remediation strategies](https://fluxcd.io/flux/components/helm/helmreleases/#configuring-failure-remediation) and apply them as needed when enabling this feature.
+
+To enable OOM watch, run the following command:
+
+```azurecli
+az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.outOfMemoryWatch.enabled=true helm-controller.outOfMemoryWatch.memoryThreshold=70 helm-controller.outOfMemoryWatch.interval=700ms
+```
+
+If you don't specify values for `memoryThreshold` and `interval`, the default memory threshold is set to 95%, with the interval at which to check the memory utilization set to 500 ms.
+ ## Delete the Flux configuration and extension Use the following commands to delete your Flux configuration and, if desired, the Flux extension itself.
For AKS clusters, you can't use the Azure portal to delete the extension. Instea
az k8s-extension delete -g <resource-group> -c <cluster-name> -n flux -t managedClusters --yes ``` ++ ## Next steps * Read more about [configurations and GitOps](conceptual-gitops-flux2.md).
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
In order to deploy Arc resource bridge, images need to be downloaded to the mana
## Exclusion list for no proxy
-The following table contains the list of addresses that must be excluded by using the `-noProxy` parameter in the `createconfig` command.
+If a proxy server is being used, the following table contains the list of addresses that should be excluded from the proxy by configuring the `noProxy` settings.
| **IP Address** | **Reason for exclusion** | | -- | |
The following table contains the list of addresses that must be excluded by usin
The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16`. While these default values will work for many networks, you may need to add more subnet ranges and/or names to the exemption list. For example, you may want to exempt your enterprise namespace (.contoso.com) from being directed through the proxy. You can achieve that by specifying the values in the `noProxy` list.
+> [!IMPORTANT]
+> When listing multiple addresses for the `noProxy` settings, don't add a space after the commas that separate the addresses. Each address must immediately follow the comma.
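+> For example, `localhost,127.0.0.1,.svc,10.0.0.0/8,.contoso.com` is parsed correctly, while `localhost, 127.0.0.1, .svc` is not (the `.contoso.com` entry is illustrative).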
## Next steps

- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details.
The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0
+
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Arc resource bridge consists of an appliance VM that is deployed to the on-premi
To fix this, the credentials in the appliance VM need to be updated. For more information, see [Update credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm).

## Networking issues

### Back-off pulling image error
When trying to set the configuration for Arc resource bridge, you may receive an
This occurs when a `.local` path is provided for a configuration setting, such as proxy, DNS, datastore, or management endpoint (such as vCenter). The Arc resource bridge appliance VM uses Azure Linux OS, which doesn't support `.local` by default. A workaround is to provide the IP address where applicable.

### Azure Arc resource bridge is unreachable

Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if not reserved. Rebooting the Azure Arc resource bridge (preview) or VM can trigger an IP address change, resulting in failing services.
When deploying the resource bridge on VMware vCenter, you specify the folder in
When deploying the resource bridge on VMware vCenter, you may get an error saying that you have insufficient permissions. To resolve this issue, make sure that your user account has all of the following privileges in VMware vCenter and then try again.
-```
+
+```python
"Datastore.AllocateSpace" "Datastore.Browse" "Datastore.DeleteFile"
When deploying the resource bridge on VMware Vcenter, you may get an error sayin
"Resource.AssignVMToPool" "Resource.HotMigrate" "Resource.ColdMigrate"
+"Sessions.ValidateSession"
"StorageViews.View" "System.Anonymous" "System.Read"
If you don't see your problem here or you can't resolve your issue, try one of t
- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-arc License Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md
+
+ Title: License provisioning guidelines for Extended Security Updates for Windows Server 2012
+description: Learn about license provisioning guidelines for Extended Security Updates for Windows Server 2012 through Azure Arc.
Last updated : 08/18/2023+++
+# License provisioning guidelines for Extended Security Updates for Windows Server 2012
+
+Flexibility is critical when enrolling end-of-support infrastructure in Extended Security Updates (ESUs) through Azure Arc to receive critical patches. To provide flexibility across virtualization and disaster recovery scenarios, you must first provision Windows Server 2012 Arc ESU licenses and then link those licenses to your Azure Arc-enabled servers. You can provision and link licenses through the Azure portal, ARM templates, the CLI, or Azure Policy.
+
+When provisioning WS2012 ESU licenses, you need to select between virtual core and physical core licensing, select between Standard and Datacenter editions, and attest to the number of associated cores (broken down by the number of 2-core and 16-core packs). To assist with this license provisioning process, this article provides general guidance and sample customer scenarios for planning your deployment of WS2012 ESUs through Azure Arc.
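+
+As a rough sketch of what template-based provisioning looks like, the following ARM template fragment creates an ESU license resource. The resource type, API version, and property names here are assumptions based on the `Microsoft.HybridCompute/licenses` resource; verify them against the current template reference before use:
+
+```json
+{
+  "type": "Microsoft.HybridCompute/licenses",
+  "apiVersion": "2023-06-20-preview",
+  "name": "WS2012-ESU-License",
+  "location": "eastus",
+  "properties": {
+    "licenseType": "ESU",
+    "licenseDetails": {
+      "state": "Activated",
+      "target": "Windows Server 2012",
+      "edition": "Standard",
+      "type": "pCore",
+      "processors": 16
+    }
+  }
+}
+```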
+
+## General guidance: Standard vs. Datacenter, Physical vs. Virtual Cores
+
+### Physical core licensing
+
+If you choose to license based on physical cores, the licensing requires a minimum of 16 physical cores per license. Most customers choose to license based on physical cores and select Standard or Datacenter edition to match their original Windows Server licensing. While Standard licensing can be applied to up to two virtual machines (VMs), Datacenter licensing has no limit to the number of VMs it can be applied to. Depending on the number of VMs covered, it may make sense to opt for the Datacenter license instead of the Standard license.
+
+### Virtual core licensing
+
+If you choose to license based on virtual cores, the licensing requires a minimum of eight virtual cores per virtual machine. There are two main scenarios where this model is advisable:
+
+1. The VM is running on a third-party host or hyperscaler, such as AWS, GCP, or OCI.
+
+1. The Windows Server was licensed on a virtualization basis. In most cases, customers elect the Standard edition for virtual core-based licenses.
+
+An additional scenario (scenario 1, below) is a candidate for VM/virtual core licensing: the WS2012 VMs are running on a newer Windows Server host (that is, Windows Server 2016 or later).
+
+> [!IMPORTANT]
+> In all cases, customers are required to attest to their conformance with SA or SPLA. There is no exception for these requirements. Software Assurance or an equivalent Server Subscription is required for customers to purchase Extended Security Updates on-premises and in hosted environments. Customers will be able to purchase Extended Security Updates via Enterprise Agreement (EA), Enterprise Subscription Agreement (EAS), a Server & Cloud Enrollment (SCE), and Enrollment for Education Solutions (EES). On Azure, customers do not need Software Assurance to get free Extended Security Updates, but Software Assurance or Server Subscription is required to take advantage of the Azure Hybrid Benefit.
+>
+
+## Scenario-based examples: Compliant and cost-effective licensing
+
+### Scenario 1: Eight modern 32-core hosts (not Windows Server 2012). While each of these hosts runs four 8-core VMs, only one VM on each host is running Windows Server 2012 R2
+
+In this scenario, you can use virtual core-based licensing to avoid covering the entire host: provision eight Windows Server 2012 Standard licenses for eight virtual cores each, and link each of those licenses to one of the VMs running Windows Server 2012 R2. Alternatively, you could consider consolidating your Windows Server 2012 R2 VMs onto two of the hosts to take advantage of physical core-based licensing options.
+
+### Scenario 2: A branch office with four VMs, each with eight cores, on a 32-core Windows Server 2012 Standard host
+
+In this case, you should provision two WS2012 Standard licenses for 16 physical cores each and apply them to the four Arc-enabled servers. Alternatively, you could provision four WS2012 Standard licenses for eight virtual cores each and apply them individually to the four Arc-enabled servers.
+
+### Scenario 3: Eight physical servers in retail stores, each running Windows Server 2012 Standard on eight cores, with no virtualization
+
+In this scenario, you should provision eight WS2012 Standard licenses for 16 physical cores each and link each license to a physical server. Note that the 16 physical core minimum applies to each provisioned license, even though each server has only eight cores.
+
+### Scenario 4: Multicloud environment with 12 AWS VMs, each of which has 12 cores and runs Windows Server 2012 R2 Standard
+
+In this scenario, you should provision 12 Windows Server 2012 Standard licenses with 12 virtual cores each, and link one to each AWS VM.
+
+### Scenario 5: Customer has already purchased the traditional Windows Server 2012 ESUs through Volume Licensing
+
+In this scenario, the Azure Arc-enabled servers that have been enrolled in Extended Security Updates through an activated MAK key are shown as enrolled in ESUs in the Azure portal. You have the flexibility to switch from this key-based traditional ESU model to WS2012 ESUs enabled by Azure Arc between Year 1 and Year 2.
+
+### Scenario 6: Migrating or retiring your Azure Arc-enabled servers enrolled in Windows Server 2012 ESUs
+
+In this scenario, you can deactivate or decommission the ESU Licenses associated with these servers. If only part of the server estate covered by a license no longer requires ESUs, you can modify the ESU license details to reduce the number of associated cores.
+
+### Scenario 7: 128-core Windows Server 2012 Datacenter server running between 10 and 15 Windows Server 2012 R2 VMs that get provisioned and deprovisioned regularly
+
+In this scenario, you should provision a Windows Server 2012 Datacenter license associated with 128 physical cores and link this license to the Arc-enabled Windows Server 2012 R2 VMs running on it. The deletion of the underlying VM also deletes the corresponding Arc-enabled server resource, enabling you to link another Arc-enabled server.
+
+## Next steps
+
+* Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy).
+
+* Learn about best practices and design patterns through the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management).
+* Learn more about [Arc-enabled servers](overview.md) and how they work with Azure through the Azure Connected Machine agent.
+* Explore options for [onboarding your machines](plan-at-scale-deployment.md) to Azure Arc-enabled servers.
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
If two agents use the same configuration, you will encounter inconsistent behavi
Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent does not run on x86 (32-bit) or ARM-based architectures.
-* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022
+* Windows Server 2008 R2 SP1, 2012, 2012 R2, 2016, 2019, and 2022
    * Both Desktop and Server Core experiences are supported
    * Azure Editions are supported on Azure Stack HCI
* Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance))
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-mon
>[!NOTE]
>In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).

### Advisor recommendations

The **Advisor recommendations** section on the left displays recommendations for your cache. During normal operations, no recommendations are displayed.
Further information can be found on the **Recommendations** in the working pane
You can monitor these metrics on the [Monitoring](cache-how-to-monitor.md) section of the Resource menu.
-Each pricing tier has different limits for client connections, memory, and bandwidth. If your cache approaches maximum capacity for these metrics over a sustained period of time, a recommendation is created. For more information about the metrics and limits reviewed by the **Recommendations** tool, see the following table:
| Azure Cache for Redis metric | More information |
| - | - |
| Network bandwidth usage |[Cache performance - available bandwidth](./cache-planning-faq.yml#azure-cache-for-redis-performance) |
Configuration and management of Azure Cache for Redis instances is managed by Mi
- ACL
- BGREWRITEAOF
- BGSAVE
-- CLUSTER - Cluster write commands are disabled, but read-only Cluster commands are permitted.
+- CLUSTER - Cluster write commands are disabled, but read-only cluster commands are permitted.
- CONFIG
- DEBUG
- MIGRATE
- PSYNC
- REPLICAOF
+- REPLCONF - Azure Cache for Redis instances don't allow customers to add external replicas. This [command](https://redis.io/commands/replconf/) is normally only sent by servers.
- SAVE
- SHUTDOWN
- SLAVEOF
For more information about Redis commands, see [https://redis.io/commands](https
- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)
- [Monitor Azure Cache for Redis](cache-how-to-monitor.md)
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
Previously updated : 06/29/2023 Last updated : 08/17/2023
Before you upgrade, check the Redis version of a cache by selecting **Properties
## Upgrade using Azure CLI
-To upgrade a cache from 4 to 6 using the Azure CLI, use the following command:
+To upgrade a cache that isn't using Private Endpoint from 4 to 6 using the Azure CLI, use the following command:
```azurecli-interactive
az redis update --name cacheName --resource-group resourceGroupName --set redisVersion=6
```
+### Private Endpoint
+
+If Private Endpoint is enabled on the cache, use the command that is appropriate based on whether `PublicNetworkAccess` is enabled or disabled:
+
+If `PublicNetworkAccess` is enabled:
+
+```azurecli
+az redis update --name <cacheName> --resource-group <resourceGroupName> --set publicNetworkAccess=Enabled redisVersion=6
+```
+
+If `PublicNetworkAccess` is disabled:
+
+```azurecli
+az redis update --name <cacheName> --resource-group <resourceGroupName> --set publicNetworkAccess=Disabled redisVersion=6
+```
+ ## Upgrade using PowerShell

To upgrade a cache from 4 to 6 using PowerShell, use the following command:
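
A minimal sketch, assuming the `-RedisVersion` parameter available in recent versions of the Az.RedisCache module (verify the parameter name against the module reference):

```azurepowershell-interactive
# Assumption: -RedisVersion is available in your installed Az.RedisCache version
Set-AzRedisCache -Name <cacheName> -ResourceGroupName <resourceGroupName> -RedisVersion "6"
```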
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
Use the following table to compare feature and functional differences between th
<sup>3</sup> C# Script functions also run in-process and use the same libraries as in-process class library functions. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md).
-<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, support from some extensions is currently in preview, and Service Bus triggers do not yet support message settlement scenarios.
+<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, Service Bus triggers do not yet support message settlement scenarios.
<sup>5</sup> ASP.NET Core types are not supported for .NET Framework.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
For some service-specific binding types, binding data can be provided using type
| Dependency | Version requirement |
|-|-|
-|[Microsoft.Azure.Functions.Worker]| For **Generally Available** extensions in the table below: 1.18.0 or later<br/>For extensions that have **preview support**: 1.15.0-preview1 |
-|[Microsoft.Azure.Functions.Worker.Sdk]|For **Generally Available** extensions in the table below: 1.13.0 or later<br/>For extensions that have **preview support**: 1.11.0-preview1 |
+|[Microsoft.Azure.Functions.Worker]| 1.18.0 or later |
+|[Microsoft.Azure.Functions.Worker.Sdk]| 1.13.0 or later |
When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`.
Each trigger and binding extension also has its own minimum version requirement,
| [Azure Service Bus][servicebus-sdk-types] | **Generally Available**<sup>2</sup> | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |

[blob-sdk-types]: ./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types
You can configure your isolated process application to emit logs directly [Appli
```dotnetcli
dotnet add package Microsoft.ApplicationInsights.WorkerService
-dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights --prerelease
+dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights
```

You then need to call `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` during service configuration in your `Program.cs` file:
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
This section contains examples that require version 3.x of Azure Cosmos DB exten
The examples refer to a simple `ToDoItem` type:

<a id="queue-trigger-look-up-id-from-json-isolated"></a>
The examples refer to a simple `ToDoItem` type:
The following example shows a function that retrieves a single document. The function is triggered by a JSON message in the storage queue. The queue trigger parses the JSON into an object of type `ToDoItemLookup`, which contains the ID and partition key value to retrieve. That ID and partition key value are used to return a `ToDoItem` document from the specified database and collection.
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log)
The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to a different Service Bus queue:
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
public static void Run(
The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to a different Service Bus queue:
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Functions version 1.x doesn't support isolated worker process. To use the isolat
[ITableEntity]: /dotnet/api/azure.data.tables.itableentity
[TableClient]: /dotnet/api/azure.data.tables.tableclient
-[TableEntity]: /dotnet/api/azure.data.tables.tableentity
[CloudTable]: /dotnet/api/microsoft.azure.cosmos.table.cloudtable
Functions version 1.x doesn't support isolated worker process. To use the isolat
[Microsoft.Azure.Cosmos.Table]: /dotnet/api/microsoft.azure.cosmos.table
[Microsoft.WindowsAzure.Storage.Table]: /dotnet/api/microsoft.windowsazure.storage.table
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
[storage-4.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/4.0.5
-[storage-5.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0
[table-api-package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Tables/
[extension bundle]: ./functions-bindings-register.md#extension-bundles
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
Title: Azure Functions C# script developer reference
description: Understand how to develop Azure Functions using C# script. Previously updated : 09/15/2022 Last updated : 08/15/2023 # Azure Functions C# script (.csx) developer reference
The following table lists the .NET attributes for each binding type and the pack
> | Storage table | [`Microsoft.Azure.WebJobs.TableAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs), [`Microsoft.Azure.WebJobs.StorageAccountAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs) | |
> | Twilio | [`Microsoft.Azure.WebJobs.TwilioSmsAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.Twilio"` |
+## Convert a C# script app to a C# project
+
+The easiest way to convert a C# script function app to a compiled C# class library project is to start with a new project. Then, for each function, migrate the code and configuration from its run.csx and function.json files in the function folder into a single new .cs class library code file. For example, a C# script function named `HelloWorld` has two files, `HelloWorld/run.csx` and `HelloWorld/function.json`; for this function, you create a code file named `HelloWorld.cs` in your new class library project.
+
+If you are using C# scripting for portal editing, you can [download the app content to your local machine](./deployment-zip-push.md#download-your-function-app-files). Choose the **Site content** option instead of **Content and Visual Studio project**. You don't need to generate a project, and don't include application settings in the download. You're defining a new development environment, and this environment shouldn't have the same permissions as your hosted app environment.
+
+These instructions show you how to convert C# script functions (which run in-process with the Functions host) to C# class library functions that run in an [isolated worker process](dotnet-isolated-process-guide.md).
+
+1. Complete the **Create a functions app project** section from your preferred quickstart:
+
+ ### [Azure CLI](#tab/azure-cli)
+ [Create a C# function in Azure from the command line](create-first-function-cli-csharp.md#create-a-local-function-project)
+ ### [Visual Studio](#tab/vs)
+ [Create your first C# function in Azure using Visual Studio](functions-create-your-first-function-visual-studio.md#create-a-function-app-project)
+ ### [Visual Studio Code](#tab/vs-code)
+ [Create your first C# function in Azure using Visual Studio Code](create-first-function-vs-code-csharp.md#create-an-azure-functions-project)
+
+
+
+1. If your original C# script code includes an `extensions.csproj` file or any `function.proj` files, copy the package references from these files and add them to the new project's `.csproj` file, in the same `ItemGroup` as the Functions core dependencies.
+
+ >[!TIP]
+ >Conversion provides a good opportunity to update to the latest versions of your dependencies. Doing so may require additional code changes in a later step.
+
+1. Copy the contents of the original `host.json` file into the new project's `host.json` file, except for the `extensionBundles` section (compiled C# projects don't use [extension bundles](functions-bindings-register.md#extension-bundles) and you must explicitly add references to all extensions used by your functions). When merging host.json files, remember that the [`host.json`](./functions-host-json.md) schema is versioned, with most apps using version 2.0. The contents of the `extensions` section can differ based on specific versions of the binding extensions used by your functions. See individual extension reference articles to learn how to correctly configure the host.json for your specific versions.
+
+1. For any [shared files referenced by a `#load` directive](#reusing-csx-code), create a new `.cs` file for each of these shared references. It's simplest to create a new `.cs` file for each shared class definition. If there are static methods without a class, you need to define new classes for these methods.
+
+1. Perform the following tasks for each `<FUNCTION_NAME>` folder in your original project:
+
+ 1. Create a new file named `<FUNCTION_NAME>.cs`, replacing `<FUNCTION_NAME>` with the name of the folder that defined your C# script function. You can create a new function code file from one of the trigger-specific templates in the following way:
+ ### [Azure CLI](#tab/azure-cli)
+ Using the `func new --name <FUNCTION_NAME>` command and choosing the correct trigger template at the prompt.
+ ### [Visual Studio](#tab/vs)
+ Following [Add a function to your project](functions-develop-vs.md?tabs=isolated-process#add-a-function-to-your-project) in the Visual Studio guide.
+ ### [Visual Studio Code](#tab/vs-code)
+ Following [Add a function to your project](functions-develop-vs-code.md?tabs=isolated-process#add-a-function-to-your-project) in the Visual Studio Code guide.
+
+
+ 1. Copy the `using` statements from your `run.csx` file and add them to the new file. You do not need any `#r` directives.
+ 1. For any `#load` statement in your `run.csx` file, add a new `using` statement for the namespace you used for the shared code.
+ 1. In the new file, define a class for your function under the namespace you are using for the project.
+ 1. Create a new method named `RunHandler` or something similar. This new method serves as the new entry point for the function.
+ 1. Copy the static method that represents your function, along with any functions it calls, from `run.csx` into your new class as a second method. From the new method you created in the previous step, call into this static method. This indirection step is helpful for navigating any differences as you continue the upgrade. You can keep the original method exactly the same and simply control its inputs from the new context. You may need to create parameters on the new method which you then pass into the static method call. After you have confirmed that the migration has worked as intended, you can remove this extra level of indirection.
+ 1. For each binding in the `function.json` file, add the corresponding attribute to your new method. To quickly find binding examples, see [Manually add bindings based on examples](add-bindings-existing-function.md?tabs=csharp).
+ 1. Add any extension packages required by the bindings to your project, if you haven't already done so.
+
+1. Recreate any application settings required by your app in the `Values` collection of the [local.settings.json file](functions-develop-local.md#local-settings-file).
+
+1. Verify that your project runs locally:
+
+ ### [Azure CLI](#tab/azure-cli)
+ Use `func start` to run your app from the command line. For more information, see [Run functions locally](functions-run-local.md#start).
+ ### [Visual Studio](#tab/vs)
+ Follow the [Run functions locally](functions-develop-vs.md?tabs=isolated-process#run-functions-locally) section of the Visual Studio guide.
+ ### [Visual Studio Code](#tab/vs-code)
+ Follow the [Run functions locally](functions-develop-vs-code.md?tabs=csharp#run-functions-locally) section of the Visual Studio Code guide.
+
+
+
+1. Publish your project to a new function app in Azure:
+
+ ### [Azure CLI](#tab/azure-cli)
+ [Create your Azure resources](create-first-function-cli-csharp.md#create-supporting-azure-resources-for-your-function) and deploy the code project to Azure by using the `func azure functionapp publish <APP_NAME>` command. For more information, see [Deploy project files](functions-run-local.md#project-file-deployment).
+ ### [Visual Studio](#tab/vs)
+ Follow the [Publish to Azure](functions-develop-vs.md?tabs=isolated-process#publish-to-azure) section of the Visual Studio guide.
+ ### [Visual Studio Code](#tab/vs-code)
+ Follow the [Create Azure resources](functions-develop-vs-code.md?tabs=csharp#publish-to-azure) section of the Visual Studio Code guide.
+
+
+
+### Example function conversion
+
+This section shows an example of the migration for a single function.
+
+The original function in C# scripting has two files:
+- `HelloWorld/function.json`
+- `HelloWorld/run.csx`
+
+The contents of `HelloWorld/function.json` are:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "FUNCTION",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ }
+ ]
+}
+```
+
+The contents of `HelloWorld/run.csx` are:
+
+```csharp
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ string responseMessage = string.IsNullOrEmpty(name)
+ ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
+ : $"Hello, {name}. This HTTP triggered function executed successfully.";
+
+ return new OkObjectResult(responseMessage);
+}
+```
+
+After migrating to the isolated worker model with ASP.NET Core integration, these are replaced by a single `HelloWorld.cs`:
+
+```csharp
+using System.Net;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+using Microsoft.AspNetCore.Routing;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+namespace MyFunctionApp
+{
+ public class HelloWorld
+ {
+ private readonly ILogger _logger;
+
+ public HelloWorld(ILoggerFactory loggerFactory)
+ {
+ _logger = loggerFactory.CreateLogger<HelloWorld>();
+ }
+
+ [Function("HelloWorld")]
+ public async Task<IActionResult> RunHandler([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
+ {
+ return await Run(req, _logger);
+ }
+
+ // From run.csx
+ public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+ {
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ string responseMessage = string.IsNullOrEmpty(name)
+ ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
+ : $"Hello, {name}. This HTTP triggered function executed successfully.";
+
+ return new OkObjectResult(responseMessage);
+ }
+ }
+}
+```
+ ## Binding configuration and examples
+This section contains references and examples for defining triggers and bindings in C# script.
+ ### Blob trigger

The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
The following table explains the binding configuration properties for C# script
|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](./functions-bindings-storage-blob-trigger.md#connections).|
-The following example shows a blob trigger binding in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
+The following example shows a blob trigger definition in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
Here's the binding data in the *function.json* file:
The following table explains the binding configuration properties for C# script
|function.json property | Description|
|-|-|
-|**type** | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**type** | Must be set to `timerTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
|**name** | The name of the variable that represents the timer object in function code. |
|**schedule**| A [CRON expression](./functions-bindings-timer.md#ncrontab-expressions) or a [TimeSpan](./functions-bindings-timer.md#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
|**runOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **runOnStartup** should rarely if ever be set to `true`, especially in production. |
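
For reference, a minimal *function.json* sketch for a timer trigger that runs every five minutes (names are placeholders):

```json
{
    "bindings": [
        {
            "type": "timerTrigger",
            "direction": "in",
            "name": "myTimer",
            "schedule": "0 */5 * * * *"
        }
    ]
}
```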
The following table explains the trigger configuration properties that you set i
|**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. See [Connections](./functions-bindings-event-hubs-trigger.md#connections).|
-The following example shows an Event Hubs trigger binding in a *function.json* file and a C# script functionthat uses the binding. The function logs the message body of the Event Hubs trigger.
+The following example shows an Event Hubs trigger binding in a *function.json* file and a C# script function that uses the binding. The function logs the message body of the Event Hubs trigger.
The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions.
The following table explains the binding configuration properties that you set i
|function.json property | Description|
|-|-|
|**type** | Must be set to `serviceBusTrigger`. This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. |
+|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. |
|**name** | The name of the variable that represents the queue or topic message in function code. |
|**queueName**| Name of the queue to monitor. Set only if monitoring a queue, not for a topic. |
|**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.|
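
As an illustration, a minimal *function.json* sketch for a Service Bus queue trigger (names are placeholders):

```json
{
    "bindings": [
        {
            "type": "serviceBusTrigger",
            "direction": "in",
            "name": "myQueueItem",
            "queueName": "myqueue",
            "connection": "ServiceBusConnection"
        }
    ]
}
```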
The following table explains the binding configuration properties that you set i
|function.json property | Description|
|-|-|
-|**type** |Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.|
-|**direction** | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. |
+|**type** |Must be set to `serviceBus`. This property is set automatically when you create the trigger in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. |
|**name** | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. |
|**queueName**|Name of the queue. Set only if sending queue messages, not for a topic. |
|**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.|
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
When running on Windows, the Node.js version is set by the [`WEBSITE_NODE_DEFAUL
# [Linux](#tab/linux)
-When running on Windows, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI.
+When running on Linux, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI.
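+
+For example, a sketch using the Azure CLI (the `Node|18` runtime value is illustrative; use a version supported by your app):
+
+```azurecli
+az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --linux-fx-version "Node|18"
+```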
azure-functions Migrate Cosmos Db Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md
+
+ Title: Migrate Azure Cosmos DB extension for Azure Functions to version 4.x
+description: This article shows you how to upgrade your existing function apps using the Azure Cosmos DB extension version 3.x to be able to use version 4.x of the extension.
++ Last updated : 08/16/2023
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++
+# Migrate function apps from Azure Cosmos DB extension version 3.x to version 4.x
+
+This article highlights considerations for upgrading your existing Azure Functions applications that use the Azure Cosmos DB extension version 3.x to use the newer [extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4). Migrating from version 3.x to version 4.x of the Azure Cosmos DB extension introduces breaking changes for your application.
+
+> [!IMPORTANT]
+> On August 31, 2024, the Azure Cosmos DB extension version 3.x will be retired. The extension and all applications using the extension will continue to function, but Azure Cosmos DB will cease to provide further maintenance and support for this extension. We recommend migrating to the latest version 4.x of the extension.
+
+This article walks you through the process of migrating your function app to run on version 4.x of the Azure Cosmos DB extension. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
++
+## Update the extension version
+
+.NET Functions use bindings that are installed in the project as NuGet packages. Depending on your Functions process model, the NuGet package to update varies.
+
+|Functions process model |Azure Cosmos DB extension |Recommended version |
+||--|--|
+|[In-process model](./functions-dotnet-class-library.md)|[Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) |>= 4.3.0 |
+|[Isolated worker model](./dotnet-isolated-process-guide.md) |[Microsoft.Azure.Functions.Worker.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB)|>= 4.4.1 |
+
+Update your `.csproj` project file to use the latest extension version for your process model. The following `.csproj` file uses version 4 of the Azure Cosmos DB extension.
+
+### [In-process model](#tab/in-process)
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net7.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="4.3.0" />
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+</Project>
+```
+
+### [Isolated worker model](#tab/isolated-worker)
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>net7.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ <OutputType>Exe</OutputType>
+ </PropertyGroup>
+ <ItemGroup>
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.4.1" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Update="host.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ <None Update="local.settings.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ <CopyToPublishDirectory>Never</CopyToPublishDirectory>
+ </None>
+ </ItemGroup>
+</Project>
+```
++++
+## Update the extension bundle
+
+By default, [extension bundles](./functions-bindings-register.md#extension-bundles) are used by non-.NET function apps to install binding extensions. The Azure Cosmos DB version 4 extension is part of the Microsoft Azure Functions version 4 extension bundle.
+
+To update your application to use the latest extension bundle, update your `host.json`. The following `host.json` file uses version 4 of the Microsoft Azure Functions extension bundle.
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+++
+## Rename the binding attributes
+
+Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function.
+
+The following table only includes attributes that were renamed or were removed from the version 3 extension. For a full list of attributes available in the version 4 extension, visit the [attribute reference](./functions-bindings-cosmosdb-v2-trigger.md?tabs=extensionv4#attributes).
+
+|Version 3 attribute property |Version 4 attribute property |Version 4 attribute description |
+|--|--|--|
+|**ConnectionStringSetting** |**Connection** | The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account being monitored. For more information, see [Connections](./functions-bindings-cosmosdb-v2-trigger.md#connections).|
+|**CollectionName** |**ContainerName** | The name of the container being monitored. |
+|**LeaseConnectionStringSetting** |**LeaseConnection** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `Connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.|
+|**LeaseCollectionName** |**LeaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. |
+|**CreateLeaseCollectionIfNotExists** |**CreateLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Azure AD identities, if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your function won't be able to start.|
+|**LeasesCollectionThroughput** |**LeasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `CreateLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. |
+|**LeaseCollectionPrefix** |**LeaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. |
+|**UseMultipleWriteLocations** |*Removed* | This attribute is no longer needed as it's automatically detected. |
+|**UseDefaultJsonSerialization** |*Removed* | This attribute is no longer needed as you can fully customize the serialization using built in support in the [Azure Cosmos DB version 3 .NET SDK](../cosmos-db/nosql/migrate-dotnet-v3.md#customize-serialization). |
+|**CheckpointInterval**|*Removed* | This attribute has been removed in the version 4 extension. |
+|**CheckpointDocumentCount** |*Removed* | This attribute has been removed in the version 4 extension. |
++
+## Rename the binding attributes
+
+Update your binding configuration properties in the `function.json` file.
+
+The following table only includes attributes that changed or were removed from the version 3.x extension. For a full list of attributes available in the version 4 extension, visit the [attribute reference](./functions-bindings-cosmosdb-v2-trigger.md#attributes).
+
+|Version 3 attribute property |Version 4 attribute property |Version 4 attribute description |
+|--|--|--|
+|**connectionStringSetting** |**connection** | The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account being monitored. For more information, see [Connections](./functions-bindings-cosmosdb-v2-trigger.md#connections).|
+|**collectionName** |**containerName** | The name of the container being monitored. |
+|**leaseConnectionStringSetting** |**leaseConnection** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.|
+|**leaseCollectionName** |**leaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. |
+|**createLeaseCollectionIfNotExists** |**createLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Azure AD identities, if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your function won't be able to start.|
+|**leasesCollectionThroughput** |**leasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `createLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. |
+|**leaseCollectionPrefix** |**leaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. |
+|**useMultipleWriteLocations** |*Removed* | This attribute is no longer needed as it's automatically detected. |
+|**checkpointInterval**|*Removed* | This attribute has been removed in the version 4 extension. |
+|**checkpointDocumentCount** |*Removed* | This attribute has been removed in the version 4 extension. |
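+
+For illustration, a trigger definition in *function.json* that uses the renamed version 4 properties might look like the following sketch (values are placeholders):
+
+```json
+{
+  "type": "cosmosDBTrigger",
+  "direction": "in",
+  "name": "input",
+  "connection": "CosmosDBConnectionSetting",
+  "databaseName": "databaseName",
+  "containerName": "containerName",
+  "leaseContainerName": "leases",
+  "createLeaseContainerIfNotExists": true
+}
+```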
+++
+## Modify your function code
+
+The Azure Functions extension version 4 is built on top of the Azure Cosmos DB .NET SDK version 3, which removed support for the [`Document` class](../cosmos-db/nosql/migrate-dotnet-v3.md#major-name-changes-from-v2-sdk-to-v3-sdk). Instead of receiving a list of `Document` objects with each function invocation, which you must then deserialize into your own object type, you can now directly receive a list of objects of your own type.
+
+This example refers to a simple `ToDoItem` type.
+
+```cs
+namespace CosmosDBSamples
+{
+ // Customize the model with your own desired properties
+ public class ToDoItem
+ {
+ public string id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+Changes to the attribute names must be made directly in the code when defining your function.
+
+```cs
+using System.Collections.Generic;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Host;
+using Microsoft.Extensions.Logging;
+
+namespace CosmosDBSamples
+{
+ public static class CosmosTrigger
+ {
+ [FunctionName("CosmosTrigger")]
+ public static void Run([CosmosDBTrigger(
+ databaseName: "databaseName",
+ containerName: "containerName",
+ Connection = "CosmosDBConnectionSetting",
+ LeaseContainerName = "leases",
+ CreateLeaseContainerIfNotExists = true)]IReadOnlyList<ToDoItem> input, ILogger log)
+ {
+ if (input != null && input.Count > 0)
+ {
+ log.LogInformation("Documents modified " + input.Count);
+ log.LogInformation("First document Id " + input[0].id);
+ }
+ }
+ }
+}
+```
++
+## Modify your function code
+
+After you update your `host.json` to use the correct extension bundle version and modify your `function.json` to use the correct attribute names, there are no further code changes required.
++
+## Next steps
+
+- [Run a function when an Azure Cosmos DB document is created or modified (Trigger)](./functions-bindings-cosmosdb-v2-trigger.md)
+- [Read an Azure Cosmos DB document (Input binding)](./functions-bindings-cosmosdb-v2-input.md)
+- [Save changes to an Azure Cosmos DB document (Output binding)](./functions-bindings-cosmosdb-v2-output.md)
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
To upgrade the application, you will:
## Upgrade your local project
-The section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions which match your desired version.
+This section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions that match your desired version. These steps assume a local C# project; if your app instead uses C# script (`.csx` files), you should [convert to the project model](./functions-reference-csharp.md#convert-a-c-script-app-to-a-c-project) before continuing.
> [!TIP]
-> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+> If you are moving to an LTS or STS version of .NET, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
### .csproj file
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
On version 1.x of the Functions runtime, your C# function app targets .NET Frame
> [!TIP]
> **Unless your app depends on a library or API only available to .NET Framework, we recommend upgrading to .NET 6 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available to more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should upgrade.
>
-> Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
+> Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
::: zone-end
Migrating a C# function app from version 1.x to version 4.x of the Functions run
Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).

> [!TIP]
-> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+> If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
### .csproj file
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
On version 3.x of the Functions runtime, your C# function app targets .NET Core
> [!TIP]
> **If you're migrating from .NET 5 (on the isolated worker model), we recommend upgrading to .NET 6 on the isolated worker model.** This provides a quick upgrade path with the longest support window from .NET.
>
-> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick upgrade path. However, you might also consider upgrading to .NET 6 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
+> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick upgrade path. However, you might also consider upgrading to .NET 6 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
::: zone-end
Upgrading instructions are language dependent. If you don't see your language, c
Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process).

> [!TIP]
-> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+> If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
### .csproj file
azure-functions Run Functions From Deployment Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md
This section provides information about how to run your function app from a pack
+ When running a function app on Windows, the app setting `WEBSITE_RUN_FROM_PACKAGE = <URL>` gives worse cold-start performance and isn't recommended.
+ When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package.
+ The Functions runtime must have permissions to access the package URL.
-+ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
++ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../storage/common/storage-sas-overview.md) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.++ You must maintain any SAS URLs used for deployment. When a SAS expires, the package can no longer be deployed. In this case, you must generate a new SAS and update the setting in your function app. You can eliminate this management burden by [using a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity). + When running on a Premium plan, make sure to [eliminate cold starts](functions-premium-plan.md#eliminate-cold-starts). + When running on a Dedicated plan, make sure you've enabled [Always On](dedicated-plan.md#always-on). + You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to blob containers in your storage account.
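To make the URL-based app setting above concrete, here's a hedged Azure CLI sketch; the app name, resource group, and package URL are placeholders, and the SAS-secured URL follows the guidance in this list:

```azurecli
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --settings WEBSITE_RUN_FROM_PACKAGE="<private-blob-url-with-sas>"
```

After pointing the setting at a new package, sync triggers as described above.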
azure-maps Creator Onboarding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-onboarding-tool.md
The following steps demonstrate how to create an indoor map in your Azure Maps a
:::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-upload.png" alt-text="Screenshot showing the package upload screen of the Azure Maps Creator onboarding tool.":::
-<!--
- > [!NOTE]
- > If the manifest included in the drawing package is incomplete or contains errors, the onboarding tool will not go directly to the **Review + Create** tab, but instead goes to the tab where you are best able to address the issue.
>- 1. Once the package is uploaded, the onboarding tool uses the [Conversion service] to validate the data then convert the geometry and data from the drawing package into a digital indoor map. For more information about the conversion process, see [Convert a drawing package] in the Creator concepts article. :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-conversion.png" alt-text="Screenshot showing the package conversion screen of the Azure Maps Creator onboarding tool, including the Conversion ID value.":::
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md
The ability to geocode in a country/region is dependent upon the road data cover
| Burkina Faso | | | ✓ | ✓ | ✓ | | Burundi | | | ✓ | ✓ | ✓ | | Cameroon | | | ✓ | ✓ | ✓ |
-| Cape Verde | | | ✓ | ✓ | ✓ |
+| Cabo Verde | | | ✓ | ✓ | ✓ |
| Central African Republic | | | ✓ | ✓ | ✓ | | Chad | | | | ✓ | ✓ | | Congo | | | | ✓ | ✓ |
The ability to geocode in a country/region is dependent upon the road data cover
| Qatar | ✓ | | ✓ | ✓ | ✓ | | Réunion | ✓ | ✓ | ✓ | ✓ | ✓ | | Rwanda | | | ✓ | ✓ | ✓ |
-| Saint Helena | | | | ✓ | ✓ |
+| Saint Helena, Ascension, and Tristan da Cunha | | | | ✓ | ✓ |
| São Tomé & Príncipe | | | ✓ | ✓ | ✓ | | Saudi Arabia | ✓ | | ✓ | ✓ | ✓ | | Senegal | | | ✓ | ✓ | ✓ |
azure-maps How To Secure Spa Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md
Create the web application in Azure AD for users to sign in. The web application
6. Copy the Azure AD app ID and the Azure AD tenant ID from the app registration to use in the Web SDK. Add the Azure AD app registration details and the `x-ms-client-id` from the Azure Map account to the Web SDK. ```javascript
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js" />
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js" />
<script> var map = new atlas.Map("map", { center: [-122.33, 47.64],
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Your file should now look similar to the following HTML:
<meta name="viewport" content="width=device-width, user-scalable=no" /> <title>Indoor Maps App</title>
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.css" type="text/css"/>
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.js"></script> <style>
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
The Azure Maps Web SDK provides a [Map Control] that enables the customization o
This article uses the Azure Maps Web SDK; however, the Azure Maps services work with any map control. For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects].
+> [!IMPORTANT]
+> If you have existing applications that use version 2 of the [Map Control], we recommend moving to version 3. Version 3 is backward compatible and has several benefits, including [WebGL 2 Compatibility], increased performance, and support for [3D terrain tiles].
+ ## Prerequisites To use the Map Control in a web page, you must have one of the following prerequisites:
You can embed a map in a web page by using the Map Control client-side JavaScrip
* Use the globally hosted CDN version of the Azure Maps Web SDK by adding references to the JavaScript and `stylesheet` in the `<head>` element of the HTML file: ```html
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
``` * Load the Azure Maps Web SDK source code locally using the [azure-maps-control] npm package and host it with your app. This package also includes TypeScript definitions.
You can embed a map in a web page by using the Map Control client-side JavaScrip
Then add references to the Azure Maps `stylesheet` to the `<head>` element of the file: ```html
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
``` > [!NOTE]
You can embed a map in a web page by using the Map Control client-side JavaScrip
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type="text/javascript">
Here's an example of Azure Maps with the language set to "fr-FR" and the regiona
For a list of supported languages and regional views, see [Localization support in Azure Maps].
+## WebGL 2 Compatibility
+
+Beginning with Azure Maps Web SDK 3.0, the Web SDK includes full compatibility with [WebGL 2], a powerful graphics technology that enables hardware-accelerated rendering in modern web browsers. By using WebGL 2, developers can harness the capabilities of modern GPUs to render complex maps and visualizations more efficiently, resulting in improved performance and visual quality.
+
+![Map image showing WebGL 2 Compatibility.](./media/how-to-use-map-control/webgl-2-compatability.png)
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, user-scalable=no" />
+ <title>WebGL2 - Azure Maps Web SDK Samples</title>
+    <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" rel="stylesheet"/>
+    <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
+ <script src="https://unpkg.com/deck.gl@latest/dist.min.js"></script>
+ <style>
+ html,
+ body {
+ width: 100%;
+ height: 100%;
+ padding: 0;
+ margin: 0;
+ }
+ #map {
+ width: 100%;
+ height: 100%;
+ }
+ </style>
+ </head>
+ <body>
+ <div id="map"></div>
+ <script>
+ var map = new atlas.Map("map", {
+ center: [-122.44, 37.75],
+ bearing: 36,
+ pitch: 45,
+ zoom: 12,
+ style: "grayscale_light",
+ // Get an Azure Maps key at https://azuremaps.com/.
+ authOptions: {
+ authType: "subscriptionKey",
+ subscriptionKey: " <Your Azure Maps Key> "
+ }
+ });
+
+ // Wait until the map resources are ready.
+ map.events.add("ready", (event) => {
+ // Create a custom layer to render data points using deck.gl
+ map.layers.add(
+ new DeckGLLayer({
+ id: "grid-layer",
+ data: "https://raw.githubusercontent.com/visgl/deck.gl-data/master/website/sf-bike-parking.json",
+ cellSize: 200,
+ extruded: true,
+ elevationScale: 4,
+ getPosition: (d) => d.COORDINATES,
+ // GPUGridLayer leverages WebGL2 to perform aggregation on the GPU.
+ // For more details, see https://deck.gl/docs/api-reference/aggregation-layers/gpu-grid-layer
+ type: deck.GPUGridLayer
+ })
+ );
+ });
+
+ // A custom implementation of WebGLLayer
+ class DeckGLLayer extends atlas.layer.WebGLLayer {
+ constructor(options) {
+ super(options.id);
+ // Create an instance of deck.gl MapboxLayer which is compatible with Azure Maps
+ // https://deck.gl/docs/api-reference/mapbox/mapbox-layer
+ this._mbLayer = new deck.MapboxLayer(options);
+
+ // Create a renderer
+ const renderer = {
+ renderingMode: "3d",
+ onAdd: (map, gl) => {
+ this._mbLayer.onAdd?.(map["map"], gl);
+ },
+ onRemove: (map, gl) => {
+ this._mbLayer.onRemove?.(map["map"], gl);
+ },
+ prerender: (gl, matrix) => {
+ this._mbLayer.prerender?.(gl, matrix);
+ },
+ render: (gl, matrix) => {
+ this._mbLayer.render(gl, matrix);
+ }
+ };
+ this.setOptions({ renderer });
+ }
+ }
+ </script>
+ </body>
+</html>
+```
+
+## 3D terrain tiles
+
+Beginning with Azure Maps Web SDK 3.0, developers can take advantage of 3D terrain visualizations. This feature allows you to incorporate elevation data into your maps, creating a more immersive experience for your users. Whether it's visualizing mountain ranges, valleys, or other geographical features, the 3D terrain support brings a new level of realism to your mapping applications.
+
+The following code example demonstrates how to implement 3D terrain tiles.
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, user-scalable=no" />
+ <title>Elevation - Azure Maps Web SDK Samples</title>
+    <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" rel="stylesheet" />
+    <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
+ <style>
+ html,
+ body {
+ width: 100%;
+ height: 100%;
+ padding: 0;
+ margin: 0;
+ }
+ #map {
+ width: 100%;
+ height: 100%;
+ }
+ </style>
+ </head>
+
+ <body>
+ <div id="map"></div>
+ <script>
+ var map = new atlas.Map("map", {
+ center: [-121.7269, 46.8799],
+ maxPitch: 85,
+ pitch: 60,
+ zoom: 12,
+ style: "road_shaded_relief",
+ // Get an Azure Maps key at https://azuremaps.com/.
+ authOptions: {
+ authType: "subscriptionKey",
+ subscriptionKey: "<Your Azure Maps Key>"
+ }
+ });
+
+ // Create a tile source for elevation data. For more information on creating
+ // elevation data & services using open data, see https://aka.ms/elevation
+ var elevationSource = new atlas.source.ElevationTileSource("elevation", {
+ url: "<tileSourceUrl>"
+ });
+
+ // Wait until the map resources are ready.
+ map.events.add("ready", (event) => {
+
+ // Add the elevation source to the map.
+ map.sources.add(elevationSource);
+
+ // Enable elevation on the map.
+ map.enableElevation(elevationSource);
+ });
+ </script>
+ </body>
+</html>
+```
+ ## Azure Government cloud support The Azure Maps Web SDK supports the Azure Government cloud. All JavaScript and CSS URLs used to access the Azure Maps Web SDK remain the same. The following tasks need to be done to connect to the Azure Government cloud version of the Azure Maps platform.
For a list of samples showing how to integrate Azure AD with Azure Maps, see:
> [!div class="nextstepaction"] > [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
+[3D terrain tiles]: #3d-terrain-tiles
[authentication options]: /javascript/api/azure-maps-control/atlas.authenticationoptions [Authentication with Azure Maps]: azure-maps-authentication.md [Azure Maps & Azure Active Directory Samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples
For a list of samples showing how to integrate Azure AD with Azure Maps, see:
[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components [azure-maps-control]: https://www.npmjs.com/package/azure-maps-control [Localization support in Azure Maps]: supported-languages.md
+[Map Control]: https://www.npmjs.com/package/azure-maps-control
[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
-[Map Control]: https://www.npmjs.com/package/azure-maps-control
+[WebGL 2 Compatibility]: #webgl-2-compatibility
+[WebGL 2]: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API#webgl_2
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
You can load the Azure Maps spatial IO module using one of the two options:
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script>
<script type='text/javascript'>
You can load the Azure Maps spatial IO module using one of the two options:
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script>
<!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script>
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
In the following code, the first code block creates a map and sets the enter and
<head> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type="text/javascript">
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
The following code shows how to load a map with the same view in Azure Maps alon
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
When using a Symbol layer, the data must be added to a data source, and the data
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
Symbol layers in Azure Maps support custom images as well, but the image needs t
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
GeoJSON data can be directly imported in Azure Maps using the `importDataFromUrl
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
In Azure Maps, load the GeoJSON data into a data source and connect the data sou
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
In Azure Maps, georeferenced images can be overlaid using the `atlas.layer.Image
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script>
In Azure Maps, the drawing tools module needs to be loaded by loading the JavaSc
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add references to the Azure Maps Map Drawing Tools JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/drawing/0/atlas-drawing.min.css" type="text/css" />
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
Learn the details of how to migrate your Bing Maps application with these articl
[azure.com]: https://azure.com [Basic snap to road logic]: https://samples.azuremaps.com/?search=Snap%20to%20road&sample=basic-snap-to-road-logic [Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+[free account]: https://azure.microsoft.com/free/
[free Azure account]: https://azure.microsoft.com/free/ [manage authentication in Azure Maps]: how-to-manage-authentication.md [Microsoft Azure terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Load a map with the same view in Azure Maps along with a map style control and z
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
For a Symbol layer, add the data to a data source. Attach the data source to the
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
Symbol layers in Azure Maps support custom images as well. First, load the image
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
GeoJSON is the base data type in Azure Maps. Import it into a data source using
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
Load the GeoJSON data into a data source and connect the data source to a heat m
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script>
azure-maps Power Bi Visual Add Reference Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md
The following are all settings in the **Format** pane that are available in the
|-|| | Reference layer data | The data GeoJSON file to upload to the visual as another layer within the map. The **+ Add local file** button opens a file dialog the user can use to select a GeoJSON file that has a `.json` or `.geojson` file extension. |
-> [!NOTE]
-> In this preview of the Azure Maps Power BI visual, the reference layer will only load the first 5,000 shape features to the map. This limit will be increased in a future update.
## Styling data in a reference layer
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the Map Control.
-## v3 (preview)
+## v3 (latest)
+
+### [3.0.0] (August 18, 2023)
+
+#### Bug fixes (3.0.0)
+
+- Fixed the zoom control to take the `maxBounds` [CameraOptions] into account.
+
+- Fixed an issue where mouse positions were shifted after a CSS scale transform on the map container.
+
+#### Other changes (3.0.0)
+
+- Phased out the style definition version `2022-08-05` and switched the default `styleDefinitionsVersion` to `2023-01-01`.
+
+- Added the `mvc` parameter to include the map control version in both definition and style requests.
+
+#### Installation (3.0.0)
+
+The version is available on [npm][3.0.0] and CDN.
+
+- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0][3.0.0]
+
+- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file:
+
+ ```html
+ <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.css" rel="stylesheet" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.js"></script>
+ ```
### [3.0.0-preview.10] (July 11, 2023)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
}) ```
-## v2 (latest)
+## v2
### [2.3.2] (August 11, 2023)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.0]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0
[3.0.0-preview.10]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.10 [3.0.0-preview.9]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.9 [3.0.0-preview.8]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.8
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md
The render coverage tables below list the countries/regions that support Azure M
| Liechtenstein | ✓ | | Lithuania | ✓ | | Luxembourg | ✓ |
-| Macedonia | ✓ |
| Malta | ✓ | | Moldova | ✓ | | Monaco | ✓ | | Montenegro | ✓ | | Netherlands | ✓ |
+| North Macedonia | ✓ |
| Norway | ✓ | | Poland | ✓ | | Portugal | ✓ |
azure-maps Routing Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md
The following tables provide coverage information for Azure Maps routing.
| Burkina Faso | ✓ | | | | Burundi | ✓ | | | | Cameroon | ✓ | | |
-| Cape Verde | ✓ | | |
+| Cabo Verde | ✓ | | |
| Central African Republic | ✓ | | | | Chad | ✓ | | | | Congo | ✓ | | |
The following tables provide coverage information for Azure Maps routing.
| Somalia | ✓ | | | | South Africa | ✓ | ✓ | ✓ | | South Sudan | ✓ | | |
-| St. Helena | ✓ | | |
+| St. Helena, Ascension, and Tristan da Cunha | ✓ | | |
| Sudan | ✓ | | | | Swaziland | ✓ | | | | Syria | ✓ | | |
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md
Azure Maps has been localized in a variety of languages across its services. The fol
| de-DE | German | ✓ | ✓ | ✓ | ✓ | ✓ | | el-GR | Greek | ✓ | ✓ | ✓ | ✓ | ✓ | | en-AU | English (Australia) | ✓ | ✓ | | | ✓ |
-| en-GB | English (Great Britain) | ✓ | ✓ | ✓ | ✓ | ✓ |
| en-NZ | English (New Zealand) | ✓ | ✓ | | ✓ | ✓ |
+| en-GB | English (United Kingdom) | ✓ | ✓ | ✓ | ✓ | ✓ |
| en-US | English (USA) | ✓ | ✓ | ✓ | ✓ | ✓ | | es-419 | Spanish (Latin America) | | ✓ | | | ✓ | | es-ES | Spanish (Spain) | ✓ | ✓ | ✓ | ✓ | ✓ |
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
To create the HTML:
```HTML <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
``` 3. Next, add a reference to the Azure Maps Services module. This module is a JavaScript library that wraps the Azure Maps REST services, making them easy to use in JavaScript. The Services module is useful for powering search functionality.
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
The following steps show you how to create and display the Map control in a web
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
The following steps show you how to create and display the Map control in a web
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The Map Control API is a convenient client library. This API allows you to easil
<meta charset="utf-8" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md
Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned
| Burkina Faso | ✓ | ✓ | | ✓ | | Burundi | ✓ | ✓ | | ✓ | | Cameroon | ✓ | ✓ | | ✓ |
-| Cape Verde | ✓ | ✓ | | ✓ |
+| Cabo Verde | ✓ | ✓ | | ✓ |
| Central African Republic | ✓ | ✓ | | ✓ | | Chad | ✓ | ✓ | | ✓ | | Comoros | ✓ | ✓ | | ✓ |
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
If self-hosting the Azure Maps Web SDK via the npm module, be sure to use the ca
```json "dependencies": {
- "azure-maps-control": "^2.2.6"
+ "azure-maps-control": "^3.0.0"
} ```
azure-maps Web Sdk Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-migration-guide.md
+
+ Title: The Azure Maps Web SDK v1 migration guide
+
+description: Find out how to migrate your Azure Maps Web SDK v1 applications to the most recent version of the Web SDK.
++ Last updated : 08/18/2023++++
+# The Azure Maps Web SDK v1 migration guide
+
+Thank you for choosing the Azure Maps Web SDK for your mapping needs. This migration guide helps you transition from version 1 to version 3, allowing you to take advantage of the latest features and enhancements.
+
+## Understand the changes
+
+Before you start the migration process, it's important to familiarize yourself with the key changes and improvements introduced in Web SDK v3. Review the [release notes] to grasp the scope of the new features.
+
+## Update the Web SDK version
+
+### CDN
+
+If you're using CDN ([content delivery network]), update the references to the stylesheet and JavaScript within the `head` element of your HTML files.
+
+#### v1
+
+```html
+<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/css/atlas.min.css?api-version=1" type="text/css" />
+<script src="https://atlas.microsoft.com/sdk/js/atlas.min.js?api-version=1"></script>
+```
+
+#### v3
+
+```html
+<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+<script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
+```
+
+### npm
+
+If you're using [npm], update to the latest Azure Maps control by running the following command:
+
+```shell
+npm install azure-maps-control@latest
+```
+
+## Review authentication methods (optional)
+
+To enhance security, more authentication methods are included in the Web SDK starting in version 2. The new methods include [Azure Active Directory Authentication] and [Shared Key Authentication]. For more information about Azure Maps web application security, see [Manage Authentication in Azure Maps].
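As a hedged illustration of the Azure AD option, a map initialization similar to the following can be used; the option names come from the `azure-maps-control` `authOptions`, and all IDs are placeholders:

```javascript
var map = new atlas.Map("map", {
    center: [-122.33, 47.64],
    zoom: 12,
    authOptions: {
        authType: "aad",
        // The Azure Maps account client ID (sent as x-ms-client-id).
        clientId: "<Your Azure Maps Client ID>",
        // The Azure AD app registration and tenant used for sign-in.
        aadAppId: "<Your Azure AD App ID>",
        aadTenant: "<Your Azure AD Tenant ID>"
    }
});
```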
+
+## Testing
+
+Comprehensive testing is essential during migration. Conduct thorough testing of your application's functionality, performance, and user experience in different browsers and devices.
+
+## Gradual rollout
+
+Consider a gradual rollout strategy for the updated version. Release the migrated version to a smaller group of users or in a controlled environment before making it available to your entire user base.
+
+By following these steps and considering best practices, you can successfully migrate your application from Azure Maps Web SDK v1 to v3. Embrace the new capabilities and improvements offered by the latest version while ensuring a smooth and seamless transition for your users. For more information, see [Azure Maps Web SDK best practices].
+
+## Next steps
+
+Learn more about the Azure Maps map control:
+
+> [!div class="nextstepaction"]
+> [Use the Azure Maps map control]
+
+[Azure Active Directory Authentication]: how-to-secure-spa-users.md
+[Azure Maps Web SDK best practices]: web-sdk-best-practices.md
+[content delivery network]: /azure/cdn/cdn-overview
+[Manage Authentication in Azure Maps]: how-to-manage-authentication.md
+[npm]: https://www.npmjs.com/package/azure-maps-control
+[release notes]: release-notes-map-control.md
+[Shared Key Authentication]: how-to-secure-sas-app.md
+[Use the Azure Maps map control]: how-to-use-map-control.md
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
description: Find out how to create and manage action groups. Learn about notifi
Last updated 05/02/2023 -+ # Action groups
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Title: Review TrackAvailability() test results description: This article explains how to review data logged by TrackAvailability() tests Previously updated : 06/23/2023 Last updated : 08/20/2023 # Review TrackAvailability() test results
-This article explains how to review [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) test results in the Azure portal and query the data using [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor).
+This article explains how to review [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) test results in the Azure portal and query the data using [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor). [Standard tests](availability-standard-tests.md) **should always be used if possible** as they require little investment, no maintenance, and have few prerequisites.
## Prerequisites > [!div class="checklist"] > - [Workspace-based Application Insights resource](create-workspace-resource.md)
-> - Access to the source code of a [function app](../../azure-functions/functions-how-to-use-azure-function-app-settings.md) in Azure Functions.
-> - Developer expertise capable of authoring custom code for [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability), tailored to your specific business needs
+> - Access to the source code of a [function app](../../azure-functions/functions-how-to-use-azure-function-app-settings.md) in Azure Functions
+> - Developer expertise capable of authoring [custom code](#basic-code-sample) for [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability), tailored to your specific business needs
-> [!NOTE]
-> - TrackAvailability() requires that you have made a developer investment in custom code.
-> - [Standard tests](availability-standard-tests.md) should always be used if possible as they require little investment and have few prerequisites.
+> [!IMPORTANT]
+> [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) requires a developer investment in writing and maintaining potentially complex custom code.
## Check availability
You can use Log Analytics to view your availability results, dependencies, and m
:::image type="content" source="media/availability-azure-functions/dependencies.png" alt-text="Screenshot that shows the New Query tab with dependencies limited to 50." lightbox="media/availability-azure-functions/dependencies.png":::
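As a hedged sketch of such a query, the following summarizes availability over the last day; it assumes the classic Application Insights `availabilityResults` schema, and the time window and grouping are arbitrary choices:

```kusto
availabilityResults
| where timestamp > ago(1d)
| summarize availabilityPercentage = avg(todouble(success)) * 100 by name, location
```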
+## Basic code sample
+
+The following example demonstrates a web availability test that performs a simple URL ping using the `GetStringAsync()` method.
+
+```csharp
+using System.Net.Http;
+using System.Threading.Tasks;
+using Microsoft.Extensions.Logging;
+
+public static async Task RunAvailabilityTestAsync(ILogger log)
+{
+ using (var httpClient = new HttpClient())
+ {
+ // TODO: Replace with your business logic
+ await httpClient.GetStringAsync("https://www.bing.com/");
+ }
+}
+```
+
+For advanced scenarios where the business logic must be adjusted to access the URL, such as obtaining tokens, setting parameters, and other test cases, custom code is necessary.
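As a hedged sketch of one such advanced scenario, the following wraps custom business logic in an `AvailabilityTelemetry` item and submits it with `TrackAvailability()`; the test name, run location, and the injected `TelemetryClient` are assumptions, not part of the original sample:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static async Task RunAdvancedAvailabilityTestAsync(TelemetryClient telemetryClient)
{
    // Hypothetical test name and run location; replace with your own values.
    var availability = new AvailabilityTelemetry
    {
        Name = "my-advanced-availability-test",
        RunLocation = "West US",
        Success = false
    };

    var stopwatch = Stopwatch.StartNew();
    try
    {
        // TODO: Replace with your business logic, for example acquiring a token
        // and calling the endpoint with custom parameters.
        await Task.Delay(100); // Placeholder for the real test.
        availability.Success = true;
    }
    catch (Exception ex)
    {
        availability.Message = ex.Message;
        throw;
    }
    finally
    {
        stopwatch.Stop();
        availability.Duration = stopwatch.Elapsed;
        availability.Timestamp = DateTimeOffset.UtcNow;
        telemetryClient.TrackAvailability(availability);
        // Flush in case the host recycles before telemetry is sent.
        telemetryClient.Flush();
    }
}
```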
+ ## Next steps * [Standard tests](availability-standard-tests.md)
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
# What is autoinstrumentation for Azure Monitor Application Insights?
-Autoinstrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md).
+Autoinstrumentation enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). It provides easy access to experiences such as the [application dashboard](overview-dashboard.md) and [application map](app-map.md).
+
+If your language and platform are supported, select the corresponding link in the [Supported environments, languages, and resource providers table](#supported-environments-languages-and-resource-providers) for more detailed information. In many cases, autoinstrumentation is enabled by default.
+
+## What are the autoinstrumentation advantages?
> [!div class="checklist"]
-> - No code changes are required.
-> - [SDK update](sdk-support-guidance.md) overhead is eliminated.
-> - Recommended when available.
+> - Code changes aren't required.
+> - Access to source code isn't required.
+> - Configuration changes aren't required.
+> - Ongoing [SDK update maintenance](sdk-support-guidance.md) is eliminated.
## Supported environments, languages, and resource providers
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
You can collect more data automatically when you include instrumentation librari
### [ASP.NET Core](#tab/aspnetcore)
-To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTraceProvider` methods.
+To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTracerProvider` methods.
The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics.
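Because the example itself is truncated in this digest, here's a minimal sketch, assuming ASP.NET Core minimal hosting plus the `Azure.Monitor.OpenTelemetry.AspNetCore` and `OpenTelemetry.Instrumentation.Runtime` NuGet packages:

```csharp
using Azure.Monitor.OpenTelemetry.AspNetCore;
using OpenTelemetry.Metrics;

var builder = WebApplication.CreateBuilder(args);

// Enable Azure Monitor OpenTelemetry.
builder.Services.AddOpenTelemetry().UseAzureMonitor();

// Register the community Runtime Instrumentation library to collect extra metrics.
builder.Services.ConfigureOpenTelemetryMeterProvider((sp, b) => b.AddRuntimeInstrumentation());

var app = builder.Build();
app.Run();
```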
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 me
```csharp var builder = WebApplication.CreateBuilder(args);
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
-builder.Services.Configure<ApplicationInsightsSamplerOptions>(options => { options.SamplingRatio = 0.1F; });
+builder.Services.AddOpenTelemetry().UseAzureMonitor(o =>
+{
+ o.SamplingRatio = 0.1F;
+});
var app = builder.Build();
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
Title: Get started with autoscale in Azure description: "Learn how to scale your resource web app, cloud service, virtual machine, or Virtual Machine Scale Set in Azure."-++ Last updated 04/10/2023
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
The following types of data collected from a Kubernetes cluster with Container i
- Container environment variables from every monitored container in the cluster - Completed Kubernetes jobs/pods in the cluster that don't require monitoring - Active scraping of Prometheus metrics-- [Diagnostic log collection](../../aks/monitor-aks.md#resource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
+- [Resource log collection](../../aks/monitor-aks.md#resource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
## Estimating costs to monitor your AKS cluster
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
+
+ Title: Configure hybrid Kubernetes clusters with Container insights | Microsoft Docs
+description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environments.
+ Last updated : 08/21/2023+++
+# Configure hybrid Kubernetes clusters with Container insights
+
+Container insights provides a rich monitoring experience for the Azure Kubernetes Service (AKS) and [AKS Engine on Azure](https://github.com/Azure/aks-engine), which is a self-managed Kubernetes cluster hosted on Azure. This article describes how to enable monitoring of Kubernetes clusters hosted outside of Azure and achieve a similar monitoring experience.
+
+## Supported configurations
+
+The following configurations are officially supported with Container insights. If you're using a different Kubernetes version or operating system, open a support ticket.
+
+- Environments:
+ - Kubernetes on-premises.
+ - AKS Engine on Azure and Azure Stack. For more information, see [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview).
+  - [OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/index.html) version 4 and higher, on-premises or in other cloud environments.
+- Versions of Kubernetes and support policy are the same as versions of [AKS supported](../../aks/supported-kubernetes-versions.md).
+- The following container runtimes are supported: Moby and CRI-compatible runtimes such as CRI-O and containerd.
+- Supported Linux OS releases for main and worker nodes are Ubuntu (18.04 LTS and 16.04 LTS) and Red Hat Enterprise Linux CoreOS 43.81.
+- Supported access control: Kubernetes role-based access control (RBAC) and non-RBAC.
+
+## Prerequisites
+
+Before you start, make sure that you meet the following prerequisites:
+
+- You have a [Log Analytics workspace](../logs/design-logs-deployment.md). Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md).
+
+ >[!NOTE]
+ >Enabling the monitoring of multiple clusters with the same cluster name to the same Log Analytics workspace isn't supported. Cluster names must be unique.
+ >
+
+- You're a member of the Log Analytics contributor role to enable container monitoring. For more information about how to control access to a Log Analytics workspace, see [Manage access to workspace and log data](../logs/manage-access.md).
+- To view the monitoring data, you must have the [Log Analytics reader](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights.
+- You have a [Helm client](https://helm.sh/docs/using_helm/) to onboard the Container insights chart for the specified Kubernetes cluster.
+- The following proxy and firewall configuration information is required for the containerized version of the Log Analytics agent for Linux to communicate with Azure Monitor:
+
+ |Agent resource|Ports |
+ |||
+ |*.ods.opinsights.azure.com |Port 443 |
+ |*.oms.opinsights.azure.com |Port 443 |
+ |*.dc.services.visualstudio.com |Port 443 |
+
+- The containerized agent requires the Kubelet `cAdvisor secure port: 10250` or `unsecure port: 10255` to be opened on all nodes in the cluster to collect performance metrics. We recommend that you configure `secure port: 10250` on the Kubelet cAdvisor if it isn't configured already.
+- The containerized agent requires the following environmental variables to be specified on the container to communicate with the Kubernetes API service within the cluster to collect inventory data: `KUBERNETES_SERVICE_HOST` and `KUBERNETES_PORT_443_TCP_PORT`.
+
+>[!IMPORTANT]
+>The minimum agent version supported for monitoring hybrid Kubernetes clusters is *ciprod10182019* or later.
+
+## Enable monitoring
+
+To enable Container insights for the hybrid Kubernetes cluster:
+
+1. Configure your Log Analytics workspace with the Container insights solution.
+
+1. Enable the Container insights Helm chart with a Log Analytics workspace.
+
+For more information on monitoring solutions in Azure Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions).
+
+### Add the Azure Monitor Containers solution
+
+You can deploy the solution with the provided Azure Resource Manager template by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with the Azure CLI.
+
+If you're unfamiliar with the concept of deploying resources by using a template, see:
+
+- [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md)
+- [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md)
+
+If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.59 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+This method includes two JSON templates. One template specifies the configuration to enable monitoring. The other template contains parameter values that you configure to specify:
+
+- `workspaceResourceId`: The full resource ID of your Log Analytics workspace.
+- `workspaceRegion`: The region the workspace is created in, which is also referred to as **Location** in the workspace properties when you view them from the Azure portal.
+
+To first identify the full resource ID of your Log Analytics workspace that's required for the `workspaceResourceId` parameter value in the *containerSolutionParams.json* file, perform the following steps. Then run the PowerShell cmdlet or Azure CLI command to add the solution.
+
+1. List all the subscriptions to which you have access by using the following command:
+
+ ```azurecli
+ az account list --all -o table
+ ```
+
+ The output will resemble the following example:
+
+ ```azurecli
+    Name             CloudName    SubscriptionId                        State    IsDefault
+    ---------------  -----------  ------------------------------------  -------  ---------
+    Microsoft Azure  AzureCloud   0fb60ef2-03cc-4290-b595-e71108e8f4ce  Enabled  True
+ ```
+
+ Copy the value for **SubscriptionId**.
+
+1. Switch to the subscription hosting the Log Analytics workspace by using the following command:
+
+ ```azurecli
+ az account set -s <subscriptionId of the workspace>
+ ```
+
+1. The following example displays the list of workspaces in your subscriptions in the default JSON format:
+
+ ```azurecli
+ az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json
+ ```
+
+    In the output, find the workspace name, and then copy the full resource ID of that Log Analytics workspace under the field **id**.
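+
+    Alternatively, you can return just the resource ID with a JMESPath query. This is a sketch that assumes a placeholder workspace name:
+
+    ```azurecli
+    az resource list --resource-type Microsoft.OperationalInsights/workspaces --query "[?name=='<workspaceName>'].id" -o tsv
+    ```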
+
+1. Copy and paste the following JSON syntax into a new file:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Azure Monitor Log Analytics Workspace Resource ID"
+ }
+ },
+ "workspaceRegion": {
+ "type": "string",
+ "metadata": {
+ "description": "Azure Monitor Log Analytics Workspace region"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/deployments",
+ "name": "[Concat('ContainerInsights', '-', uniqueString(parameters('workspaceResourceId')))]",
+ "apiVersion": "2017-05-10",
+ "subscriptionId": "[split(parameters('workspaceResourceId'),'/')[2]]",
+ "resourceGroup": "[split(parameters('workspaceResourceId'),'/')[4]]",
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {},
+ "resources": [
+ {
+ "apiVersion": "2015-11-01-preview",
+ "type": "Microsoft.OperationsManagement/solutions",
+ "location": "[parameters('workspaceRegion')]",
+ "name": "[Concat('ContainerInsights', '(', split(parameters('workspaceResourceId'),'/')[8], ')')]",
+ "properties": {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]"
+ },
+ "plan": {
+ "name": "[Concat('ContainerInsights', '(', split(parameters('workspaceResourceId'),'/')[8], ')')]",
+ "product": "[Concat('OMSGallery/', 'ContainerInsights')]",
+ "promotionCode": "",
+ "publisher": "Microsoft"
+ }
+ }
+ ]
+ },
+ "parameters": {}
+ }
+ }
+ ]
+ }
+ ```
+
+1. Save this file as **containerSolution.json** to a local folder.
+
+1. Paste the following JSON syntax into another new file:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaceResourceId": {
+ "value": "<workspaceResourceId>"
+ },
+ "workspaceRegion": {
+ "value": "<workspaceRegion>"
+ }
+ }
+ }
+ ```
+
+1. Edit the value for **workspaceResourceId** by using the value you copied in step 3. For **workspaceRegion**, copy the **Region** value returned by the Azure CLI command [az monitor log-analytics workspace show](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-show&preserve-view=true).
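+
+    For example, the following sketch returns just the region value; the resource group and workspace names are placeholders:
+
+    ```azurecli
+    az monitor log-analytics workspace show --resource-group <resourceGroupName> --workspace-name <workspaceName> --query location -o tsv
+    ```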
+
+1. Save this file as **containerSolutionParams.json** to a local folder.
+
+1. You're ready to deploy this template.
+
+ - To deploy with Azure PowerShell, use the following commands in the folder that contains the template:
+
+ ```powershell
+      # Sign in to the cloud environment of your Log Analytics workspace. Specify the corresponding cloud environment in the following command.
+      Connect-AzAccount -Environment <AzureCloud | AzureChinaCloud | AzureUSGovernment>
+ ```
+
+ ```powershell
+      # Set the context to the subscription of the Log Analytics workspace
+      Set-AzContext -SubscriptionId <subscription Id of Log Analytics workspace>
+ ```
+
+ ```powershell
+      # Run the deployment command to add the Container insights solution to the specified Log Analytics workspace
+      New-AzResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <resource group of Log Analytics workspace> -TemplateFile .\containerSolution.json -TemplateParameterFile .\containerSolutionParams.json
+ ```
+
+      The configuration change can take a few minutes to finish. When it's finished, the output includes a result similar to the following example:
+
+ ```powershell
+ provisioningState : Succeeded
+ ```
+
+ - To deploy with the Azure CLI, run the following commands:
+
+ ```azurecli
+      # Set the cloud environment of your Log Analytics workspace, then sign in
+      az cloud set --name <AzureCloud | AzureChinaCloud | AzureUSGovernment>
+      az login
+      az account set --subscription "Subscription Name"
+      # Run the deployment command to add the Container insights solution to the specified Log Analytics workspace
+      az deployment group create --resource-group <resource group of log analytics workspace> --name <deployment name> --template-file ./containerSolution.json --parameters @./containerSolutionParams.json
+ ```
+
+      The configuration change can take a few minutes to finish. When it's finished, the output includes a result similar to the following example:
+
+ ```azurecli
+ provisioningState : Succeeded
+ ```
+
+ After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster.
+
+## Install the Helm chart
+
+In this section, you install the containerized agent for Container insights. Before you proceed, identify the workspace ID required for the `amalogsagent.secret.wsid` parameter and the primary key required for the `amalogsagent.secret.key` parameter. To identify this information, follow these steps and then run the commands to install the agent by using the Helm chart.
+
+1. Run the following command to identify the workspace ID:
+
+ `az monitor log-analytics workspace list --resource-group <resourceGroupName>`
+
+    In the output, find the workspace name under the field **name**. Then copy the workspace ID of that Log Analytics workspace under the field **customerId**.
+
+1. Run the following command to identify the primary key for the workspace:
+
+ `az monitor log-analytics workspace get-shared-keys --resource-group <resourceGroupName> --workspace-name <logAnalyticsWorkspaceName>`
+
+ In the output, find the primary key under the field **primarySharedKey** and then copy the value.
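+
+    If you prefer to capture both values in shell variables for the commands that follow, here's a minimal sketch; the resource group and workspace names are placeholders:
+
+    ```azurecli
+    WSID=$(az monitor log-analytics workspace show --resource-group <resourceGroupName> --workspace-name <logAnalyticsWorkspaceName> --query customerId -o tsv)
+    KEY=$(az monitor log-analytics workspace get-shared-keys --resource-group <resourceGroupName> --workspace-name <logAnalyticsWorkspaceName> --query primarySharedKey -o tsv)
+    ```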
+
+ >[!NOTE]
+ >The following commands are applicable only for Helm version 2. Use of the `--name` parameter isn't applicable with Helm version 3.
+
+ If your Kubernetes cluster communicates through a proxy server, configure the parameter `amalogsagent.proxy` with the URL of the proxy server. If the cluster doesn't communicate through a proxy server, you don't need to specify this parameter. For more information, see the section [Configure the proxy endpoint](#configure-the-proxy-endpoint) later in this article.
+
+1. Add the Azure charts repository to your local list by running the following command:
+
+ ```
+ helm repo add microsoft https://microsoft.github.io/charts/repo
+    ```
+
+1. Install the chart by running the following command:
+
+    ```
+    helm install --name myrelease-1 \
+    --set amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> microsoft/azuremonitor-containers
+    ```
+
+ If the Log Analytics workspace is in Azure China 21Vianet, run the following command:
+
+    ```
+    helm install --name myrelease-1 \
+    --set amalogsagent.domain=opinsights.azure.cn,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> microsoft/azuremonitor-containers
+    ```
+
+ If the Log Analytics workspace is in Azure US Government, run the following command:
+
+    ```
+    helm install --name myrelease-1 \
+    --set amalogsagent.domain=opinsights.azure.us,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> microsoft/azuremonitor-containers
+    ```
+
+### Enable the Helm chart by using the API model
+
+You can specify an add-on in the AKS Engine cluster specification JSON file, which is also referred to as the API model. In this add-on, provide the base64-encoded version of `WorkspaceGUID` and `WorkspaceKey` of the Log Analytics workspace where the collected monitoring data is stored. You can find `WorkspaceGUID` and `WorkspaceKey` by using steps 1 and 2 in the previous section.
+
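+To produce the base64-encoded values, you can encode the plain workspace GUID and key. This is a minimal sketch with placeholder values:
+
+```bash
+# Placeholder values; -n avoids encoding a trailing newline.
+echo -n "<workspaceGuid>" | base64
+echo -n "<workspaceKey>" | base64
+```
+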
+Supported API definitions for the Azure Stack Hub cluster can be found in the example [kubernetes-container-monitoring_existing_workspace_id_and_key.json](https://github.com/Azure/aks-engine/blob/master/examples/addons/container-monitoring/kubernetes-container-monitoring_existing_workspace_id_and_key.json). Specifically, find the **addons** property in **kubernetesConfig**:
+
+```json
+"orchestratorType": "Kubernetes",
+ "kubernetesConfig": {
+ "addons": [
+ {
+ "name": "container-monitoring",
+ "enabled": true,
+ "config": {
+ "workspaceGuid": "<Azure Log Analytics Workspace Id in Base-64 encoded>",
+ "workspaceKey": "<Azure Log Analytics Workspace Key in Base-64 encoded>"
+ }
+ }
+ ]
+ }
+```
+
+## Configure agent data collection
+
+Starting with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. For more information on agent data collection settings, see [Configure agent data collection for Container insights](container-insights-agent-config.md).
+
+After you've successfully deployed the chart, you can review the data for your hybrid Kubernetes cluster in Container insights from the Azure portal.
+
+>[!NOTE]
+>Ingestion latency is around 5 to 10 minutes from the agent to commit in the Log Analytics workspace. Status of the cluster shows the value **No data** or **Unknown** until all the required monitoring data is available in Azure Monitor.
+
+## Configure the proxy endpoint
+
+Starting with chart version 2.7.1, the chart supports specifying the proxy endpoint with the `amalogsagent.proxy` chart parameter so that the agent can communicate through your proxy server. Communication between the Container insights agent and Azure Monitor can go through an HTTP or HTTPS proxy server. Both anonymous and basic authentication with a username and password are supported.
+
+The proxy configuration value has the syntax `[protocol://][user:password@]proxyhost[:port]`.
+
+> [!NOTE]
+>If your proxy server doesn't require authentication, you still need to specify a pseudo username and password. It can be any username or password.
+
+|Property| Description |
+|--|-|
+|protocol | HTTP or HTTPS |
+|user | Optional username for proxy authentication |
+|password | Optional password for proxy authentication |
+|proxyhost | Address or FQDN of the proxy server |
+|port | Optional port number for the proxy server |
+
+An example is `amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080`.
+
+If you specify the protocol as **http**, the HTTP requests are created by using an SSL/TLS secure connection. Your proxy server must support SSL/TLS protocols.
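+
+For example, here's a hedged sketch of passing the proxy setting at install time, reusing the release name and placeholder values from the earlier Helm examples:
+
+```
+helm install --name myrelease-1 \
+--set amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> microsoft/azuremonitor-containers
+```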
+
+## Troubleshooting
+
+If you encounter an error while you attempt to enable monitoring for your hybrid Kubernetes cluster, copy the PowerShell script [TroubleshootError_nonAzureK8s.ps1](https://aka.ms/troubleshoot-non-azure-k8s) and save it to a folder on your computer. The script is designed to detect, and attempt to correct, the following issues:
+
+- The specified Log Analytics workspace is valid.
+- The Log Analytics workspace is configured with the Container insights solution. If not, the script configures the workspace.
+- The Azure Monitor Agent replicaset pods are running.
+- The Azure Monitor Agent daemonset pods are running.
+- The Azure Monitor Agent Health service is running.
+- The Log Analytics workspace ID and key configured on the containerized agent match the workspace that the insight is configured with.
+- All the Linux worker nodes have the `kubernetes.io/role=agent` label required to schedule the replicaset pod. If the label doesn't exist, the script adds it (a manual sketch follows this list).
+- The cAdvisor secure port (10250) or unsecure port (10255) is opened on all nodes in the cluster.
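+
+If the node label check fails, you can also add the label yourself. This is a minimal sketch; the node name is a placeholder:
+
+```bash
+# Label a Linux worker node so that the agent replicaset pod can be scheduled on it.
+kubectl label node <nodeName> kubernetes.io/role=agent
+```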
+
+To execute with Azure PowerShell, use the following commands in the folder that contains the script:
+
+```powershell
+.\TroubleshootError_nonAzureK8s.ps1 -azureLogAnalyticsWorkspaceResourceId /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName> -kubeConfig <kubeConfigFile> -clusterContextInKubeconfig <clusterContext>
+```
+
+## Next steps
+
+Now that monitoring is enabled to collect the health and resource utilization of your hybrid Kubernetes cluster and the workloads running on it, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
# View Kubernetes logs, events, and pod metrics in real time
-Container insights includes the Live Data feature. You can use this advanced diagnostic feature for direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderror), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time.
+The Live Data feature in Container insights gives you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderr), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get events`, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time.
> [!NOTE]
> AKS uses [Kubernetes cluster-level logging architectures](https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures). You can use tools such as Fluentd or Fluent Bit to collect logs.
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
The required tables for this chart include KubeNodeInventory, KubePodInventory,
| project ClusterName, NodeName, LastReceivedDateTime, Status, ContainerCount, UpTimeMs = UpTimeMs_long, Aggregation = Aggregation_real, LimitValue = LimitValue_real, list_TrendPoint, Labels, ClusterId
```
-## Resource logs
-
-For details on resource logs for AKS clusters, see [Collect control plane logs](../../aks/monitor-aks.md#resource-logs).
## Prometheus metrics

The following examples require the configuration described in [Send Prometheus metrics to Log Analytics workspace with Container insights](container-insights-prometheus-logs.md).
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
Container insights uses a containerized version of the Log Analytics agent for L
Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes.
-If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [Agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [Agent release announcements](https://aka.ms/ci-logs-agent-release-notes).
### Upgrade the agent on an AKS cluster
With the rise of Kubernetes and the OSS ecosystem, Container Insights migrate to
## Next steps

If you experience issues when you upgrade the agent, review the [troubleshooting guide](container-insights-troubleshoot.md) for support.
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
The following metrics have unique behavior characteristics:
- The `oomKilledContainerCount` metric is only sent when there are OOM killed containers.
- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.
- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.

**Prometheus only**
azure-monitor Container Insights Optout Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-hybrid.md
+
+ Title: Disable Container insights on your hybrid Kubernetes cluster
+description: This article describes how you can stop monitoring of your hybrid Kubernetes cluster with Container insights.
+ Last updated : 08/21/2023+++
+# Disable Container insights on your hybrid Kubernetes cluster
+
+This article shows how to disable Container insights for the following Kubernetes environments:
+
+- AKS Engine on Azure and Azure Stack
+- OpenShift version 4 and higher
+- Azure Arc-enabled Kubernetes (preview)
+
+## How to stop monitoring using Helm
+
+The following steps apply to the following environments:
+
+- AKS Engine on Azure and Azure Stack
+- OpenShift version 4 and higher
+
+1. Identify the Container insights Helm chart release installed on your cluster by running the following command:
+
+ ```
+ helm list
+ ```
+
+ The output resembles the following:
+
+ ```
+ NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+ azmon-containers-release-1 default 3 2020-04-21 15:27:24.1201959 -0700 PDT deployed azuremonitor-containers-2.7.0 7.0.0-1
+ ```
+
+ *azmon-containers-release-1* represents the helm chart release for Container insights.
+
+2. To delete the chart release, run the following helm command.
+
+ `helm delete <releaseName>`
+
+ Example:
+
+ `helm delete azmon-containers-release-1`
+
+ This removes the release from the cluster. You can verify by running the `helm list` command:
+
+ ```
+ NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+ ```
+
+The configuration change can take a few minutes to complete. Because Helm tracks your releases even after you've deleted them, you can audit a cluster's history, and you can even undelete a release with `helm rollback`.
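+
+For example, here's a sketch of restoring the release deleted above, assuming revision 3 from the earlier `helm list` output:
+
+```
+helm rollback azmon-containers-release-1 3
+```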
+
+## How to stop monitoring on Azure Arc-enabled Kubernetes
+
+### Using PowerShell
+
+1. Download the script that removes the monitoring add-on from your cluster and save it to a local folder by using the following command:
+
+ ```powershell
+ wget https://aka.ms/disable-monitoring-powershell-script -OutFile disable-monitoring.ps1
+ ```
+
+2. Configure the `$azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
+
+ ```powershell
+ $azureArcClusterResourceId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
+ ```
+
+3. Configure the `$kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`. If you want to use the current context, set the value to `""`.
+
+ ```powershell
+ $kubeContext = "<kubeContext name of your k8s cluster>"
+ ```
+
+4. Run the following command to stop monitoring the cluster.
+
+ ```powershell
+ .\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext
+ ```
+
+#### Using service principal
+The script *disable-monitoring.ps1* uses interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the `$servicePrincipalClientId`, `$servicePrincipalClientSecret`, and `$tenantId` parameters with the values of the service principal you intend to use to the *disable-monitoring.ps1* script.
+
+```powershell
+$subscriptionId = "<subscription Id of the Azure Arc-connected cluster resource>"
+$servicePrincipal = New-AzADServicePrincipal -Role Contributor -Scope "/subscriptions/$subscriptionId"
+
+$servicePrincipalClientId = $servicePrincipal.ApplicationId.ToString()
+$servicePrincipalClientSecret = [System.Net.NetworkCredential]::new("", $servicePrincipal.Secret).Password
+$tenantId = (Get-AzSubscription -SubscriptionId $subscriptionId).TenantId
+```
+
+For example:
+
+```powershell
+.\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext -servicePrincipalClientId $servicePrincipalClientId -servicePrincipalClientSecret $servicePrincipalClientSecret -tenantId $tenantId
+```
++
+### Using bash
+
+1. Download the script that removes the monitoring add-on from your cluster and save it to a local folder by using the following command:
+
+ ```bash
+ curl -o disable-monitoring.sh -L https://aka.ms/disable-monitoring-bash-script
+ ```
+
+2. Configure the `azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource.
+
+ ```bash
+ export AZUREARCCLUSTERRESOURCEID="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>"
+ ```
+
+3. Configure the `kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`.
+
+ ```bash
+ export KUBECONTEXT="<kubeContext name of your k8s cluster>"
+ ```
+
+4. To stop monitoring your cluster, run one of the following commands based on your deployment scenario.
+
+ Run the following command to stop monitoring the cluster using the current context.
+
+ ```bash
+ bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID
+ ```
+
+    Run the following command to stop monitoring the cluster by specifying a context:
+
+ ```bash
+ bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT
+ ```
+
+#### Using service principal
+The bash script *disable-monitoring.sh* uses interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the `--client-id`, `--client-secret`, and `--tenant-id` values of the service principal you intend to use to the *disable-monitoring.sh* bash script.
+
+```bash
+SUBSCRIPTIONID="<subscription Id of the Azure Arc-connected cluster resource>"
+SERVICEPRINCIPAL=$(az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTIONID}")
+SERVICEPRINCIPALCLIENTID=$(echo $SERVICEPRINCIPAL | jq -r '.appId')
+
+SERVICEPRINCIPALCLIENTSECRET=$(echo $SERVICEPRINCIPAL | jq -r '.password')
+TENANTID=$(echo $SERVICEPRINCIPAL | jq -r '.tenant')
+```
+
+For example:
+
+```bash
+bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT --client-id $SERVICEPRINCIPALCLIENTID --client-secret $SERVICEPRINCIPALCLIENTSECRET --tenant-id $TENANTID
+```
+
+## Next steps
+
+If the Log Analytics workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace](../logs/delete-workspace.md).
azure-monitor Container Insights Optout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md
Title: Stop monitoring your Azure Kubernetes Service cluster | Microsoft Docs
+ Title: Disable Container insights on your Azure Kubernetes Service (AKS) cluster
description: This article describes how you can discontinue monitoring of your Azure AKS cluster with Container insights. Previously updated : 05/24/2022 Last updated : 08/21/2023 ms.devlang: azurecli
-# Stop monitoring your Azure Kubernetes Service cluster with Container insights
+# Disable Container insights on your Azure Kubernetes Service (AKS) cluster
After you enable monitoring of your Azure Kubernetes Service (AKS) cluster, you can stop monitoring the cluster if you decide you no longer want to monitor it. This article shows you how to do this task by using the Azure CLI or the provided Azure Resource Manager templates (ARM templates).
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights in Azure Monitor
description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure. Previously updated : 09/28/2022 Last updated : 08/14/2023 # Container insights overview
-Container insights is a feature designed to monitor the performance of container workloads deployed to the cloud. It gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md). Log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md).
+Container insights is a feature of Azure Monitor that monitors the performance and health of container workloads deployed to [Azure](../../aks/intro-kubernetes.md) or that are managed by [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). It collects memory and processor metrics from controllers, nodes, and containers in addition to gathering container logs. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and pre-built [workbooks](container-insights-reports.md).
+The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
+> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
## Features of Container insights
-Container insights deliver a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads. You can:
+Container insights includes the following features to help you understand the performance and health of your Kubernetes cluster and container workloads:
-- Identify resource bottlenecks by identifying AKS containers running on the node and their processor and memory utilization.-- Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances.
+- Identify resource bottlenecks by identifying containers running on each node and their processor and memory utilization.
+- Identify processor and memory utilization of container groups and their containers hosted in container instances.
- View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod.
- Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod.
- Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.
+- Access live container logs and metrics generated by the container engine to help with troubleshooting issues in real time.
- Configure alerts to proactively notify you or record when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.
-- Integrate with [Prometheus](https://aka.ms/azureprometheus-promio-docs) to view application and workload metrics it collects from nodes and Kubernetes by using [queries](container-insights-log-query.md) to create custom alerts and dashboards and perform detailed analysis.
-The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*.
+## Access Container insights
-> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
+Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
-## Access Container insights
+## Data collected
+Container insights sends data to [Logs](../logs/data-platform-logs.md) and [Metrics](../essentials/data-platform-metrics.md) where you can analyze it using different features of Azure Monitor. It works with other Azure services such as [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) and [Managed Grafana](../../managed-grafan#monitoring-data).
-Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page.
## Supported configurations
+Container insights supports the following configurations:
-- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).-- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine).
+- [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).
- [Azure Container Instances](../../container-instances/container-instances-overview.md). - Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises. - [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md).
Container insights supports clusters running the Linux and Windows Server 2019 o
>[!NOTE] > Container insights support for Windows Server 2022 operating system is in public preview. ++ ## Next steps To begin monitoring your Kubernetes cluster, review [Enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
Title: Monitor Azure Kubernetes Service (AKS) with Azure Monitor
-description: Describes how to use Azure Monitor monitor the health and performance of AKS clusters and their workloads.
+ Title: Monitor Kubernetes clusters using Azure services and cloud native tools
+description: Describes how to monitor the health and performance of the different layers of your Kubernetes environment using Azure Monitor and cloud native services in Azure.
- Previously updated : 03/08/2023 Last updated : 08/17/2023
-# Monitor Azure Kubernetes Service (AKS) with Azure Monitor
+# Monitor Kubernetes clusters using Azure services and cloud native tools
-This article describes how to use Azure Monitor to monitor the health and performance of [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). It includes collection of telemetry critical for monitoring, analysis and visualization of collected data to identify trends, and how to configure alerting to be proactively notified of critical issues.
+This article describes how to monitor the health and performance of your Kubernetes clusters and the workloads running on them using Azure Monitor and related Azure and cloud native services. This includes clusters running in Azure Kubernetes Service (AKS) or other clouds such as [AWS](https://aws.amazon.com/kubernetes/) and [GCP](https://cloud.google.com/kubernetes-engine). Different sets of guidance are provided for the different roles that typically manage unique components that make up a Kubernetes environment.
-The [Cloud Monitoring Guide](/azure/cloud-adoption-framework/manage/monitor/) defines the [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements) you should focus on for your Azure resources. This scenario focuses on health and status monitoring using Azure Monitor.
-## Scope of the scenario
+> [!IMPORTANT]
+> This article provides complete guidance on monitoring the different layers of your Kubernetes environment based on Azure Kubernetes Service (AKS) or Kubernetes clusters in other clouds. If you're just getting started with AKS or Azure Monitor, see [Monitoring AKS](../../aks/monitor-aks.md) for basic information for getting started monitoring an AKS cluster.
+
+## Layers and roles of Kubernetes environment
+
+Following is an illustration of a common model of a typical Kubernetes environment, starting from the infrastructure layer up through applications. Each layer has distinct monitoring requirements that are addressed by different services and typically managed by different roles in the organization.
++
+Responsibility for the different layers of a Kubernetes environment, and for the applications that depend on it, is typically shared by multiple roles. Depending on the size of your organization, these roles may be performed by different people or even different teams. The following table describes the different roles, while the sections below provide the monitoring scenarios that each role typically encounters.
+
+| Roles | Description |
+|:|:|
+| [Developer](#developer) | Develops and maintains the application running on the cluster. Responsible for application-specific traffic, including application performance and failures. Maintains reliability of the application according to SLAs. |
+| [Platform engineer](#platform-engineer) | Responsible for the Kubernetes cluster. Provisions and maintains the platform used by developers. |
+| [Network engineer](#network-engineer) | Responsible for traffic between workloads and any ingress/egress with the cluster. Analyzes network traffic and performs threat analysis. |
+
+## Selection of monitoring tools
+
+Azure provides a complete set of services based on [Azure Monitor](../overview.md) for monitoring the health and performance of different layers of your Kubernetes infrastructure and the applications that depend on it. These services work in conjunction with each other to provide a complete monitoring solution and are recommended both for [AKS](../../aks/intro-kubernetes.md) and your Kubernetes clusters running in other clouds. You may have an existing investment in cloud native technologies endorsed by the [Cloud Native Computing Foundation](https://www.cncf.io/), in which case you may choose to integrate Azure tools into your existing environment.
+
+Your choice of which tools to deploy and their configuration will depend on the requirements of your particular environment. For example, you may use the managed offerings in Azure for Prometheus and Grafana, or you may choose to use your existing installation of these tools with your Kubernetes clusters in Azure. Your organization may also use alternative tools to Container insights to collect and analyze Kubernetes logs, such as Splunk or Datadog.
+
+> [!IMPORTANT]
+> Monitoring a complex environment such as Kubernetes involves collecting a significant amount of telemetry, much of which incurs a cost. You should collect just enough data to meet your requirements. This includes the amount of data collected, the frequency of collection, and the retention period. If you're very cost conscious, you may choose to implement a subset of the full functionality in order to reduce your monitoring spend.
+
+## Network engineer
+The *Network Engineer* is responsible for traffic between workloads and any ingress/egress with the cluster. They analyze network traffic and perform threat analysis.
++
+### Azure services for the network engineer
+
+The following table lists the services that are commonly used by the network engineer to monitor the health and performance of the Kubernetes cluster and its components.
++
+| Service | Description |
+|:|:|
+| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Suite of tools in Azure to monitor the virtual networks used by your Kubernetes clusters and diagnose detected issues. |
+| [Network insights](../../network-watcher/network-insights-overview.md) | Feature of Azure Monitor that includes a visual representation of the performance and health of different network components and provides access to the network monitoring tools that are part of Network Watcher. |
-This article does *not* include information on the following scenarios:
+[Network insights](../../network-watcher/network-insights-overview.md) is enabled by default and requires no configuration. Network Watcher is also typically [enabled by default in each Azure region](../../network-watcher/network-watcher-create.md).
-- Monitoring of Kubernetes clusters outside of Azure except for referring to existing content for Azure Arc-enabled Kubernetes-- Monitoring of AKS with tools other than Azure Monitor except to fill gaps in Azure Monitor and Container Insights
+### Monitor level 1 - Network
-> [!NOTE]
-> Azure Monitor was designed to monitor the availability and performance of cloud resources. While the operational data stored in Azure Monitor may be useful for investigating security incidents, other services in Azure were designed to monitor security. Security monitoring for AKS is done with [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). See [Monitor virtual machines with Azure Monitor - Security monitoring](../vm/monitor-virtual-machine-security.md) for a description of the security monitoring tools in Azure and their relationship to Azure Monitor.
->
-> For information on using the security services to monitor AKS, see [Microsoft Defender for Kubernetes - the benefits and features](../../defender-for-cloud/defender-for-kubernetes-introduction.md) and [Connect Azure Kubernetes Service (AKS) diagnostics logs to Microsoft Sentinel](../../sentinel/data-connectors/azure-kubernetes-service-aks.md).
+Following are common scenarios for monitoring the network.
+
+- Create [flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md) to log information about the IP traffic flowing through network security groups used by your cluster and then use [traffic analytics](../../network-watcher/traffic-analytics.md) to analyze and provide insights on this data. You'll most likely use the same Log Analytics workspace for traffic analytics that you use for Container insights and your control plane logs. A CLI sketch for enabling a flow log follows this list.
+- Using traffic analytics, you can determine if any traffic is flowing either to or from any unexpected ports used by the cluster and also if any traffic is flowing over public IPs that shouldn't be exposed. Use this information to determine whether your network rules need modification.
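+
+As an illustration, here's a hedged Azure CLI sketch for enabling an NSG flow log with traffic analytics that sends to the same Log Analytics workspace; all names and IDs are placeholders:
+
+```azurecli
+az network watcher flow-log create --location <region> --name <flowLogName> \
+  --resource-group <resourceGroupName> --nsg <nsgName> --storage-account <storageAccountId> \
+  --traffic-analytics true --workspace <logAnalyticsWorkspaceResourceId>
+```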
++
+## Platform engineer
+
+The *platform engineer*, also known as the cluster administrator, is responsible for the Kubernetes cluster itself. They provision and maintain the platform used by developers. They need to understand the health of the cluster and its components, and be able to troubleshoot any detected issues. They also need to understand the cost to operate the cluster and potentially to be able to allocate costs to different teams.
+++
+Large organizations may also have a *fleet architect*, a role that's similar to the platform engineer but responsible for multiple clusters. They need visibility across the entire environment and must perform administrative tasks at scale. At-scale recommendations for the fleet architect are included in the guidance below.
++
+### Azure services for platform engineer
+
+The following table lists the Azure services for the platform engineer to monitor the health and performance of the Kubernetes cluster and its components.
+
+| Service | Description |
+|:|:|
+| [Container Insights](container-insights-overview.md) | Azure service for AKS and Azure Arc-enabled Kubernetes clusters that use a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) to collect stdout/stderr logs, performance metrics, and Kubernetes events from each node in your cluster. It also collects metrics from the Kubernetes control plane and stores them in the workspace. You can view the data in the Azure portal or query it using [Log Analytics](../logs/log-analytics-overview.md). |
+| [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io) is a cloud-native metrics solution from the Cloud Native Computing Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed solution that's compatible with the Prometheus query language (PromQL) and Prometheus alerts and integrates with Azure Managed Grafana for visualization. This service supports your investment in open source tools without the complexity of managing your own Prometheus environment. |
+| [Azure Arc-enabled Kubernetes](container-insights-enable-arc-enabled-clusters.md) | Allows you to attach to Kubernetes clusters running in other clouds so that you can manage and configure them in Azure. With the Arc agent installed, you can monitor AKS and hybrid clusters together using the same methods and tools, including Container insights and Prometheus. |
+| [Azure Managed Grafana](../../managed-grafan) | Fully managed implementation of [Grafana](https://grafana.com/), which is an open-source data visualization platform commonly used to present Prometheus and other data. Multiple predefined Grafana dashboards are available for monitoring Kubernetes and full-stack troubleshooting. |
+
+### Configure monitoring for platform engineer
+
+The sections below identify the steps for complete monitoring of your Kubernetes environment using the Azure services in the above table. Functionality and integration options are provided for each to help you determine where you may need to modify this configuration to meet your particular requirements.
++
+#### Enable scraping of Prometheus metrics
+
+> [!IMPORTANT]
+> To use Azure Monitor managed service for Prometheus, you need to have an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). For information on design considerations for a workspace configuration, see [Azure Monitor workspace architecture](../essentials/azure-monitor-workspace-overview.md#azure-monitor-workspace-architecture).
-## Container Insights
+Enable scraping of Prometheus metrics by Azure Monitor managed service for Prometheus from your cluster by using one of the following methods (a CLI sketch for the last option follows the list):
-AKS generates [platform metrics and resource logs](../../aks/monitor-aks-reference.md) that you can use to monitor basic health and performance. Enable [Container Insights](container-insights-overview.md) to expand on this monitoring. Container Insights is a feature in Azure Monitor that monitors the health and performance of managed Kubernetes clusters hosted on AKS and provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios.
+- Select the option **Enable Prometheus metrics** when you [create an AKS cluster](../../aks/learn/quick-kubernetes-deploy-portal.md).
+- Select the option **Enable Prometheus metrics** when you enable Container insights on an existing [AKS cluster](container-insights-enable-aks.md) or [Azure Arc-enabled Kubernetes cluster](container-insights-enable-arc-enabled-clusters.md).
+- Enable for an existing [AKS cluster](../essentials/prometheus-metrics-enable.md) or [Arc-enabled Kubernetes cluster (preview)](../essentials/prometheus-metrics-from-arc-enabled-cluster.md).
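+
+As an example of the last option for an existing AKS cluster, here's a minimal Azure CLI sketch; the names and workspace resource ID are placeholders:
+
+```azurecli
+az aks update --enable-azure-monitor-metrics --name <clusterName> --resource-group <resourceGroupName> \
+  --azure-monitor-workspace-resource-id <azureMonitorWorkspaceResourceId>
+```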
-[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are popular CNCF-backed open-source tools for Kubernetes monitoring. AKS exposes many metrics in Prometheus format, which makes Prometheus a popular choice for monitoring. [Container Insights](container-insights-overview.md) has native integration with AKS, like collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics. Many native Azure Monitor insights are built on top of Prometheus metrics. Container Insights complements and completes E2E monitoring of AKS, including log collection, which Prometheus as stand-alone tool doesnΓÇÖt provide. You can use Prometheus integration and Azure Monitor together for E2E monitoring.
-To learn more about using Container Insights, see the [Container Insights overview](container-insights-overview.md). To learn more about features and monitoring scenarios of Container Insights, see [Monitor layers of AKS with Container Insights](#monitor-layers-of-aks-with-container-insights).
+If you already have a Prometheus environment that you want to use for your AKS clusters, then enable Azure Monitor managed service for Prometheus and then use remote-write to send data to your existing Prometheus environment. You can also [use remote-write to send data from your existing self-managed Prometheus environment to Azure Monitor managed service for Prometheus](../essentials/prometheus-remote-write.md).
+See [Default Prometheus metrics configuration in Azure Monitor](../essentials/prometheus-metrics-scrape-default.md) for details on the metrics that are collected by default and their frequency of collection. If you want to customize the configuration, see [Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-scrape-configuration.md).
-## Configure monitoring
-The following sections describe the steps required to configure full monitoring of your AKS cluster using Azure Monitor.
+#### Enable Grafana for analysis of Prometheus data
-### Create Log Analytics workspace
+[Create an instance of Managed Grafana](../../managed-grafan)
-You need at least one Log Analytics workspace to support Container Insights and to collect and analyze other telemetry about your AKS cluster. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Azure Monitor Logs pricing details](../logs/cost-logs.md) for details.
+If you have an existing Grafana environment, then you can continue to use it and add Azure Monitor managed service for [Prometheus as a data source](https://grafana.com/docs/grafana/latest/datasources/prometheus/). You can also [add the Azure Monitor data source to Grafana](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) to use data collected by Container insights in custom Grafana dashboards. Perform this configuration if you want to focus on Grafana dashboards rather than using the Container insights views and reports.
-If you're just getting started with Azure Monitor, we recommend starting with a single workspace and creating additional workspaces as your requirements evolve. Many environments will use a single workspace for all the Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](../vm/monitor-virtual-machine-security.md), although it's common to segregate availability and performance telemetry from security data.
+A variety of prebuilt dashboards are available for monitoring Kubernetes clusters, including several that present similar information as Container insights views. [Search the available Grafana dashboard templates](https://grafana.com/grafan).
-For information on design considerations for a workspace configuration, see [Designing your Azure Monitor Logs deployment](../logs/workspace-design.md).
-### Enable Container Insights
+#### Enable Container Insights for collection of logs
-When you enable Container Insights for your AKS cluster, it deploys a containerized version of the [Log Analytics agent](../agents/../agents/log-analytics-agent.md) that sends data to Azure Monitor. For prerequisites and configuration options, see [Enable Container Insights](container-insights-onboard.md).
+When you enable Container Insights for your Kubernetes cluster, it deploys a containerized version of the [Azure Monitor agent](../agents/log-analytics-agent.md) that sends data to a Log Analytics workspace in Azure Monitor. Container insights collects container stdout/stderr, infrastructure logs, and performance data. All log data is stored in a Log Analytics workspace where it can be analyzed using [Kusto Query Language (KQL)](../logs/log-query-overview.md).
-### Configure collection from Prometheus
+See [Enable Container insights](../containers/container-insights-onboard.md) for prerequisites and configuration options for onboarding your Kubernetes clusters. [Onboard using Azure Policy](container-insights-enable-aks-policy.md) to ensure that all clusters retain a consistent configuration.
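+
+For an AKS cluster, enabling Container insights amounts to turning on the monitoring add-on. Here's a minimal Azure CLI sketch with placeholder names:
+
+```azurecli
+az aks enable-addons --addons monitoring --name <clusterName> --resource-group <resourceGroupName> \
+  --workspace-resource-id <logAnalyticsWorkspaceResourceId>
+```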
-Container Insights allows you to send Prometheus metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) or to your Log Analytics workspace without requiring a local Prometheus server. You can analyze this data using Azure Monitor features along with other data collected by Container Insights. For details on this configuration, see [Collect Prometheus metrics with Container Insights](container-insights-prometheus.md).
+Once Container insights is enabled for a cluster, perform the following actions to optimize your installation.
-### Collect resource logs
+- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logging-v2.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/basic-logs-configure.md).
+- Reduce your cost for Container insights data ingestion by reducing the amount of data that's collected. See [Enable cost optimization settings in Container insights (preview)](../containers/container-insights-cost-config.md) for details.
-The logs for AKS control plane components are implemented in Azure as [resource logs](../essentials/resource-logs.md). Container Insights doesn't use these logs, so you need to create your own log queries to view and analyze them. For details on log structure and queries, see [How to query logs from Container Insights](container-insights-log-query.md#resource-logs).
+If you have an existing solution for collection of logs, then follow the guidance for that tool, or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to [Azure Event Hubs](../../event-hubs/event-hubs-about.md) to forward to an alternate system.
-You need to create a diagnostic setting to collect resource logs. You can create multiple diagnostic settings to send different sets of logs to different locations. To create diagnostic settings for your AKS cluster, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).
-There's a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs. You can send logs to an Azure storage account to reduce costs if you need to retain the information. For a description of the categories that are available for AKS, see [Resource logs](../../aks/monitor-aks-reference.md#resource-logs). For details on the cost of ingesting and retaining log data, see [Azure Monitor Logs pricing details](../logs/cost-logs.md).
+#### Collect control plane logs for AKS clusters
+
+The logs for AKS control plane components are implemented in Azure as [resource logs](../essentials/resource-logs.md). Container Insights doesn't use these logs, so you need to create your own log queries to view and analyze them. For details on log structure and queries, see [How to query logs from Container Insights](../../aks/monitor-aks.md#resource-logs).
+
+[Create a diagnostic setting](../../aks/monitor-aks.md#resource-logs) for each AKS cluster to send resource logs to a Log Analytics workspace. Use [Azure Policy](../essentials/diagnostic-settings-policy.md) to ensure consistent configuration across multiple clusters.
+
+There's a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. For a description of the categories that are available for AKS, see [Resource logs](../../aks/monitor-aks-reference.md#resource-logs). Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs. You can send logs to an Azure storage account to reduce costs if you need to retain the information for compliance reasons. For details on the cost of ingesting and retaining log data, see [Azure Monitor Logs pricing details](../logs/cost-logs.md).
+
+If you're unsure which resource logs to initially enable, use the following recommendations, which are based on the most common customer requirements. You can enable other categories later if you need to.
-If you're unsure which resource logs to initially enable, use the following recommendations:
| Category | Enable? | Destination |
|:---|:---|:---|
-| cluster-autoscaler | Enable if autoscale is enabled | Log Analytics workspace |
-| guard | Enable if Azure Active Directory is enabled | Log Analytics workspace |
| kube-apiserver | Enable | Log Analytics workspace |
| kube-audit | Enable | Azure storage. This keeps costs to a minimum yet retains the audit logs if they're required by an auditor. |
| kube-audit-admin | Enable | Log Analytics workspace |
| kube-controller-manager | Enable | Log Analytics workspace |
| kube-scheduler | Disable | |
-| AllMetrics | Enable | Log Analytics workspace |
-
-The recommendations are based on the most common customer requirements. You can enable other categories later if you need to.
-
-## Access Azure Monitor features
-
-Access Azure Monitor features for all AKS clusters in your subscription from the **Monitoring** menu in the Azure portal, or for a single AKS cluster from the **Monitor** section of the **Kubernetes services** menu. The following image shows the **Monitoring** menu for your AKS cluster:
+| cluster-autoscaler | Enable if autoscale is enabled | Log Analytics workspace |
+| guard | Enable if Azure Active Directory is enabled | Log Analytics workspace |
+| AllMetrics | Disable since metrics are collected in Managed Prometheus | Log Analytics workspace |
-| Menu option | Description |
-|:|:|
-| Insights | Opens Container Insights for the current cluster. Select **Containers** from the **Monitor** menu to open Container Insights for all clusters. |
-| Alerts | View alerts for the current cluster. |
-| Metrics | Open metrics explorer with the scope set to the current cluster. |
-| Diagnostic settings | Create diagnostic settings for the cluster to collect resource logs. |
-| Advisor | Recommendations for the current cluster from Azure Advisor. |
-| Logs | Open Log Analytics with the scope set to the current cluster to analyze log data and access prebuilt queries. |
-| Workbooks | Open workbook gallery for Kubernetes service. |
+If you have an existing solution for log collection, either follow the guidance for that tool, or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to Azure Event Hubs for forwarding to an alternate system.
-## Monitor layers of AKS with Container Insights
+#### Collect Activity log for AKS clusters
+Configuration changes to your AKS clusters are stored in the [Activity log](../essentials/activity-log.md). [Create a diagnostic setting to send this data to your Log Analytics workspace](../essentials/activity-log.md#send-to-log-analytics-workspace) to analyze it with other monitoring data. There's no cost for this data collection, and you can analyze or alert on the data using Log Analytics.
-Your monitoring approach should be based on your unique workload requirements, and factors such as scale, topology, organizational roles, and multi-cluster tenancy. This section presents a common bottoms-up approach, starting from infrastructure up through applications. Each layer has distinct monitoring requirements.
+### Monitor level 2 - Cluster level components
-### Level 1 - Cluster level components
-
-The cluster level includes the following component:
+The cluster level includes the following components:
| Component | Monitoring requirements |
|:---|:---|
| Node | Understand the readiness status and performance of CPU, memory, disk and IP usage for each node and proactively monitor their usage trends before deploying any workloads. |
-Use existing views and reports in Container Insights to monitor cluster level components.
+Following are common scenarios for monitoring the cluster level components.
+**Container insights**<br>
- Use the **Cluster** view to see the performance of the nodes in your cluster, including CPU and memory utilization.
- Use the **Nodes** view to see the health of each node and the health and performance of the pods running on them. For more information on analyzing node health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](container-insights-analyze.md).
- Under **Reports**, use the **Node Monitoring** workbooks to analyze disk capacity, disk IO, and GPU usage. For more information about these workbooks, see [Node Monitoring workbooks](container-insights-reports.md#node-monitoring-workbooks).
+- Under **Monitoring**, select **Workbooks**, then **Subnet IP Usage** to see the IP allocation and assignment on each node for a selected time range.
+
+**Network observability (east-west traffic)**<br>
+- For AKS clusters, use the [Network Observability add-on for AKS (preview)](https://aka.ms/NetObsAddonDoc) to monitor and observe access between services in the cluster (east-west traffic).
- :::image type="content" source="media/monitor-kubernetes/container-insights-cluster-view.png" alt-text="Screenshot of Container Insights cluster view." lightbox="media/monitor-kubernetes/container-insights-cluster-view.png":::
+**Grafana dashboards**<br>
+- Multiple [Kubernetes dashboards](https://grafana.com/grafana/dashboards/?search=kubernetes) are available that visualize the performance and health of your nodes based on data stored in Prometheus.
+- Use Grafana dashboards with [Prometheus metric values](../essentials/prometheus-metrics-scrape-default.md) related to disk such as `node_disk_io_time_seconds_total` and `windows_logical_disk_free_bytes` to monitor attached storage.
-- Under **Monitoring**, you can select **Workbooks**, then **Subnet IP Usage** to see the IP allocation and assignment on each node for a selected time-range.
+**Log Analytics**<br>
+- Select the [Containers category](../logs/queries.md?tabs=groupby#find-and-filter-queries) in the [queries dialog](../logs/queries.md#queries-dialog) for your Log Analytics workspace to access prebuilt log queries for your cluster, including the **Image inventory** log query that retrieves data from the [ContainerImageInventory](/azure/azure-monitor/reference/tables/containerimageinventory) table populated by Container insights.
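For reference, the **Image inventory** query is similar to the following sketch (the exact prebuilt query text may differ):

```kql
// Count container images in use across the cluster, split by image, tag,
// and running state. ContainerImageInventory is populated by Container insights.
ContainerImageInventory
| summarize AggregatedValue = count() by Image, ImageTag, Running
```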
- :::image type="content" source="media/monitor-kubernetes/monitoring-workbooks-subnet-ip-usage.png" alt-text="Screenshot of Container Insights workbooks." lightbox="media/monitor-kubernetes/monitoring-workbooks-subnet-ip-usage.png":::
+**Troubleshooting**<br>
+- For troubleshooting scenarios, you may need to access nodes directly for maintenance or immediate log collection. For security purposes, AKS nodes aren't exposed to the internet but you can use the `kubectl debug` command to SSH to the AKS nodes. For more information on this process, see [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](../../aks/ssh.md).
-For troubleshooting scenarios, you may need to access the AKS nodes directly for maintenance or immediate log collection. For security purposes, the AKS nodes aren't exposed to the internet but you can use the `kubectl debug` command to SSH to the AKS nodes. For more information on this process, see [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](../../aks/ssh.md).
+**Cost analysis**<br>
+- Configure [OpenCost](https://www.opencost.io), an open-source, vendor-neutral CNCF sandbox project for understanding Kubernetes costs, to support analysis of your cluster costs. It exports detailed costing data to Azure storage.
+- Use data from OpenCost to break down relative usage of the cluster by different teams in your organization so that you can allocate costs between them.
+- Use data from OpenCost to ensure that the cluster is using the full capacity of its nodes by densely packing workloads, using fewer large nodes as opposed to many smaller nodes.
-### Level 2 - Managed AKS components
-The managed AKS level includes the following components:
+### Monitor level 3 - Managed Kubernetes components
+
+The managed Kubernetes level includes the following components:
| Component | Monitoring |
|:---|:---|
| API Server | Monitor the status of API server and identify any increase in request load and bottlenecks if the service is down. |
| Kubelet | Monitor Kubelet to help troubleshoot pod management issues, pods not starting, nodes not ready, or pods getting killed. |
-Azure Monitor and Container Insights don't provide full monitoring for the API server.
+Following are common scenarios for monitoring your managed Kubernetes components.
+
+**Container insights**<br>
+- Under **Monitoring**, select **Metrics** to view the **Inflight Requests** counter.
+- Under **Reports**, use the **Kubelet** workbook to see the health and performance of each kubelet. For more information about these workbooks, see [Resource Monitoring workbooks](container-insights-reports.md#resource-monitoring-workbooks).
-- Under **Monitoring**, you can select **Metrics** to view the **Inflight Requests** counter, but you should refer to metrics in Prometheus for a complete view of the API server performance. This includes such values as request latency and workqueue processing time.
-- To see critical metrics for the API server, see [Grafana Labs](https://grafana.com/grafan).
+**Grafana**<br>
+- Use a dashboard such as [Kubernetes apiserver](https://grafana.com/grafana/dashboards/12006) for a complete view of the API server performance. This includes such values as request latency and workqueue processing time.
- :::image type="content" source="media/monitor-kubernetes/grafana-api-server.png" alt-text="Screenshot of dashboard for Grafana API server." lightbox="media/monitor-kubernetes/grafana-api-server.png":::
+**Log Analytics**<br>
+- Use [log queries with resource logs](../../aks/monitor-aks.md#sample-log-queries) to analyze [control plane logs](#collect-control-plane-logs-for-aks-clusters) generated by AKS components. A sample query for control plane logs follows the Activity log example below.
+- Any configuration activities for AKS are logged in the Activity log. When you [send the Activity log to a Log Analytics workspace](#collect-activity-log-for-aks-clusters), you can analyze it with Log Analytics. For example, the following sample query can be used to return records identifying a successful upgrade across all your AKS clusters.
-- Under **Reports**, use the **Kubelet** workbook to see the health and performance of each kubelet. For more information about these workbooks, see [Resource Monitoring workbooks](container-insights-reports.md#resource-monitoring-workbooks). For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](../../aks/kubelet-logs.md).
+ ``` kql
+ AzureActivity
+ | where CategoryValue == "Administrative"
+ | where OperationNameValue == "MICROSOFT.CONTAINERSERVICE/MANAGEDCLUSTERS/WRITE"
+ | extend properties=parse_json(Properties_d)
+ | where properties.message == "Upgrade Succeeded"
+ | order by TimeGenerated desc
+ ```
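The control plane logs collected through your diagnostic settings can be queried the same way. The following sketch counts entries per category over the last hour; it assumes the logs are collected in Azure diagnostics mode (the `AzureDiagnostics` table):

```kql
// Volume of AKS control plane log entries by category over the last hour.
// Assumes resource logs are collected in Azure diagnostics mode.
AzureDiagnostics
| where TimeGenerated > ago(1h)
| where Category in ("kube-apiserver", "kube-audit-admin", "kube-controller-manager")
| summarize Entries = count() by Category, bin(TimeGenerated, 5m)
```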
-### Resource logs
-Use [log queries with resource logs](container-insights-log-query.md#resource-logs) to analyze control plane logs generated by AKS components.
+**Troubleshooting**<br>
+- For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](../../aks/kubelet-logs.md).
-### Level 3 - Kubernetes objects and workloads
+
+### Monitor level 4 - Kubernetes objects and workloads
The Kubernetes objects and workloads level includes the following components:
The Kubernetes objects and workloads level includes the following components:
| Pods | Monitor status and resource utilization, including CPU and memory, of the pods running on your AKS cluster. |
| Containers | Monitor resource utilization, including CPU and memory, of the containers running on your AKS cluster. |
-Use existing views and reports in Container Insights to monitor containers and pods.
+Following are common scenarios for monitoring your Kubernetes objects and workloads.
+
+**Container insights**<br>
- Use the **Nodes** and **Controllers** views to see the health and performance of the pods running on them and drill down to the health and performance of their containers.
- Use the **Containers** view to see the health and performance for the containers. For more information on analyzing container health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](container-insights-analyze.md#analyze-nodes-controllers-and-container-health).
+- Under **Reports**, use the **Deployments** workbook to see deployment metrics. For more information, see [Deployment & HPA metrics with Container Insights](container-insights-deployment-hpa-metrics.md).
- :::image type="content" source="media/monitor-kubernetes/container-insights-containers-view.png" alt-text="Screenshot of Container Insights containers view." lightbox="media/monitor-kubernetes/container-insights-containers-view.png":::
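The views above are built on data that Container insights stores in your Log Analytics workspace, so you can also query it directly. The following sketch surfaces pods that aren't currently running, assuming the default `KubePodInventory` schema:

```kql
// Pods not in the Running state, grouped by namespace and status.
// KubePodInventory is populated by Container insights.
KubePodInventory
| where TimeGenerated > ago(30m)
| where PodStatus != "Running"
| summarize Pods = dcount(Name) by Namespace, PodStatus
```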
+**Grafana dashboards**<br>
+- Multiple [Kubernetes dashboards](https://grafana.com/grafana/dashboards/?search=kubernetes) are available that visualize the performance and health of your nodes based on data stored in Prometheus.
-- Under **Reports**, use the **Deployments** workbook to see deployment metrics. For more information, ee [Deployment & HPA metrics with Container Insights](container-insights-deployment-hpa-metrics.md).
- :::image type="content" source="media/monitor-kubernetes/container-insights-deployments-workbook.png" alt-text="Screenshot of Container Insights deployments workbook." lightbox="media/monitor-kubernetes/container-insights-deployments-workbook.png":::
+**Live data**<br>
+- In troubleshooting scenarios, Container Insights provides access to live AKS container logs (stdout/stderr), events, and pod metrics. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real-time](container-insights-livedata-overview.md).
-#### Live data
-In troubleshooting scenarios, Container Insights provides access to live AKS container logs (stdout/stderror), events and pod metrics. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real-time](container-insights-livedata-overview.md).
+### Alerts for the platform engineer
+[Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. If you have an existing [ITSM solution](../alerts/itsmc-overview.md) for alerting, you can [integrate it with Azure Monitor](../alerts/itsmc-overview.md). You can also [export workspace data](../logs/logs-data-export.md) to send data from your Log Analytics workspace to another location that supports your current alerting solution.
-### Level 4 - Applications
+#### Alert types
+The following table describes the different types of custom alert rules that you can create based on the data collected by the services described above.
-The application level includes the following component:
-
-| Component | Monitoring requirements |
-|:|:|
-| Applications | Monitor microservice application deployments to identify application failures and latency issues, including information like request rates, response times, and exceptions. |
-
-Application Insights provides complete monitoring of applications running on AKS and other environments. If you have a Java application, you can provide monitoring without instrumenting your code by following [Zero instrumentation application monitoring for Kubernetes - Azure Monitor Application Insights](../app/kubernetes-codeless.md).
-
-If you want complete monitoring, you should configure code-based monitoring depending on your application:
-
-- [ASP.NET applications](../app/asp-net.md)
-- [ASP.NET Core applications](../app/asp-net-core.md)
-- [.NET Console applications](../app/console.md)
-- [Java](../app/opentelemetry-enable.md?tabs=java)
-- [Node.js](../app/nodejs.md)
-- [Python](../app/opencensus-python.md)
-- [Other platforms](../app/app-insights-overview.md#supported-languages)
-
-For more information, see [What is Application Insights?](../app/app-insights-overview.md).
-
-### Level 5 - External components
-
-The components external to AKS include the following:
-
-| Component | Monitoring requirements |
+| Alert type | Description |
|:---|:---|
-| Service Mesh, Ingress, Egress | Metrics based on component. |
-| Database and work queues | Metrics based on component. |
-
-Monitor external components such as Service Mesh, Ingress, Egress with Prometheus and Grafana, or other proprietary tools. Monitor databases and other Azure resources using other features of Azure Monitor.
-
-## Analyze metric data with the Metrics explorer
-
-Use the **Metrics** explorer to perform custom analysis of metric data collected for your containers. It allows you plot charts, visually correlate trends, and investigate spikes and dips in your metrics values. You can create metrics alert to proactively notify you when a metric value crosses a threshold and pin charts to dashboards for use by different members of your organization.
+| Prometheus alerts | [Prometheus alerts](../alerts/prometheus-alerts.md) are written in Prometheus Query Language (PromQL) and applied to Prometheus metrics stored in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). Recommended alerts already include the most common Prometheus alerts, and you can [create additional alert rules](../essentials/prometheus-rule-groups.md) as required. |
+| Metric alert rules | Metric alert rules use the same metric values as the Metrics explorer. In fact, you can create an alert rule directly from the metrics explorer with the data you're currently analyzing. Metric alert rules can be useful to alert on AKS performance using any of the values in [AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics). |
+| Log alert rules | Use log alert rules to generate an alert from the results of a log query; a sample query sketch follows this table. For more information, see [How to create log alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md). |
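As an illustration, a log alert rule could fire on container OOM kills with a query like the following sketch, which assumes the `KubeEvents` table collected by Container insights:

```kql
// Recent OOM kill events surfaced by Kubernetes.
// KubeEvents is populated by Container insights.
KubeEvents
| where TimeGenerated > ago(15m)
| where Reason has "OOMKilling"
| project TimeGenerated, Namespace, Name, Reason, Message
```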
-For more information, see [Getting started with Azure Metrics Explorer](../essentials/metrics-getting-started.md). For a list of the platform metrics collected for AKS, see [Monitoring AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics). When Container Insights is enabled for a cluster, [addition metric values](container-insights-update-metrics.md) are available.
+#### Recommended alerts
+Start with the set of recommended Prometheus alerts in [Metric alert rules in Container insights (preview)](container-insights-metric-alerts.md#prometheus-alert-rules), which includes the most common alerting conditions for a Kubernetes cluster. You can add more alert rules later as you identify additional alerting conditions.
+## Developer
-## Analyze log data with Log Analytics
+In addition to developing the application, the *developer* maintains the application running on the cluster. They're responsible for application-specific traffic, including application performance and failures, and for maintaining the reliability of the application according to company-defined SLAs.
-Select **Logs** to use the Log Analytics tool to analyze resource logs or dig deeper into data used to create the views in Container Insights. Log Analytics allows you to perform custom analysis of your log data.
-For more information on Log Analytics and to get started with it, see:
+### Azure services for developer
- [How to query logs from Container Insights](container-insights-log-query.md)
-- [Using queries in Azure Monitor Log Analytics](../logs/queries.md)
-- [Monitoring AKS data reference logs](../../aks/monitor-aks-reference.md#azure-monitor-logs-tables)
-- [Log Analytics tutorial](../logs/log-analytics-tutorial.md)
+The following table lists the services that are commonly used by the developer to monitor the health and performance of the application running on the cluster.
-You can also use log queries to analyze resource logs from AKS. For a list of the log categories available, see [AKS data reference resource logs](../../aks/monitor-aks-reference.md#resource-logs). You must create a diagnostic setting to collect each category as described in [Configure monitoring](#configure-monitoring) before the data can be collected.
-## Alerts
-
-[Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. There are no preconfigured alert rules for AKS clusters, but you can create your own based on data collected by Container Insights.
-
-> [!IMPORTANT]
-> Most alert rules have a cost dependent on the type of rule, how many dimensions it includes, and how frequently it runs. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before creating any alert rules.
+| Service | Description |
+|:---|:---|
+| [Application insights](../app/app-insights-overview.md) | Feature of Azure Monitor that provides application performance monitoring (APM) to monitor applications running on your Kubernetes cluster from development, through test, and into production. Quickly identify and mitigate latency and reliability issues using distributed traces. Supports [OpenTelemetry](../app/opentelemetry-overview.md#opentelemetry) for vendor-neutral instrumentation. |
-### Choose an alert type
-The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario will depend on where the data is located that you want to set an alert for.
+See [Data Collection Basics of Azure Monitor Application Insights](../app/opentelemetry-overview.md) for options on configuring data collection from the application running on your cluster and decision criteria on the best method for your particular requirements.
-You may have cases where data for a particular alerting scenario is available in both **Metrics** and **Logs**, and you need to determine which rule type to use. It's typically the best strategy to use metric alerts instead of log alerts when possible, because metric alerts are more responsive and stateful. You can create a metric alert on any values you can analyze in the Metrics explorer. If the logic for your alert rule requires data in **Logs**, or if it requires more complex logic, then you can use a log query alert rule.
+### Monitor level 5 - Application
-For example, if you want an alert when an application workload is consuming excessive CPU, you can create a metric alert using the CPU metric. If you need an alert when a particular message is found in a control plane log, then you'll require a log alert.
+Following are common scenarios for monitoring your application.
-### Metric alert rules
-Metric alert rules use the same metric values as the Metrics explorer. In fact, you can create an alert rule directly from the metrics explorer with the data you're currently analyzing. You can use any of the values in [AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics) for metric alert rules.
+**Application performance**<br>
+- Use the **Performance** view in Application insights to view the performance of different operations in your application.
+- Use [Profiler](../profiler/profiler-overview.md) to capture and view performance traces for your application.
+- Use [Application Map](../app/app-map.md) to view the dependencies between your application components and identify any bottlenecks.
+- Enable [distributed tracing](../app/distributed-tracing-telemetry-correlation.md), which provides a performance profiler that works like call stacks for cloud and microservices architectures, to gain better observability into the interaction between services.
-Container Insights includes a feature that creates a recommended set of metric alert rules for your AKS cluster. This feature creates new metric values used by the alert rules that you can also use in the Metrics explorer. For more information, see [Recommended metric alerts (preview) from Container Insights](container-insights-metric-alerts.md).
+**Application failures**<br>
+- Use the **Failures** tab of Application insights to view the number of failed requests and the most common exceptions.
+- Ensure that alerts for [failure anomalies](../alerts/proactive-failure-diagnostics.md) identified with [smart detection](../alerts/proactive-diagnostics.md) are configured properly.
-### Log alert rules
+**Health monitoring**<br>
+- Create an [Availability test](../app/availability-overview.md) in Application insights to set up a recurring test that monitors the availability and responsiveness of your application.
+- Use the [SLA report](../app/sla-report.md) to calculate and report SLA for web tests.
+- Use [annotations](../app/annotations.md) to identify when a new build is deployed so that you can visually inspect any change in performance after the update.
-Use log alert rules to generate an alert from the results of a log query. This may be data collected by Container Insights or from AKS resource logs. For more information, see [How to create log alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md).
+**Application logs**<br>
+- Container insights sends stdout/stderr logs to a Log Analytics workspace. See [Resource logs](../../aks/monitor-aks-reference.md#resource-logs) for a description of the different logs and [Kubernetes Services](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services) for a list of the tables each is sent to.
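To inspect those logs directly, the following sketch pulls recent stderr output for one namespace; it assumes the `ContainerLogV2` schema is enabled and uses a placeholder namespace name:

```kql
// Recent stderr log lines for the hypothetical "my-app" namespace.
// ContainerLogV2 is populated by Container insights.
ContainerLogV2
| where TimeGenerated > ago(1h)
| where PodNamespace == "my-app" and LogSource == "stderr"
| project TimeGenerated, PodName, ContainerName, LogMessage
```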
-### Virtual machine alerts
+**Service mesh**<br>
-AKS relies on a Virtual Machine Scale Set that must be healthy to run AKS workloads. You can alert on critical metrics such as CPU, memory, and storage for the virtual machines using the guidance at [Monitor virtual machines with Azure Monitor: Alerts](../vm/monitor-virtual-machine-alerts.md).
+- For AKS clusters, deploy the [Istio-based service mesh add-on](../../aks/istio-about.md), which provides observability for your microservices architecture. [Istio](https://istio.io/) is an open-source service mesh that layers transparently onto existing distributed applications. The add-on assists in the deployment and management of Istio for AKS.
-### Prometheus alerts
-You can configure Prometheus alerts to cover scenarios where Azure Monitor either doesn't have the data required for an alerting condition or the alerting may not be responsive enough. For example, Azure Monitor doesn't collect critical information for the API server. You can create a log query alert using the data from the kube-apiserver resource log category, but it can take up to several minutes before you receive an alert, which may not be sufficient for your requirements. In this case, we recommend configuring Prometheus alerts.
+## See also
-## Next steps
+- See [Monitoring AKS](../../aks/monitor-aks.md) for guidance on monitoring specific to Azure Kubernetes Service (AKS).
-- For more information about AKS metrics, logs, and other important values, see [Monitoring AKS data reference](../../aks/monitor-aks-reference.md).
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
Previously updated : 07/27/2022 Last updated : 08/16/2023 #Customer intent: As a dev-ops administrator I want to migrate my retention setting from diagnostic setting retention storage to Azure Storage lifecycle management so that it continues to work after the feature has been deprecated.
This guide walks you through migrating from using Azure diagnostic settings stor
> [!IMPORTANT] > **Deprecation Timeline.**
-> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. If you have configured retention settings, you'll still be able to see and change them.
-> - September 30, 2023 – You will no longer be able to use the API or Azure portal to configure retention setting unless you're changing them to *0*. Existing retention rules will still be respected.
+> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. This includes using the portal, CLI, PowerShell, and ARM and Bicep templates. If you have configured retention settings, you'll still be able to see and change them in the portal.
+> - September 30, 2023 – You will no longer be able to use the API (CLI, PowerShell, or templates) or the Azure portal to configure retention settings unless you're changing them to *0*. Existing retention rules will still be respected.
> - September 30, 2025 – All retention functionality for the Diagnostic Settings Storage Retention feature will be disabled across all environments.
To migrate your diagnostics settings retention rules, follow the steps below:
1. Set your retention time, then select **Next** :::image type="content" source="./media/retention-migration/lifecycle-management-add-rule-base-blobs.png" alt-text="A screenshot showing the Base blobs tab for adding a lifecycle rule.":::
-1. On the **Filters** tab, under **Blob prefix** set path or prefix to the container or logs you want the retention rule to apply to.
-For example, for all Function App logs, you could use the container *insights-logs-functionapplogs* to set the retention for all Function App logs.
-To set the rule for a specific subscription, resource group, and function app name, use *insights-logs-functionapplogs/resourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your function app name\>*.
+1. On the **Filters** tab, under **Blob prefix**, set the path or prefix to the container or logs you want the retention rule to apply to. The path or prefix can be at any level within the container and will apply to all blobs under that path or prefix.
+For example, to apply the rule to *all* activity logs, use the container *insights-activity-logs* to set the retention for all of the logs in that container.
+To set the rule for a specific web app, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your webapp name\>*.
+
+ Use the Storage browser to help you find the path or prefix.
+ The example below shows the prefix for a specific web app: *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001/PROVIDERS/MICROSOFT.WEB/SITES/appfromdocker1*.
+ To set the rule for all resources in the resource group, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001*.
+ :::image type="content" source="./media/retention-migration/blob-prefix.png" alt-text="A screenshot showing the Storage browser and resource path." lightbox="./media/retention-migration/blob-prefix.png":::
1. Select **Add** to save the rule. ## Next steps
azure-monitor Prometheus Rule Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-rule-groups.md
Last updated 09/28/2022
# Azure Monitor managed service for Prometheus rule groups Rules in Prometheus act on data as it's collected. They're configured as part of a Prometheus rule group, which is stored in [Azure Monitor workspace](azure-monitor-workspace-overview.md). Rules are run sequentially in the order they're defined in the group. - ## Rule types There are two types of Prometheus rules as described in the following table.
There are two types of Prometheus rules as described in the following table.
| Recording |[Recording rules](https://aka.ms/azureprometheus-promio-recrules) allow you to precompute frequently needed or computationally extensive expressions and store their result as a new set of time series. Time series created by recording rules are ingested back to your Azure Monitor workspace as new Prometheus metrics. | ## Create Prometheus rules
-Azure Managed Prometheus rule groups, recording rules and alert rules can be created and configured using The Azure resource type **Microsoft.AlertsManagement/prometheusRuleGroups**, where the alert rules and recording rules are defined as part of the rule group properties.Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md). Prometheus rule groups can be created using Azure Resource Manager (ARM) templates, API, Azure CLI, or PowerShell.
+Azure Managed Prometheus rule groups, recording rules, and alert rules can be created and configured using the Azure resource type **Microsoft.AlertsManagement/prometheusRuleGroups**, where the alert rules and recording rules are defined as part of the rule group properties. Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md). Prometheus rule groups can be created using Azure Resource Manager (ARM) templates, API, Azure CLI, or PowerShell.
+
+Azure managed Prometheus rule groups follow the structure and terminology of the open-source Prometheus rule groups. Rule names, expressions, the 'for' clause, labels, and annotations are all supported in the Azure version. Note the following key differences between OSS rule groups and Azure managed Prometheus:
+* Azure managed Prometheus rule groups are managed as Azure resources, and include necessary information for resource management, such as the subscription and resource group where the Azure rule group should reside.
+* Azure managed Prometheus alert rules include dedicated properties that allow alerts to be processed like other Azure Monitor alerts. For example, alert severity, action group association, and alert auto resolve configuration are supported as part of Azure managed Prometheus alert rules.
> [!NOTE] > For your AKS or ARC Kubernetes clusters, you can use some of the recommended alerts rules. See pre-defined alert rules [here](../containers/container-insights-metric-alerts.md#enable-prometheus-alert-rules). ### Limiting rules to a specific cluster
-You can optionally limit the rules in a rule group to query data originating from a specific cluster, using the rule group `clusterName` property.
-You should limit rules to a single cluster if your Azure Monitor workspace contains a large amount of data from multiple clusters. In such a case, there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the `clusterName` property, you can create multiple rule groups, each configured with the same rules, and therefore limit each group to cover a different cluster.
+You can optionally limit the rules in a rule group to query data originating from a single specific cluster by adding a cluster scope to your rule group, by using the rule group `clusterName` property, or both.
+You should limit rules to a single cluster if your Azure Monitor workspace contains a large amount of data from multiple clusters. In such a case, there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the cluster scope, you can create multiple rule groups, each configured with the same rules, with each group covering a different cluster.
+
+To limit your rule group to a cluster scope, you should add the Azure Resource ID of your cluster to the rule group **scopes[]** list. **The scopes list must still include the Azure Monitor workspace resource ID**. The following cluster resource types are supported as a cluster scope:
+* Azure Kubernetes Service clusters (AKS) (Microsoft.ContainerService/managedClusters)
+* Azure Arc-enabled Kubernetes clusters (Microsoft.kubernetes/connectedClusters)
+* Azure connected appliances (Microsoft.ResourceConnector/appliances)
+
+In addition to the cluster ID, you can configure the `clusterName` property of your rule group. The `clusterName` property must match the `cluster` label that is added to your metrics when scraped from a specific cluster. By default, this label is set to the last part (resource name) of your cluster ID. If you've changed this label using the [`cluster_alias`](../essentials/prometheus-metrics-scrape-configuration.md#cluster-alias) setting in your cluster scraping configmap, you must include the updated value in the rule group `clusterName` property. If your scraping uses the default `cluster` label value, the `clusterName` property is optional.
+
+Here's an example of how a rule group is configured to limit query to a specific cluster:
-- The `clusterName` value must be identical to the `cluster` label that is added to the metrics from a specific cluster during data collection.-- If `clusterName` isn't specified for a specific rule group, the rules in the group query all the data in the workspace from all clusters.
+``` json
+{
+ "name": "sampleRuleGroup",
+ "type": "Microsoft.AlertsManagement/prometheusRuleGroups",
+ "apiVersion": "2023-03-01",
+ "location": "northcentralus",
+ "properties": {
+ "description": "Sample Prometheus Rule Group limited to a specific cluster",
+ "scopes": [
+ "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>",
+ "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.containerservice/managedclusters/<myClusterName>"
+ ],
+ "clusterName": "<myClusterName>",
+ "rules": [
+ {
+ ...
+ }
+ ]
+ }
+}
+```
+If neither a cluster ID scope nor the `clusterName` property is specified for a rule group, the rules in the group query data from all the clusters in the workspace.
### Creating Prometheus rule group using Resource Manager template
The basic steps are as follows:
2. Deploy the template using any deployment method, such as [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), or [Rest API](../../azure-resource-manager/templates/deploy-rest.md). ### Template example for a Prometheus rule group
-Following is a sample template that creates a Prometheus rule group, including one recording rule and one alert rule. This template creates a resource of type `Microsoft.AlertsManagement/prometheusRuleGroups`. The rules are executed in the order they appear within a group.
+Following is a sample template that creates a Prometheus rule group, including one recording rule and one alert rule. This template creates a resource of type `Microsoft.AlertsManagement/prometheusRuleGroups`. The scope of this group is limited to a single AKS cluster. The rules are executed in the order they appear within a group.
``` json {
Following is a sample template that creates a Prometheus rule group, including o
"properties": { "description": "Sample Prometheus Rule Group", "scopes": [
- "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>"
+ "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>",
+ "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.containerservice/managedclusters/<myClusterName>"
],
"enabled": true,
"clusterName": "<myClusterName>",
Following is a sample template that creates a Prometheus rule group, including o
}, "actions": [ {
- "actionGroupId": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>"
+ "actionGroupID": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>"
} ] }
The rule group contains the following properties.
| `name` | True | string | Prometheus rule group name |
| `type` | True | string | `Microsoft.AlertsManagement/prometheusRuleGroups` |
| `apiVersion` | True | string | `2023-03-01` |
-| `location` | True | string | Resource location from regions supported in the preview |
-| `properties.description` | False | string | Rule group description |
-| `properties.scopes` | True | string[] | Target Azure Monitor workspace. Only one scope currently supported |
+| `location` | True | string | Resource location from regions supported in the preview. |
+| `properties.description` | False | string | Rule group description. |
+| `properties.scopes` | True | string[] | Must include the target Azure Monitor workspace ID. Can optionally include one cluster ID as well. |
| `properties.enabled` | False | boolean | Enable/disable group. Default is true. |
-| `properties.clusterName` | False | string | Apply rule to data from a specific cluster. Default is apply to all data in workspace. |
+| `properties.clusterName` | False | string | Must match the `cluster` label that is added to metrics scraped from your target cluster. By default, set to the last part (resource name) of the cluster ID that appears in `scopes[]`. |
| `properties.interval` | False | string | Group evaluation interval. Default = PT1M | ### Recording rules
The `rules` section contains the following properties for alerting rules.
|:---|:---|:---|:---|:---|
| `alert` | False | string | Alert rule name |
| `expression` | True | string | PromQL expression to evaluate. |
-| `for` | False | string | Alert firing timeout. Values - 'PT1M', 'PT5M' etc. |
+| `for` | False | string | Alert firing timeout. Values - PT1M, PT5M etc. |
| `labels` | False | object | labels key-value pairs | Prometheus alert rule labels. These labels are added to alerts fired by this rule. |
| `rules.annotations` | False | object | Annotations key-value pairs to add to the alert. |
| `enabled` | False | boolean | Enable/disable group. Default is true. |
The `rules` section contains the following properties for alerting rules.
If you have a [Prometheus rules configuration file](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#configuring-rules) (in YAML format), you can now convert it to an Azure Prometheus rule group ARM template, using the [az-prom-rules-converter utility](https://github.com/Azure/prometheus-collector/tree/main/tools/az-prom-rules-converter#az-prom-rules-converter). The rules file can contain definition of one or more rule groups.
-In addition to the rules file, you can provide the utility with additional properties that are needed to create the Azure Prometheus rule groups, including: subscription, resource group, location, target Azure Monitor workspace, target cluster name, and action groups (used for alert rules). The utility creates a template file that can be deployed directly or within a deployment pipe providing some of these properties as parameters. Note that properties provided to the utility are used for all the rule groups in the template, e.g., all rule groups in the file will be created in the same subscription/resource group/location, using the same Azure Monitor workspace, etc. If an action group is provided as a parameter to the utility, the same action group will be used in all the alert rules in the template. If you want to change this default configuration (e.g., use different action groups in different rules) you can edit the resulting template according to your needs, before deploying it.
+In addition to the rules file, you must provide the utility with other properties that are needed to create the Azure Prometheus rule groups, including: subscription, resource group, location, target Azure Monitor workspace, target cluster ID and name, and action groups (used for alert rules). The utility creates a template file that can be deployed directly or within a deployment pipeline, providing some of these properties as parameters. Properties that you provide to the utility are used for all the rule groups in the template. For example, all rule groups in the file are created in the same subscription, resource group, and location, and using the same Azure Monitor workspace. If an action group is provided as a parameter to the utility, the same action group is used in all the alert rules in the template. If you want to change this default configuration (for example, use different action groups in different rules), you can edit the resulting template according to your needs before deploying it.
> [!NOTE]
-> !The az-prom-convert-utility is provided as a courtesy tool. We recommend that you review the resulting template and verify it matches your intended configuration.
+> The az-prom-convert-utility is provided as a courtesy tool. We recommend that you review the resulting template and verify it matches your intended configuration.
### Creating Prometheus rule group using Azure CLI
To enable or disable a rule, select the rule in the Azure portal. Select either
> After you disable or re-enable a rule or a rule group, it may take few minutes for the rule group list to reflect the updated status of the rule or the group. + ## Next steps - [Learn more about the Azure alerts](../alerts/alerts-types.md).
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Subscriptions that contained a Log Analytics workspace or Application Insights r
Access to the legacy Free Trial pricing tier was limited on July 1, 2022. Pricing information for the Standalone and Per Node pricing tiers is available [here](https://aka.ms/OMSpricing).
+> [!IMPORTANT]
+> The legacy pricing tiers do not support access to some of the newest features in Log Analytics such as ingesting data as cost-effective Basic Logs.
+ ### Free Trial pricing tier
-Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)). The data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free Trial tier.
+Workspaces in the Free Trial pricing tier have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)). Data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes, not production workloads. No SLA is provided for the Free Trial tier.
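To check how close a workspace is to that cap, the following sketch sums daily billable ingestion from the `Usage` table (quantities are reported in MB):

```kql
// Daily billable ingestion in MB, to compare against the 500 MB/day limit.
Usage
| where TimeGenerated > ago(7d)
| where IsBillable == true
| summarize IngestedMB = sum(Quantity) by bin(TimeGenerated, 1d)
| order by TimeGenerated desc
```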
> [!NOTE] > Creating new workspaces in, or moving existing workspaces into, the legacy Free Trial pricing tier was possible only until July 1, 2022.
azure-monitor Tutorial Logs Ingestion Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md
Last updated 03/20/2023
The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme). > [!NOTE]
-> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-api.md) for a similar tutorial that uses Azure Resource Manager templates to configure these components.
+> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial that uses the Azure portal UI to configure these components.
The steps required to configure the Logs ingestion API are as follows:
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
ms.assetid:
na-+ Last updated 03/08/2023
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
* [SAP S/4HANA in Linux on Azure - Azure Architecture Center](/azure/architecture/reference-architectures/sap/sap-s4hana) * [Run SAP BW/4HANA with Linux VMs - Azure Architecture Center](/azure/architecture/reference-architectures/sap/run-sap-bw4hana-with-linux-virtual-machines) * [SAP HANA Azure virtual machine storage configurations](../virtual-machines/workloads/sap/hana-vm-operations-storage.md)
+* [SAP on Azure NetApp Files Sizing Best Practices](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-netapp-files-sizing-best-practices/ba-p/3895300)
* [Optimize HANA deployments with Azure NetApp Files application volume group for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimize-hana-deployments-with-azure-netapp-files-application/ba-p/3683417) * [Using Azure NetApp Files AVG for SAP HANA to deploy HANA with multiple partitions (MP)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-netapp-files-avg-for-sap-hana-to-deploy-hana-with/ba-p/3742747) * [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md)
azure-netapp-files Backup Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md
na Previously updated : 02/23/2023 Last updated : 08/15/2023 # Requirements and considerations for Azure NetApp Files backup
Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol
* Policy-based (scheduled) Azure NetApp Files backup is independent from [snapshot policy configuration](azure-netapp-files-manage-snapshots.md).
-* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a cross-region replication *destination* volume.
+* In a [cross-region replication](cross-region-replication-introduction.md) (CRR) or [cross-zone replication](cross-zone-replication-introduction.md) (CZR) setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a CRR or CZR *destination* volume.
* See [Restore a backup to a new volume](backup-restore-new-volume.md) for additional considerations related to restoring backups.
azure-netapp-files Cross Zone Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-requirements-considerations.md
na Previously updated : 05/19/2023 Last updated : 08/18/2023 # Requirements and considerations for using cross-zone replication
This article describes requirements and considerations about [using the volume c
* After you set up cross-zone replication, the replication process creates *SnapMirror snapshots* to provide references between the source volume and the destination volume. SnapMirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete SnapMirror snapshots until you delete the replication relationship and volume. * You cannot mount a dual-protocol volume until you [authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume) and the initial [transfer](cross-region-replication-display-health-status.md#display-replication-status) happens. * You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after you've deleted replication relationship. You cannot delete manual snapshots for the destination volume until you break the replication relationship.
-* You can't revert a source or destination volume of cross-zone replication to a snapshot. The snapshot revert functionality is unavailable out for volumes in a replication relationship.
+* When reverting a source volume with an active volume replication relationship, only snapshots that are more recent than the SnapMirror snapshot can be used in the revert operation. For more information, see [Revert a volume using snapshot revert with Azure NetApp Files](snapshots-revert-volume.md).
* Data replication volumes support [customer-managed keys](configure-customer-managed-keys.md). * You can't currently use cross-zone replication with [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) (larger than 100 TiB).
azure-netapp-files Faq Application Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md
The scale-out architecture would be comprised of multiple IBM MQ multi-instance
## I'm running Apache ActiveMQ with LevelDB or KahaDB on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the *NFS* protocol?
->[!NOTE]
-> This section contains references to the terms *slave* and *master*, terms that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- If you're running the Apache ActiveMQ, it's recommended to deploy [ActiveMQ High Availability with Pluggable Storage Lockers](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq).
-ActiveMQ high availability (HA) models ensure that a broker instance is always online and able to process message traffic. The two most common ActiveMQ HA models involve sharing a filesystem over a network. The purpose is to provide either LevelDB or KahaDB to the active and passive broker instances. These HA models require that an OS-level lock be obtained and maintained on a file in the LevelDB or KahaDB directories, called "lock." There are some problems with this ActiveMQ HA model. They can lead to a "no-master" situation, where the "slave" isn't aware that it can lock the file. They can also lead to a "master-master" configuration that results in index or journal corruption and ultimately message loss. Most of these problems stem from factors outside of ActiveMQ's control. For instance, a poorly optimized NFS client can cause locking data to become stale under load, leading to "no-master" downtime during failover.
+ActiveMQ high availability (HA) models ensure that a broker instance is always online and able to process message traffic. The two most common ActiveMQ HA models involve sharing a filesystem over a network. The purpose is to provide either LevelDB or KahaDB to the active and passive broker instances. These HA models require that an OS-level lock be obtained and maintained on a file in the LevelDB or KahaDB directories, called "lock." There are some problems with this ActiveMQ HA model. They can lead to a "no-master" situation, where the replica isn't aware that it can lock the file. They can also lead to a "master-master" configuration that results in index or journal corruption and ultimately message loss. Most of these problems stem from factors outside of ActiveMQ's control. For instance, a poorly optimized NFS client can cause locking data to become stale under load, leading to "no-master" downtime during failover.
Because most problems with this HA solution stem from inaccurate OS-level file locking, the ActiveMQ community [introduced the concept of a pluggable storage locker](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq) in version 5.7 of the broker. This approach allows a user to take advantage of a different means of the shared lock, using a row-level JDBC database lock as opposed to an OS-level filesystem lock. For support or consultancy on ActiveMQ HA architectures and deployments, you should [contact OpenLogic by Perforce](https://www.openlogic.com/contact-us).
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
Title: Manage resource groups - Azure portal description: Use Azure portal to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups.- Previously updated : 03/26/2019- Last updated : 08/16/2023 # Manage Azure resource groups by using the Azure portal
The resource group stores metadata about the resources. Therefore, when you spec
## Create resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Resource groups**
+1. Select **Resource groups**.
+1. Select **Create**.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted.":::
-3. Select **Add**.
-4. Enter the following values:
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png":::
- - **Subscription**: Select your Azure subscription.
- - **Resource group**: Enter a new resource group name.
+1. Enter the following values:
+
+ - **Subscription**: Select your Azure subscription.
+ - **Resource group**: Enter a new resource group name.
- **Region**: Select an Azure location, such as **Central US**.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region.":::
-5. Select **Review + Create**
-6. Select **Create**. It takes a few seconds to create a resource group.
-7. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification**(the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png":::
+1. Select **Review + Create**.
+1. Select **Create**. It takes a few seconds to create a resource group.
+1. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification** (the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel.":::
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png":::
## List resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. To list the resource groups, select **Resource groups**
-
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups.":::
+1. To list the resource groups, select **Resource groups**.
+1. To customize the information displayed for the resource groups, configure the filters. The following screenshot shows the additional columns you could add to the display:
-3. To customize the information displayed for the resource groups, select **Edit columns**. The following screenshot shows the additional columns you could add to the display:
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png":::
## Open resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Resource groups**.
-3. Select the resource group you want to open.
+1. Select **Resource groups**.
+1. Select the resource group you want to open.
## Delete resource groups 1. Open the resource group you want to delete. See [Open resource groups](#open-resource-groups).
-2. Select **Delete resource group**.
+1. Select **Delete resource group**.
- :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group.":::
+ :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group." lightbox="./media/manage-resource-groups-portal/delete-group.png":::
For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md).
You can move the resources in the group to another resource group. For more info
## Lock resource groups
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as an Azure subscription, a resource group, or a resource.
1. Open the resource group you want to lock. See [Open resource groups](#open-resource-groups).
-2. In the left pane, select **Locks**.
-3. To add a lock to the resource group, select **Add**.
-4. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only**, and **Delete**.
+1. In the left pane, select **Locks**.
+1. To add a lock to the resource group, select **Add**.
+1. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only** and **Delete**.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes.":::
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png":::
For more information, see [Lock resources to prevent unexpected changes](lock-resources.md).
azure-resource-manager App Service Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md
Title: Move Azure App Service resources across resource groups or subscriptions
description: Use Azure Resource Manager to move App Service resources to a new resource group or subscription. Previously updated : 03/31/2022 Last updated : 08/17/2023 # Move App Service resources to a new resource group or subscription
When you move a Web App across subscriptions, the following guidance applies:
- Uploaded or imported TLS/SSL certificates - App Service Environments - All App Service resources in the resource group must be moved together.-- App Service Environments can't be moved to a new resource group or subscription. However, you can move a web app and app service plan to a new subscription without moving the App Service Environment.
+- App Service Environments can't be moved to a new resource group or subscription.
+ - You can move a Web App and App Service plan hosted on an App Service Environment to a new subscription without moving the App Service Environment. The Web App and App Service plan that you move will always be associated with your initial App Service Environment. You can't move a Web App/App Service plan to a different App Service Environment.
+ - If you need to move a Web App and App Service plan to a new App Service Environment, you'll need to recreate these resources in your new App Service Environment. Consider using the [backup and restore feature](../../../app-service/manage-backup.md) as a way of recreating your resources in a different App Service Environment.
- You can move a certificate bound to a web app without deleting the TLS bindings, as long as the certificate is moved with all other resources in the resource group. However, you can't move a free App Service managed certificate. For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates). - App Service apps with private endpoints cannot be moved. Delete the private endpoint(s) and recreate them after the move. - App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, move it back to its original resource group. Then, move the resource across subscriptions. For help with finding the original resource group, see the next section.
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription.++ Last updated 04/24/2023
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 02/02/2023 Last updated : 08/15/2023 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
* automationAccounts
+## Microsoft.AzureArcData
+
+* SqlServerInstances
+ ## Microsoft.AzureStack * generateDeploymentLicense
Some resources have a limit on the number instances per region. This limit is di
* botServices - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+## Microsoft.Cdn
+
+* profiles - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+* profiles/networkpolicies - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+ ## Microsoft.Compute
+* diskEncryptionSets
* disks * galleries * galleries/images
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.DBforPostgreSQL * flexibleServers
-* serverGroups
* serverGroupsv2 * servers
-* serversv2
## Microsoft.DevTestLab
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.EdgeOrder
+* bootstrapConfigurations
* orderItems * orders
Some resources have a limit on the number instances per region. This limit is di
* clusters * namespaces
+## Microsoft.Fabric
+
+* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Fabric/UnlimitedResourceGroupQuota
+ ## Microsoft.GuestConfiguration * guestConfigurationAssignments
Some resources have a limit on the number instances per region. This limit is di
* machines * machines/extensions
+* machines/runcommands
## Microsoft.Logic
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.Network
-* applicationGatewayWebApplicationFirewallPolicies
* applicationSecurityGroups
-* bastionHosts
* customIpPrefixes * ddosProtectionPlans
-* dnsForwardingRulesets
-* dnsForwardingRulesets/forwardingRules
-* dnsForwardingRulesets/virtualNetworkLinks
-* dnsResolvers
-* dnsResolvers/inboundEndpoints
-* dnsResolvers/outboundEndpoints
-* dnszones
-* dnszones/A
-* dnszones/AAAA
-* dnszones/all
-* dnszones/CAA
-* dnszones/CNAME
-* dnszones/MX
-* dnszones/NS
-* dnszones/PTR
-* dnszones/recordsets
-* dnszones/SOA
-* dnszones/SRV
-* dnszones/TXT
-* expressRouteCrossConnections
* loadBalancers - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit * networkIntentPolicies * networkInterfaces * networkSecurityGroups
-* privateDnsZones
-* privateDnsZones/A
-* privateDnsZones/AAAA
-* privateDnsZones/all
-* privateDnsZones/CNAME
-* privateDnsZones/MX
-* privateDnsZones/PTR
-* privateDnsZones/SOA
-* privateDnsZones/SRV
-* privateDnsZones/TXT
-* privateDnsZones/virtualNetworkLinks
* privateEndpointRedirectMaps * privateEndpoints * privateLinkServices * publicIPAddresses * serviceEndpointPolicies
-* trafficmanagerprofiles
-* virtualNetworks/privateDnsZoneLinks
* virtualNetworkTaps
+## Microsoft.NetworkCloud
+
+* volumes
+
+## Microsoft.NetworkFunction
+
+* vpnBranches - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NetworkFunction/AllowNaasVpnAccess
+ ## Microsoft.NotificationHubs * namespaces - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit
Some resources have a limit on the number instances per region. This limit is di
* assignments * securityConnectors
+* securityConnectors/devops
## Microsoft.ServiceBus
Some resources have a limit on the number instances per region. This limit is di
* accounts/jobs * accounts/models * accounts/networks
+* accounts/secrets
* accounts/storageContainers ## Microsoft.Sql
Some resources have a limit on the number instances per region. This limit is di
* storageAccounts
-## Microsoft.StoragePool
-
-* diskPools
-* diskPools/iscsiTargets
- ## Microsoft.StreamAnalytics * streamingjobs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.StreamAnalytics/ASADisableARMResourcesPerRGLimit
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.Web * apiManagementAccounts/apis
+* certificates - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Web/DisableResourcesPerRGLimitForAPIMinWebApp
* sites ## Next steps
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
The connection string contains:
The following table lists all the valid names for key/value pairs in the connection string.
-| Key | Description | Required | Default value| Example value
-| | | | | |
-| Endpoint | The URL of your ASRS instance. | Y | N/A |`https://foo.service.signalr.net` |
-| Port | The port that your ASRS instance is listening on. on. | N| 80/443, depends on the endpoint URI schema | 8080|
-| Version| The version of given connection. string. | N| 1.0 | 1.0 |
-| ClientEndpoint | The URI of your reverse proxy, such as the App Gateway or API. Management | N| null | `https://foo.bar` |
-| AuthType | The auth type. By default the service uses the AccessKey authorize requests. **Case insensitive** | N | null | Azure, azure.msi, azure.app |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| Endpoint | The URL of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` |
+| Port | The port that your ASRS instance listens on. | N | 80/443, depending on the endpoint URI scheme | 8080 |
+| Version | The version of the given connection string. | N | 1.0 | 1.0 |
+| ClientEndpoint | The URI of your reverse proxy, such as Application Gateway or API Management. | N | null | `https://foo.bar` |
+| AuthType | The auth type. By default, the service uses the AccessKey to authorize requests. **Case insensitive** | N | null | Azure, azure.msi, azure.app |
### Use AccessKey The local auth method is used when `AuthType` is set to null.
-| Key | Description| Required | Default value | Example value|
-| | | | | |
-| AccessKey | The key string in base64 format for building access token. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| AccessKey | The key string, in base64 format, for building access tokens. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
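For example, a key-based connection string with placeholder values looks like the following:

```text
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
```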
-### Use Azure Active Directory
+### Use Microsoft Entra ID
-The Azure AD auth method is used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`.
+The Microsoft Entra ID auth method is used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`.
-| Key| Description| Required | Default value | Example value|
-| -- | | -- | - | |
-| ClientId | A GUID of an Azure application or an Azure identity. | N| null| `00000000-0000-0000-0000-000000000000` |
-| TenantId | A GUID of an organization in Azure Active Directory. | N| null| `00000000-0000-0000-0000-000000000000` |
-| ClientSecret | The password of an Azure application instance. | N| null| `***********************.****************` |
-| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N| null| `/usr/local/cert/app.cert` |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| ClientId | A GUID of an Azure application or an Azure identity. | N | null | `00000000-0000-0000-0000-000000000000` |
+| TenantId | A GUID of an organization in Microsoft Entra ID. | N | null | `00000000-0000-0000-0000-000000000000` |
+| ClientSecret | The password of an Azure application instance. | N | null | `***********************.****************` |
+| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N | null | `/usr/local/cert/app.cert` |
-A different `TokenCredential` is used to generate Azure AD tokens depending on the parameters you have given.
+A different `TokenCredential` is used to generate Microsoft Entra tokens depending on the parameters you have given.
- `type=azure`
A different `TokenCredential` is used to generate Azure AD tokens depending on t
1. A user-assigned managed identity is used if `clientId` has been given in connection string.
- ```
+ ```text
Endpoint=xxx;AuthType=azure.msi;ClientId=<client_id> ```
-
+ - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) is used. 1. A system-assigned managed identity is used.
A different `TokenCredential` is used to generate Azure AD tokens depending on t
- `type=azure.app`
- `clientId` and `tenantId` are required to use [Azure AD application with service principal](../active-directory/develop/howto-create-service-principal-portal.md).
+ `clientId` and `tenantId` are required to use [Microsoft Entra application with service principal](../active-directory/develop/howto-create-service-principal-portal.md).
1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) is used if `clientSecret` is given.
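As a rough illustration of this mapping (a sketch only, not the SDK's actual implementation; `ResolveCredential` and its parameters are hypothetical names), the selection logic can be thought of like this:

```csharp
// A sketch of how connection string parameters could map to Azure.Identity
// credentials; certificate-based auth is omitted for brevity.
using Azure.Core;
using Azure.Identity;

static TokenCredential ResolveCredential(
    string authType, string clientId, string tenantId, string clientSecret)
{
    switch (authType)
    {
        case "azure.msi":
            // User-assigned managed identity when ClientId is present,
            // otherwise the system-assigned managed identity.
            return string.IsNullOrEmpty(clientId)
                ? new ManagedIdentityCredential()
                : new ManagedIdentityCredential(clientId);
        case "azure.app":
            // Service principal; note that Azure.Identity's parameter order
            // is (tenantId, clientId, clientSecret).
            return new ClientSecretCredential(tenantId, clientId, clientSecret);
        default:
            // "azure": fall back to the default credential chain.
            return new DefaultAzureCredential();
    }
}
```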
You can also use Azure CLI to get the connection string:
az signalr key list -g <resource_group> -n <resource_name> ```
-## Connect with an Azure AD application
+## Connect with a Microsoft Entra application
-You can use an [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
+You can use a [Microsoft Entra application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
-To use Azure AD authentication, you need to remove `AccessKey` from connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Azure AD application, including client ID, client secret and tenant ID. The connection string looks as follows:
+To use Microsoft Entra authentication, you need to remove `AccessKey` from the connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Microsoft Entra application, including the client ID, client secret, and tenant ID. The connection string looks as follows:
```text Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.app;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0; ```
-For more information about how to authenticate using Azure AD application, see [Authorize from Azure Applications](signalr-howto-authorize-application.md).
+For more information about how to authenticate using a Microsoft Entra application, see [Authorize from Azure Applications](signalr-howto-authorize-application.md).
## Authenticate with Managed identity
-You can also use a system assigned or user assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
+You can also use a system-assigned or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
To use a system assigned identity, add `AuthType=azure.msi` to the connection string:
For more information about how to configure managed identity, see [Authorize fro
### Use the connection string generator
-It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Azure AD identities like `clientId`, `tenantId`, etc. To use the tool open your SignalR instance in Azure portal, select **Connection strings** from the left side menu.
+It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Microsoft Entra identities like `clientId`, `tenantId`, and so on. To use the tool, open your SignalR instance in the Azure portal and select **Connection strings** from the left side menu.
:::image type="content" source="media/concept-connection-string/generator.png" alt-text="Screenshot showing connection string generator of SignalR service in Azure portal.":::
-In this page you can choose different authentication types (access key, managed identity or Azure AD application) and input information like client endpoint, client ID, client secret, etc. Then connection string is automatically generated. You can copy and use it in your application.
+On this page, you can choose different authentication types (access key, managed identity, or Microsoft Entra application) and enter information like client endpoint, client ID, and client secret. The connection string is then generated automatically. You can copy it and use it in your application.
> [!NOTE] > Information you enter won't be saved after you leave the page. You will need to copy and save your connection string to use in your application.
-For more information about how access tokens are generated and validated, see [Authenticate via Azure Active Directory Token](signalr-reference-data-plane-rest-api.md#authenticate-via-azure-active-directory-token-azure-ad-token) in [Azure SignalR service data plane REST API reference](signalr-reference-data-plane-rest-api.md) .
+For more information about how access tokens are generated and validated, see [Authenticate via Microsoft Entra token](signalr-reference-data-plane-rest-api.md#authenticate-via-microsoft-entra-token) in [Azure SignalR service data plane REST API reference](signalr-reference-data-plane-rest-api.md) .
## Client and server endpoints A connection string contains the HTTP endpoint for the app server to connect to SignalR service. The server returns the HTTP endpoint to the clients in a negotiate response, so the client can connect to the service.
-In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security.
+In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security.
In such cases, the client needs to connect to an endpoint different from the SignalR service. Instead of manually replacing the endpoint on the client side, you can add `ClientEndpoint` to the connection string:
services.AddSignalR().AddAzureSignalR("<connection_string>");
Or you can call `AddAzureSignalR()` without any arguments. The service SDK returns the connection string from a config named `Azure:SignalR:ConnectionString` in your [configuration provider](/dotnet/core/extensions/configuration-providers).
-In a local development environment, the configuration is stored in a file (*appsettings.json* or *secrets.json*) or environment variables. You can use one of the following ways to configure connection string:
+In a local development environment, the configuration is stored in a file (_appsettings.json_ or _secrets.json_) or environment variables. You can use one of the following ways to configure connection string:
- Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)-- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).
+- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).
In a production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up configuration provider for those services. > [!NOTE]
-> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code You should read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`.
+> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code. You should read the connection string from a secret store like Key Vault and pass it to `AddAzureSignalR()`.
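For example, here's a minimal sketch of reading the connection string from Key Vault at startup. It assumes the `Azure.Identity` and `Azure.Security.KeyVault.Secrets` packages, plus a hypothetical vault URI and secret name:

```csharp
// A sketch only: the vault URI and secret name below are placeholders.
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Inside Startup.ConfigureServices(IServiceCollection services):
var vault = new SecretClient(
    new Uri("https://<vault-name>.vault.azure.net/"),
    new DefaultAzureCredential());

// Read the secret once at startup instead of hardcoding the value.
string connectionString = vault.GetSecret("SignalRConnectionString").Value.Value;

services.AddSignalR().AddAzureSignalR(connectionString);
```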
### Configure multiple connection strings
There are also two ways to configure multiple instances:
```text Azure:SignalR:ConnectionString:<name>:<type>
- ```
+ ```
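For example, two instances named `eastus` and `backup` might be configured as follows (the instance names and connection string values are placeholders):

```text
Azure:SignalR:ConnectionString:eastus:primary = <connection_string_1>
Azure:SignalR:ConnectionString:backup:secondary = <connection_string_2>
```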
azure-signalr Howto Disable Local Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-disable-local-auth.md
Title: Disable local (access key) authentication with Azure SignalR Service
-description: This article provides information about how to disable access key authentication and use only Azure AD authentication with Azure SignalR Service.
+description: This article provides information about how to disable access key authentication and use only Microsoft Entra authorization with Azure SignalR Service.
# Disable local (access key) authentication with Azure SignalR Service
-There are two ways to authenticate to Azure SignalR Service resources: Azure Active Directory (Azure AD) and Access Key. Azure AD provides superior security and ease of use over access key. With Azure AD, thereΓÇÖs no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Azure AD with your Azure SignalR Service resources when possible.
+There are two ways to authenticate to Azure SignalR Service resources: Microsoft Entra ID and Access Key. Microsoft Entra ID offers superior security and ease of use compared to the access key method.
+With Microsoft Entra ID, you do not need to store tokens in your code, reducing the risk of potential security vulnerabilities.
+We highly recommend using Microsoft Entra ID for your Azure SignalR Service resources whenever possible.
> [!IMPORTANT]
-> Disabling local authentication can have following influences.
-> - The current set of access keys will be permanently deleted.
-> - Tokens signed with current set of access keys will become unavailable.
+> Disabling local authentication can have the following consequences.
+>
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with the current set of access keys will become unavailable.
## Use Azure portal
You can disable local authentication by setting `disableLocalAuth` property to t
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "resource_name": {
- "defaultValue": "test-for-disable-aad",
- "type": "String"
- }
- },
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.SignalRService/SignalR",
- "apiVersion": "2022-08-01-preview",
- "name": "[parameters('resource_name')]",
- "location": "eastus",
- "sku": {
- "name": "Premium_P1",
- "tier": "Premium",
- "size": "P1",
- "capacity": 1
- },
- "kind": "SignalR",
- "properties": {
- "tls": {
- "clientCertEnabled": false
- },
- "features": [
- {
- "flag": "ServiceMode",
- "value": "Default",
- "properties": {}
- },
- {
- "flag": "EnableConnectivityLogs",
- "value": "True",
- "properties": {}
- }
- ],
- "cors": {
- "allowedOrigins": [
- "*"
- ]
- },
- "serverless": {
- "connectionTimeoutInSeconds": 30
- },
- "upstream": {},
- "networkACLs": {
- "defaultAction": "Deny",
- "publicNetwork": {
- "allow": [
- "ServerConnection",
- "ClientConnection",
- "RESTAPI",
- "Trace"
- ]
- },
- "privateEndpoints": []
- },
- "publicNetworkAccess": "Enabled",
- "disableLocalAuth": true,
- "disableAadAuth": false
- }
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "resource_name": {
+ "defaultValue": "test-for-disable-aad",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.SignalRService/SignalR",
+ "apiVersion": "2022-08-01-preview",
+ "name": "[parameters('resource_name')]",
+ "location": "eastus",
+ "sku": {
+ "name": "Premium_P1",
+ "tier": "Premium",
+ "size": "P1",
+ "capacity": 1
+ },
+ "kind": "SignalR",
+ "properties": {
+ "tls": {
+ "clientCertEnabled": false
+ },
+ "features": [
+ {
+ "flag": "ServiceMode",
+ "value": "Default",
+ "properties": {}
+ },
+ {
+ "flag": "EnableConnectivityLogs",
+ "value": "True",
+ "properties": {}
+ }
+ ],
+ "cors": {
+ "allowedOrigins": ["*"]
+ },
+ "serverless": {
+ "connectionTimeoutInSeconds": 30
+ },
+ "upstream": {},
+ "networkACLs": {
+ "defaultAction": "Deny",
+ "publicNetwork": {
+ "allow": [
+ "ServerConnection",
+ "ClientConnection",
+ "RESTAPI",
+ "Trace"
+ ]
+ },
+ "privateEndpoints": []
+ },
+ "publicNetworkAccess": "Enabled",
+ "disableLocalAuth": true,
+ "disableAadAuth": false
+ }
+ }
+ ]
} ```
You can assign the [Azure SignalR Service should have local authentication metho
See the following docs to learn about authentication methods. -- [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)
+- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md)
- [Authenticate with Azure applications](./signalr-howto-authorize-application.md) - [Authenticate with managed identities](./signalr-howto-authorize-managed-identity.md)
azure-signalr Howto Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-use-managed-identity.md
# Managed identities for Azure SignalR Service
-In Azure SignalR Service, you can use a managed identity from Azure Active Directory to:
+In Azure SignalR Service, you can use a managed identity from Microsoft Entra ID to:
- Obtain access tokens - Access secrets in Azure Key Vault
This article shows you how to create a managed identity for Azure SignalR Servic
To use a managed identity, you must have the following items: - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- An Azure SignalR resource.
+- An Azure SignalR resource.
- Upstream resources that you want to access. For example, an Azure Key Vault resource. - An Azure Function app. - ## Add a managed identity to Azure SignalR Service
-You can add a managed identity to Azure SignalR Service in the Azure portal or the Azure CLI. This article shows you how to add a managed identity to Azure SignalR Service in the Azure portal.
+You can add a managed identity to Azure SignalR Service in the Azure portal or the Azure CLI. This article shows you how to add a managed identity to Azure SignalR Service in the Azure portal.
### Add a system-assigned identity
To add a system-managed identity to your SignalR instance:
1. Browse to your SignalR instance in the Azure portal. 1. Select **Identity**.
-1. On the **System assigned** tab, switch **Status** to **On**.
+1. On the **System assigned** tab, switch **Status** to **On**.
1. Select **Save**.
- :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal":::
+ :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Screenshot showing Add a system-assigned identity in the portal.":::
1. Select **Yes** to confirm the change.
To add a user-assigned identity to your SignalR instance, you need to create the
1. On the **User assigned** tab, select **Add**. 1. Select the identity from the **User assigned managed identities** drop down menu. 1. Select **Add**.
- :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal":::
+ :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Screenshot showing Add a user-assigned identity in the portal.":::
## Use a managed identity in serverless scenarios
-Azure SignalR Service is a fully managed service. It uses a managed identity to obtain an access token. In serverless scenarios, the service adds the access token into the `Authorization` header in an upstream request.
+Azure SignalR Service is a fully managed service. It uses a managed identity to obtain an access token. In serverless scenarios, the service adds the access token into the `Authorization` header in an upstream request.
### Enable managed identity authentication in upstream settings
Once you've added a [system-assigned identity](#add-a-system-assigned-identity)
1. Browse to your SignalR instance. 1. Select **Settings** from the menu. 1. Select the **Serverless** service mode.
-1. Enter the upstream endpoint URL pattern in the **Add an upstream URL pattern** text box. See [URL template settings](concept-upstream.md#url-template-settings)
+1. Enter the upstream endpoint URL pattern in the **Add an upstream URL pattern** text box. See [URL template settings](concept-upstream.md#url-template-settings).
1. Select **Add one Upstream Setting** and select any asterisk to go to **Upstream Settings**.
- :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot of Azure SignalR service Settings.":::
+ :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot of Azure SignalR service Settings.":::
-1. Configure your upstream endpoint settings.
+1. Configure your upstream endpoint settings.
- :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of Azure SignalR service Upstream settings.":::
+ :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of Azure SignalR service Upstream settings.":::
1. In the managed identity authentication settings, for **Resource**, you can specify the target resource. The resource will become an `aud` claim in the obtained access token, which can be used as a part of validation in your upstream endpoints. The resource can be one of the following formats:
- - Empty
- - Application (client) ID of the service principal
- - Application ID URI of the service principal
- - Resource ID of an Azure service (For a list of Azure services that support managed identities, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).)
- > [!NOTE]
- > If you manually validate an access token your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. When you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource format that the service provider requests.
+ - Empty
+ - Application (client) ID of the service principal
+ - Application ID URI of the service principal
+ - Resource ID of an Azure service (For a list of Azure services that support managed identities, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).)
+
+ > [!NOTE]
+ > If you manually validate an access token in your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. When you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource format that the service provider requests.
### Validate access tokens
The token in the `Authorization` header is a [Microsoft identity platform access
To validate access tokens, your app should also validate the audience and the signing tokens. These tokens need to be validated against the values in the OpenID discovery document. For example, see the [tenant-independent version of the document](https://login.microsoftonline.com/common/.well-known/openid-configuration).
-The Azure Active Directory (Azure AD) middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice.
+The Microsoft Entra ID middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice.
-Libraries and code samples that show how to handle token validation are available. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language. For more information about Azure AD authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
+Libraries and code samples that show how to handle token validation are available. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language. For more information about Microsoft Entra authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
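As an illustration, here's a minimal sketch of validating these tokens in an ASP.NET Core upstream app, assuming the `Microsoft.AspNetCore.Authentication.JwtBearer` package; `<tenant-id>` and `<resource>` are placeholders for your tenant ID and the **Resource** value configured in the upstream **Auth** settings:

```csharp
// A sketch only, not a complete app: wire up JWT bearer validation so the
// issuer and audience of incoming upstream requests are checked.
using Microsoft.AspNetCore.Authentication.JwtBearer;

services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Must match the `iss` claim; managed identity tokens currently use
        // the v1 issuer format.
        options.Authority = "https://sts.windows.net/<tenant-id>/";
        options.TokenValidationParameters.ValidAudience = "<resource>";
    });
```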
#### Authentication in Function App
You can easily set access validation for a Function App without code changes usi
1. Select **Authentication** from the menu. 1. Select **Add identity provider**. 1. In the **Basics** tab, select **Microsoft** from the **Identity provider** dropdown.
-1. Select **Log in with Azure Active Directory** in **Action to take when request is not authenticated**.
-1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration. For more information on enabling Azure AD provider, see [Configure your App Service or Azure Functions app to use Azure AD login](../app-service/configure-authentication-provider-aad.md)
- :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Function Aad":::
+1. Select **Log in with Microsoft Entra ID** in **Action to take when request is not authenticated**.
+1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration. For more information on enabling the Microsoft Entra ID provider, see [Configure your App Service or Azure Functions app to log in with Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md).
+ :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Screenshot of the Microsoft identity provider settings in a Function App.":::
1. Navigate to SignalR Service and follow the [steps](howto-use-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned identity or user-assigned identity. 1. Go to **Upstream settings** in SignalR Service and choose **Use Managed Identity** and **Select from existing Applications**. Select the application you created previously. After you configure these settings, the Function App will reject requests without an access token in the header. > [!IMPORTANT]
-> To pass the authentication, the *Issuer Url* must match the *iss* claim in token. Currently, we only support v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)).
+> To pass the authentication, the _Issuer Url_ must match the _iss_ claim in the token. Currently, we only support the v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)).
-To verify the *Issuer Url* format in your Function app:
+To verify the _Issuer Url_ format in your Function app:
1. Go to the Function app in the portal. 1. Select **Authentication**. 1. Select **Identity provider**. 1. Select **Edit**. 1. Select **Issuer Url**.
-1. Verify that the *Issuer Url* has the format `https://sts.windows.net/<tenant-id>/`.
+1. Verify that the _Issuer Url_ has the format `https://sts.windows.net/<tenant-id>/`.
## Use a managed identity for Key Vault reference
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
ms.devlang: csharp + # Azure SignalR Service authentication
-This tutorial builds on the chat room application introduced in the quickstart. If you have not completed [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md), complete that exercise first.
+This tutorial builds on the chat room application introduced in the quickstart. If you haven't completed [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md), complete that exercise first.
-In this tutorial, you'll learn how to implement your own authentication and integrate it with the Microsoft Azure SignalR Service.
+In this tutorial, you learn how to create your own authentication method and integrate it with the Microsoft Azure SignalR Service.
-The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach is not very useful in real-world applications where a rogue user would impersonate others to access sensitive data.
+The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach is ineffective in real-world applications, because it doesn't prevent malicious users from assuming false identities to gain access to sensitive data.
-[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you will use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After authenticating a GitHub account, the account information will be added as a cookie to be used by the web client to authenticate.
+[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After authenticating a GitHub account, the account information is added as a cookie to be used by the web client to authenticate.
For more information on the OAuth authentication APIs provided through GitHub, see [Basics of Authentication](https://developer.github.com/v3/guides/basics-of-authentication/).
The code for this tutorial is available for download in the [AzureSignalR-sample
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Register a new OAuth app with your GitHub account
-> * Add an authentication controller to support GitHub authentication
-> * Deploy your ASP.NET Core web app to Azure
+>
+> - Register a new OAuth app with your GitHub account
+> - Add an authentication controller to support GitHub authentication
+> - Deploy your ASP.NET Core web app to Azure
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
To complete this tutorial, you must have the following prerequisites:
1. Open a web browser and navigate to `https://github.com` and sign into your account.
-2. For your account, navigate to **Settings** > **Developer settings** and click **Register a new application**, or **New OAuth App** under *OAuth Apps*.
+2. For your account, navigate to **Settings** > **Developer settings** and select **Register a new application**, or **New OAuth App** under _OAuth Apps_.
-3. Use the following settings for the new OAuth App, then click **Register application**:
+3. Use the following settings for the new OAuth App, then select **Register application**:
- | Setting Name | Suggested Value | Description |
- | | | -- |
- | Application name | *Azure SignalR Chat* | The GitHub user should be able to recognize and trust the app they are authenticating with. |
- | Homepage URL | `http://localhost:5000` | |
- | Application description | *A chat room sample using the Azure SignalR Service with GitHub authentication* | A useful description of the application that will help your application users understand the context of the authentication being used. |
- | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the *AspNet.Security.OAuth.GitHub* package, */signin-github*. |
+ | Setting Name | Suggested Value | Description |
+ | -- | - | |
+ | Application name | _Azure SignalR Chat_ | The GitHub user should be able to recognize and trust the app they're authenticating with. |
+ | Homepage URL | `http://localhost:5000` | |
+ | Application description | _A chat room sample using the Azure SignalR Service with GitHub authentication_ | A useful description of the application that will help your application users understand the context of the authentication being used. |
+ | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the _AspNet.Security.OAuth.GitHub_ package, _/signin-github_. |
-4. Once the new OAuth app registration is complete, add the *Client ID* and *Client Secret* to Secret Manager using the following commands. Replace *Your_GitHub_Client_Id* and *Your_GitHub_Client_Secret* with the values for your OAuth app.
+4. Once the new OAuth app registration is complete, add the _Client ID_ and _Client Secret_ to Secret Manager using the following commands. Replace _Your_GitHub_Client_Id_ and _Your_GitHub_Client_Secret_ with the values for your OAuth app.
- ```dotnetcli
- dotnet user-secrets set GitHubClientId Your_GitHub_Client_Id
- dotnet user-secrets set GitHubClientSecret Your_GitHub_Client_Secret
- ```
+ ```dotnetcli
+ dotnet user-secrets set GitHubClientId Your_GitHub_Client_Id
+ dotnet user-secrets set GitHubClientSecret Your_GitHub_Client_Secret
+ ```
## Implement the OAuth flow ### Update the Startup class to support GitHub authentication
-1. Add a reference to the latest *Microsoft.AspNetCore.Authentication.Cookies* and *AspNet.Security.OAuth.GitHub* packages and restore all packages.
-
- ```dotnetcli
- dotnet add package Microsoft.AspNetCore.Authentication.Cookies -v 2.1.0-rc1-30656
- dotnet add package AspNet.Security.OAuth.GitHub -v 2.0.0-rc2-final
- dotnet restore
- ```
-
-1. Open *Startup.cs*, and add `using` statements for the following namespaces:
-
- ```csharp
- using System.Net.Http;
- using System.Net.Http.Headers;
- using System.Security.Claims;
- using Microsoft.AspNetCore.Authentication.Cookies;
- using Microsoft.AspNetCore.Authentication.OAuth;
- using Newtonsoft.Json.Linq;
- ```
-
-2. At the top of the `Startup` class, add constants for the Secret Manager keys that hold the GitHub OAuth app secrets.
-
- ```csharp
- private const string GitHubClientId = "GitHubClientId";
- private const string GitHubClientSecret = "GitHubClientSecret";
- ```
-
-3. Add the following code to the `ConfigureServices` method to support authentication with the GitHub OAuth app:
-
- ```csharp
- services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
- .AddCookie()
- .AddGitHub(options =>
- {
- options.ClientId = Configuration[GitHubClientId];
- options.ClientSecret = Configuration[GitHubClientSecret];
- options.Scope.Add("user:email");
- options.Events = new OAuthEvents
- {
- OnCreatingTicket = GetUserCompanyInfoAsync
- };
- });
- ```
-
-4. Add the `GetUserCompanyInfoAsync` helper method to the `Startup` class.
-
- ```csharp
- private static async Task GetUserCompanyInfoAsync(OAuthCreatingTicketContext context)
- {
- var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint);
- request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
- request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken);
-
- var response = await context.Backchannel.SendAsync(request,
- HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted);
-
- var user = JObject.Parse(await response.Content.ReadAsStringAsync());
- if (user.ContainsKey("company"))
- {
- var company = user["company"].ToString();
- var companyIdentity = new ClaimsIdentity(new[]
- {
- new Claim("Company", company)
- });
- context.Principal.AddIdentity(companyIdentity);
- }
- }
- ```
-
-5. Update the `Configure` method of the Startup class with the following line of code, and save the file.
-
- ```csharp
- app.UseAuthentication();
- ```
+1. Add a reference to the latest _Microsoft.AspNetCore.Authentication.Cookies_ and _AspNet.Security.OAuth.GitHub_ packages and restore all packages.
+
+ ```dotnetcli
+ dotnet add package Microsoft.AspNetCore.Authentication.Cookies -v 2.1.0-rc1-30656
+ dotnet add package AspNet.Security.OAuth.GitHub -v 2.0.0-rc2-final
+ dotnet restore
+ ```
+
+1. Open _Startup.cs_, and add `using` statements for the following namespaces:
+
+ ```csharp
+ using System.Net.Http;
+ using System.Net.Http.Headers;
+ using System.Security.Claims;
+ using Microsoft.AspNetCore.Authentication.Cookies;
+ using Microsoft.AspNetCore.Authentication.OAuth;
+ using Newtonsoft.Json.Linq;
+ ```
+
+1. At the top of the `Startup` class, add constants for the Secret Manager keys that hold the GitHub OAuth app secrets.
+
+ ```csharp
+ private const string GitHubClientId = "GitHubClientId";
+ private const string GitHubClientSecret = "GitHubClientSecret";
+ ```
+
+1. Add the following code to the `ConfigureServices` method to support authentication with the GitHub OAuth app:
+
+ ```csharp
+ services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
+ .AddCookie()
+ .AddGitHub(options =>
+ {
+ options.ClientId = Configuration[GitHubClientId];
+ options.ClientSecret = Configuration[GitHubClientSecret];
+ options.Scope.Add("user:email");
+ options.Events = new OAuthEvents
+ {
+ OnCreatingTicket = GetUserCompanyInfoAsync
+ };
+ });
+ ```
+
+1. Add the `GetUserCompanyInfoAsync` helper method to the `Startup` class.
+
+ ```csharp
+ private static async Task GetUserCompanyInfoAsync(OAuthCreatingTicketContext context)
+ {
+ var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint);
+ request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
+ request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken);
+
+ var response = await context.Backchannel.SendAsync(request,
+ HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted);
+
+ var user = JObject.Parse(await response.Content.ReadAsStringAsync());
+ if (user.ContainsKey("company"))
+ {
+ var company = user["company"].ToString();
+ var companyIdentity = new ClaimsIdentity(new[]
+ {
+ new Claim("Company", company)
+ });
+ context.Principal.AddIdentity(companyIdentity);
+ }
+ }
+ ```
+
+1. Update the `Configure` method of the Startup class with the following line of code, and save the file.
+
+ ```csharp
+ app.UseAuthentication();
+ ```
### Add an authentication controller In this section, you will implement a `Login` API that authenticates clients using the GitHub OAuth app. Once authenticated, the API will add a cookie to the web client response before redirecting the client back to the chat app. That cookie will then be used to identify the client.
-1. Add a new controller code file to the *chattest\Controllers* directory. Name the file *AuthController.cs*.
-
-2. Add the following code for the authentication controller. Make sure to update the namespace, if your project directory was not *chattest*:
-
- ```csharp
- using AspNet.Security.OAuth.GitHub;
- using Microsoft.AspNetCore.Authentication;
- using Microsoft.AspNetCore.Mvc;
-
- namespace chattest.Controllers
- {
- [Route("/")]
- public class AuthController : Controller
- {
- [HttpGet("login")]
- public IActionResult Login()
- {
- if (!User.Identity.IsAuthenticated)
- {
- return Challenge(GitHubAuthenticationDefaults.AuthenticationScheme);
- }
-
- HttpContext.Response.Cookies.Append("githubchat_username", User.Identity.Name);
- HttpContext.SignInAsync(User);
- return Redirect("/");
- }
- }
- }
- ```
+1. Add a new controller code file to the _chattest\Controllers_ directory. Name the file _AuthController.cs_.
+
+2. Add the following code for the authentication controller. Make sure to update the namespace, if your project directory wasn't _chattest_:
+
+ ```csharp
+ using AspNet.Security.OAuth.GitHub;
+ using Microsoft.AspNetCore.Authentication;
+ using Microsoft.AspNetCore.Mvc;
+
+ namespace chattest.Controllers
+ {
+ [Route("/")]
+ public class AuthController : Controller
+ {
+ [HttpGet("login")]
+ public IActionResult Login()
+ {
+ if (!User.Identity.IsAuthenticated)
+ {
+ return Challenge(GitHubAuthenticationDefaults.AuthenticationScheme);
+ }
+
+ HttpContext.Response.Cookies.Append("githubchat_username", User.Identity.Name);
+ HttpContext.SignInAsync(User);
+ return Redirect("/");
+ }
+ }
+ }
+ ```
3. Save your changes. ### Update the Hub class
-By default when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token is not associated with an authenticated identity. This access is actually anonymous access.
+By default, when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token isn't associated with an authenticated identity, so the access is effectively anonymous.
In this section, you will turn on real authentication by adding the `Authorize` attribute to the hub class, and updating the hub methods to read the username from the authenticated user's claim.
-1. Open *Hub\Chat.cs* and add references to these namespaces:
+1. Open _Hub\Chat.cs_ and add references to these namespaces:
- ```csharp
- using System.Threading.Tasks;
- using Microsoft.AspNetCore.Authorization;
- ```
+ ```csharp
+ using System.Threading.Tasks;
+ using Microsoft.AspNetCore.Authorization;
+ ```
2. Update the hub code as shown below. This code adds the `Authorize` attribute to the `Chat` class, and uses the user's authenticated identity in the hub methods. Also, the `OnConnectedAsync` method is added, which will log a system message to the chat room each time a new client connects.
- ```csharp
- [Authorize]
- public class Chat : Hub
- {
- public override Task OnConnectedAsync()
- {
- return Clients.All.SendAsync("broadcastMessage", "_SYSTEM_", $"{Context.User.Identity.Name} JOINED");
- }
-
- // Uncomment this line to only allow user in Microsoft to send message
- //[Authorize(Policy = "Microsoft_Only")]
- public void BroadcastMessage(string message)
- {
- Clients.All.SendAsync("broadcastMessage", Context.User.Identity.Name, message);
- }
-
- public void Echo(string message)
- {
- var echoMessage = $"{message} (echo from server)";
- Clients.Client(Context.ConnectionId).SendAsync("echo", Context.User.Identity.Name, echoMessage);
- }
- }
- ```
+ ```csharp
+ [Authorize]
+ public class Chat : Hub
+ {
+ public override Task OnConnectedAsync()
+ {
+ return Clients.All.SendAsync("broadcastMessage", "_SYSTEM_", $"{Context.User.Identity.Name} JOINED");
+ }
+
+    // Uncomment the following line to allow only users in Microsoft to send messages
+ //[Authorize(Policy = "Microsoft_Only")]
+ public void BroadcastMessage(string message)
+ {
+ Clients.All.SendAsync("broadcastMessage", Context.User.Identity.Name, message);
+ }
+
+ public void Echo(string message)
+ {
+ var echoMessage = $"{message} (echo from server)";
+ Clients.Client(Context.ConnectionId).SendAsync("echo", Context.User.Identity.Name, echoMessage);
+ }
+ }
+ ```
3. Save your changes. ### Update the web client code
-1. Open *wwwroot\index.html* and replace the code that prompts for the username with code to use the cookie returned by the authentication controller.
-
-   Remove the following code from *index.html*:
-
- ```javascript
- // Get the user name and store it to prepend to messages.
- var username = generateRandomName();
- var promptMessage = 'Enter your name:';
- do {
- username = prompt(promptMessage, username);
- if (!username || username.startsWith('_') || username.indexOf('<') > -1 || username.indexOf('>') > -1) {
- username = '';
- promptMessage = 'Invalid input. Enter your name:';
- }
- } while(!username)
- ```
-
- Add the following code in place of the code above to use the cookie:
-
- ```javascript
- // Get the user name cookie.
- function getCookie(key) {
- var cookies = document.cookie.split(';').map(c => c.trim());
- for (var i = 0; i < cookies.length; i++) {
- if (cookies[i].startsWith(key + '=')) return unescape(cookies[i].slice(key.length + 1));
- }
- return '';
- }
- var username = getCookie('githubchat_username');
- ```
+1. Open _wwwroot\index.html_ and replace the code that prompts for the username with code to use the cookie returned by the authentication controller.
+
+   Remove the following code from _index.html_:
+
+ ```javascript
+ // Get the user name and store it to prepend to messages.
+ var username = generateRandomName();
+ var promptMessage = "Enter your name:";
+ do {
+ username = prompt(promptMessage, username);
+ if (
+ !username ||
+ username.startsWith("_") ||
+ username.indexOf("<") > -1 ||
+ username.indexOf(">") > -1
+ ) {
+ username = "";
+ promptMessage = "Invalid input. Enter your name:";
+ }
+ } while (!username);
+ ```
+
+ Add the following code in place of the code above to use the cookie:
+
+ ```javascript
+ // Get the user name cookie.
+ function getCookie(key) {
+ var cookies = document.cookie.split(";").map((c) => c.trim());
+ for (var i = 0; i < cookies.length; i++) {
+ if (cookies[i].startsWith(key + "="))
+ return unescape(cookies[i].slice(key.length + 1));
+ }
+ return "";
+ }
+ var username = getCookie("githubchat_username");
+ ```
2. Just beneath the line of code you added to use the cookie, add the following definition for the `appendMessage` function:
- ```javascript
- function appendMessage(encodedName, encodedMsg) {
- var messageEntry = createMessageEntry(encodedName, encodedMsg);
- var messageBox = document.getElementById('messages');
- messageBox.appendChild(messageEntry);
- messageBox.scrollTop = messageBox.scrollHeight;
- }
- ```
+ ```javascript
+ function appendMessage(encodedName, encodedMsg) {
+ var messageEntry = createMessageEntry(encodedName, encodedMsg);
+ var messageBox = document.getElementById("messages");
+ messageBox.appendChild(messageEntry);
+ messageBox.scrollTop = messageBox.scrollHeight;
+ }
+ ```
3. Update the `bindConnectionMessage` and `onConnected` functions with the following code to use `appendMessage`.
- ```javascript
- function bindConnectionMessage(connection) {
- var messageCallback = function(name, message) {
- if (!message) return;
- // Html encode display name and message.
- var encodedName = name;
- var encodedMsg = message.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
- appendMessage(encodedName, encodedMsg);
- };
- // Create a function that the hub can call to broadcast messages.
- connection.on('broadcastMessage', messageCallback);
- connection.on('echo', messageCallback);
- connection.onclose(onConnectionError);
- }
-
- function onConnected(connection) {
- console.log('connection started');
- document.getElementById('sendmessage').addEventListener('click', function (event) {
- // Call the broadcastMessage method on the hub.
- if (messageInput.value) {
- connection
- .invoke('broadcastMessage', messageInput.value)
- .catch(e => appendMessage('_BROADCAST_', e.message));
- }
-
- // Clear text box and reset focus for next comment.
- messageInput.value = '';
- messageInput.focus();
- event.preventDefault();
- });
- document.getElementById('message').addEventListener('keypress', function (event) {
- if (event.keyCode === 13) {
- event.preventDefault();
- document.getElementById('sendmessage').click();
- return false;
- }
- });
- document.getElementById('echo').addEventListener('click', function (event) {
- // Call the echo method on the hub.
- connection.send('echo', messageInput.value);
-
- // Clear text box and reset focus for next comment.
- messageInput.value = '';
- messageInput.focus();
- event.preventDefault();
- });
- }
- ```
-
-4. At the bottom of *index.html*, update the error handler for `connection.start()` as shown below to prompt the user to log in.
-
- ```javascript
- connection.start()
- .then(function () {
- onConnected(connection);
- })
- .catch(function (error) {
- if (error) {
- if (error.message) {
- console.error(error.message);
- }
- if (error.statusCode && error.statusCode === 401) {
- appendMessage('_BROADCAST_', 'You\'re not logged in. Click <a href="/login">here</a> to login with GitHub.');
- }
- }
- });
- ```
+ ```javascript
+ function bindConnectionMessage(connection) {
+ var messageCallback = function (name, message) {
+ if (!message) return;
+ // Html encode display name and message.
+ var encodedName = name;
+ var encodedMsg = message
+ .replace(/&/g, "&amp;")
+ .replace(/</g, "&lt;")
+ .replace(/>/g, "&gt;");
+ appendMessage(encodedName, encodedMsg);
+ };
+ // Create a function that the hub can call to broadcast messages.
+ connection.on("broadcastMessage", messageCallback);
+ connection.on("echo", messageCallback);
+ connection.onclose(onConnectionError);
+ }
+
+ function onConnected(connection) {
+ console.log("connection started");
+ document
+ .getElementById("sendmessage")
+ .addEventListener("click", function (event) {
+ // Call the broadcastMessage method on the hub.
+ if (messageInput.value) {
+ connection
+ .invoke("broadcastMessage", messageInput.value)
+ .catch((e) => appendMessage("_BROADCAST_", e.message));
+ }
+
+ // Clear text box and reset focus for next comment.
+ messageInput.value = "";
+ messageInput.focus();
+ event.preventDefault();
+ });
+ document
+ .getElementById("message")
+ .addEventListener("keypress", function (event) {
+ if (event.keyCode === 13) {
+ event.preventDefault();
+ document.getElementById("sendmessage").click();
+ return false;
+ }
+ });
+ document
+ .getElementById("echo")
+ .addEventListener("click", function (event) {
+ // Call the echo method on the hub.
+ connection.send("echo", messageInput.value);
+
+ // Clear text box and reset focus for next comment.
+ messageInput.value = "";
+ messageInput.focus();
+ event.preventDefault();
+ });
+ }
+ ```
+
+4. At the bottom of _index.html_, update the error handler for `connection.start()` as shown below to prompt the user to sign in.
+
+ ```javascript
+ connection
+ .start()
+ .then(function () {
+ onConnected(connection);
+ })
+ .catch(function (error) {
+ if (error) {
+ if (error.message) {
+ console.error(error.message);
+ }
+ if (error.statusCode && error.statusCode === 401) {
+ appendMessage(
+ "_BROADCAST_",
+ 'You\'re not logged in. Click <a href="/login">here</a> to login with GitHub.'
+ );
+ }
+ }
+ });
+ ```
5. Save your changes.
In this section, you will turn on real authentication by adding the `Authorize`
2. To build the app using the .NET Core CLI, execute the following command in the command shell:
- ```dotnetcli
- dotnet build
- ```
+ ```dotnetcli
+ dotnet build
+ ```
3. Once the build successfully completes, execute the following command to run the web app locally:
- ```dotnetcli
- dotnet run
- ```
+ ```dotnetcli
+ dotnet run
+ ```
- By default, the app will be hosted locally on port 5000:
+ The app is hosted locally on port 5000 by default:
- ```output
- E:\Testing\chattest>dotnet run
- Hosting environment: Production
- Content root path: E:\Testing\chattest
- Now listening on: http://localhost:5000
- Application started. Press Ctrl+C to shut down.
- ```
+ ```output
+ E:\Testing\chattest>dotnet run
+ Hosting environment: Production
+ Content root path: E:\Testing\chattest
+ Now listening on: http://localhost:5000
+ Application started. Press Ctrl+C to shut down.
+ ```
-4. Launch a browser window and navigate to `http://localhost:5000`. Click the **here** link at the top to log in with GitHub.
+4. Launch a browser window and navigate to `http://localhost:5000`. Select the **here** link at the top to sign in with GitHub.
- ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
+ ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
- You will be prompted to authorize the chat app's access to your GitHub account. Click the **Authorize** button.
+ You will be prompted to authorize the chat app's access to your GitHub account. Select the **Authorize** button.
- ![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png)
+ ![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png)
- You will be redirected back to the chat application and logged in with your GitHub account name. The web application determined you account name by authenticating you using the new authentication you added.
+ You will be redirected back to the chat application and logged in with your GitHub account name. The web application determined your account name by authenticating you using the new authentication you added.
- ![Account identified](media/signalr-concept-authenticate-oauth/signalr-oauth-account-identified.png)
+ ![Account identified](media/signalr-concept-authenticate-oauth/signalr-oauth-account-identified.png)
- Now that the chat app performs authentication with GitHub and stores the authentication information as cookies, you should deploy it to Azure so other users can authenticate with their accounts and communicate from other workstations.
+   Now that the chat app performs authentication with GitHub and stores the authentication information as cookies, the next step is to deploy it to Azure. Deploying the app enables other users to authenticate with their own accounts and communicate from other workstations.
## Deploy the app to Azure
Prepare your environment for the Azure CLI:
In this section, you will use the Azure CLI to create a new web app in [Azure App Service](../app-service/index.yml) to host your ASP.NET application in Azure. The web app will be configured to use local Git deployment. The web app will also be configured with your SignalR connection string, GitHub OAuth app secrets, and a deployment user.
-When creating the following resources, make sure to use the same resource group that your SignalR Service resource resides in. This approach will make clean up a lot easier later when you want to remove all the resources. The examples given assume you used the group name recommended in previous tutorials, *SignalRTestResources*.
+When creating the following resources, make sure to use the same resource group that your SignalR Service resource resides in. Using one resource group makes cleanup much easier later when you want to remove all the resources. The examples given assume you used the group name recommended in previous tutorials, _SignalRTestResources_.
### Create the web app and plan
az webapp create --name $WebAppName --resource-group $ResourceGroupName \
  --plan $WebAppPlan
```
-| Parameter | Description |
-| -- | |
-| ResourceGroupName | This resource group name was suggested in previous tutorials. It is a good idea to keep all tutorial resources grouped together. Use the same resource group you used in the previous tutorials. |
-| WebAppPlan | Enter a new, unique, App Service Plan name. |
-| WebAppName | This will be the name of the new web app and part of the URL. Use a unique name. For example, signalrtestwebapp22665120. |
+| Parameter | Description |
+| -- | -- |
+| ResourceGroupName | This resource group name was suggested in previous tutorials. It's a good idea to keep all tutorial resources grouped together. Use the same resource group you used in the previous tutorials. |
+| WebAppPlan | Enter a new, unique, App Service Plan name. |
+| WebAppName | This parameter is the name of the new web app and part of the URL. Make it unique. For example, signalrtestwebapp22665120. |
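+
+For reference, a hypothetical consolidated version of the commands this section describes (the plan name, app name, and FREE sku are illustrative placeholders, not the article's exact script):
+
+```azurecli
+ResourceGroupName=SignalRTestResources
+WebAppPlan=myAppServicePlanName
+WebAppName=signalrtestwebapp22665120
+
+az appservice plan create --name $WebAppPlan --resource-group $ResourceGroupName --sku FREE
+az webapp create --name $WebAppName --resource-group $ResourceGroupName --plan $WebAppPlan
+```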
### Add app settings to the web app In this section, you will add app settings for the following components:
-* SignalR Service resource connection string
-* GitHub OAuth app client ID
-* GitHub OAuth app client secret
+- SignalR Service resource connection string
+- GitHub OAuth app client ID
+- GitHub OAuth app client secret
Copy the text for the commands below and update the parameters. Paste the updated script into the Azure Cloud Shell, and press **Enter** to add the app settings:
ResourceGroupName=SignalRTestResources
SignalRServiceResource=mySignalRresourcename
WebAppName=myWebAppName
-# Get the SignalR primary connection string
+# Get the SignalR primary connection string
primaryConnectionString=$(az signalr key list --name $SignalRServiceResource \
  --resource-group $ResourceGroupName --query primaryConnectionString -o tsv)
az webapp config appsettings set --name $WebAppName \
  --settings "GitHubClientSecret=$GitHubClientSecret"
```
-| Parameter | Description |
-| -- | |
-| GitHubClientId | Assign this variable the secret Client Id for your GitHub OAuth App. |
-| GitHubClientSecret | Assign this variable the secret password for your GitHub OAuth App. |
-| ResourceGroupName | Update this variable to be the same resource group name you used in the previous section. |
+| Parameter | Description |
+| - | -- |
+| GitHubClientId | Assign this variable the secret Client ID for your GitHub OAuth App. |
+| GitHubClientSecret | Assign this variable the secret password for your GitHub OAuth App. |
+| ResourceGroupName | Update this variable to be the same resource group name you used in the previous section. |
| SignalRServiceResource | Update this variable with the name of the SignalR Service resource you created in the quickstart. For example, signalrtestsvc48778624. |
-| WebAppName | Update this variable with the name of the new web app you created in the previous section. |
+| WebAppName | Update this variable with the name of the new web app you created in the previous section. |
### Configure the web app for local Git deployment
az webapp deployment source config-local-git --name $WebAppName \
  --query [url] -o tsv
```
-| Parameter | Description |
-| -- | |
-| DeploymentUserName | Choose a new deployment user name. |
-| DeploymentUserPassword | Choose a password for the new deployment user. |
-| ResourceGroupName | Use the same resource group name you used in the previous section. |
-| WebAppName | This will be the name of the new web app you created previously. |
+| Parameter | Description |
+| - | -- |
+| DeploymentUserName | Choose a new deployment user name. |
+| DeploymentUserPassword | Choose a password for the new deployment user. |
+| ResourceGroupName | Use the same resource group name you used in the previous section. |
+| WebAppName | This parameter is the name of the web app you created previously. |
Make a note of the Git deployment URL returned from this command. You will use this URL later.
To deploy your code, execute the following commands in a Git shell.
1. Navigate to the root of your project directory. If you don't have the project initialized with a Git repository, execute the following command:
- ```bash
- git init
- ```
+ ```bash
+ git init
+ ```
2. Add a remote for the Git deployment URL you noted earlier:
- ```bash
- git remote add Azure <your git deployment url>
- ```
+ ```bash
+ git remote add Azure <your git deployment url>
+ ```
3. Stage all files in the initialized repository and add a commit.
- ```bash
- git add -A
- git commit -m "init commit"
- ```
+ ```bash
+ git add -A
+ git commit -m "init commit"
+ ```
4. Deploy your code to the web app in Azure.
- ```bash
- git push Azure main
- ```
+ ```bash
+ git push Azure main
+ ```
- You will be prompted to authenticate in order to deploy the code to Azure. Enter the user name and password of the deployment user you created above.
+ You will be prompted to authenticate in order to deploy the code to Azure. Enter the user name and password of the deployment user you created above.
### Update the GitHub OAuth app
The last thing you need to do is update the **Homepage URL** and **Authorization
1. Open [https://github.com](https://github.com) in a browser and navigate to your account's **Settings** > **Developer settings** > **OAuth Apps**.
-2. Click on your authentication app and update the **Homepage URL** and **Authorization callback URL** as shown below:
+2. Select your authentication app and update the **Homepage URL** and **Authorization callback URL** as shown below:
- | Setting | Example |
- | - | - |
- | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net` |
- | Authorization callback URL | `https://signalrtestwebapp22665120.azurewebsites.net/signin-github` |
+ | Setting | Example |
+ | -- | - |
+ | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net` |
+ | Authorization callback URL | `https://signalrtestwebapp22665120.azurewebsites.net/signin-github` |
3. Navigate to your web app URL and test the application.
- ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
+ ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
## Clean up resources
Otherwise, if you are finished with the quickstart sample application, you can d
> [!IMPORTANT] > Deleting a resource group is irreversible; the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
-Sign in to the [Azure portal](https://portal.azure.com) and click **Resource groups**.
+Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
-In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *SignalRTestResources*. On your resource group in the result list, click **...** then **Delete resource group**.
+In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named _SignalRTestResources_. On your resource group in the result list, select **...**, and then select **Delete resource group**.
![Delete](./media/signalr-concept-authenticate-oauth/signalr-delete-resource-group.png)
-You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and click **Delete**.
+You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
After a few moments, the resource group and all of its contained resources are deleted.
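If you prefer the Azure CLI, the same cleanup can be done with a single command; a minimal sketch, assuming the tutorial's suggested resource group name:

```azurecli
az group delete --name SignalRTestResources
```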
After a few moments, the resource group and all of its contained resources are d
In this tutorial, you added authentication with OAuth to provide a better approach to authentication with Azure SignalR Service. To learn more about using Azure SignalR Service, continue to the Azure CLI samples for SignalR Service.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Azure SignalR CLI Samples](./signalr-reference-cli.md)
azure-signalr Signalr Concept Authorize Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authorize-azure-active-directory.md
Title: Authorize access with Azure Active Directory for Azure SignalR Service
-description: This article provides information on authorizing access to Azure SignalR Service resources using Azure Active Directory.
+ Title: Authorize access with Microsoft Entra ID for Azure SignalR Service
+description: This article provides information on authorizing access to Azure SignalR Service resources using Microsoft Entra ID.
Last updated 09/06/2021
-# Authorize access with Azure Active Directory for Azure SignalR Service
+# Authorize access with Microsoft Entra ID for Azure SignalR Service
-Azure SignalR Service supports Azure Active Directory (Azure AD) to authorize requests to SignalR resources. With Azure AD, you can use role-based access control (RBAC) to grant permissions to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. The security principal is authenticated by Azure AD, which returns an OAuth 2.0 token. The token is used to authorize a request against the SignalR resource.
+Azure SignalR Service supports Microsoft Entra ID for authorizing requests to SignalR resources. With Microsoft Entra ID, you can use role-based access control (RBAC) to grant permissions to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. The security principal is authenticated by Microsoft Entra ID, which returns an OAuth 2.0 token. The token is then used to authorize a request against the SignalR resource.
-Authorizing requests against SignalR with Azure AD provides superior security and ease of use over Access Key authorization. It's recommended using Azure AD authorization with your SignalR resources when possible to assure access with minimum required privileges.
+Authorizing requests against SignalR with Microsoft Entra ID provides superior security and ease of use compared to Access Key authorization. It's highly recommended that you use Microsoft Entra ID authorization whenever possible, as it ensures access with the minimum required privileges.
<a id="security-principal"></a>
-*[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities.*
+_[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities._
> [!IMPORTANT] > Disabling local authentication can have the following consequences.
-> - The current set of access keys will be permanently deleted.
-> - Tokens signed with access keys will no longer be available.
+>
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with access keys will no longer be available.
## Overview of Azure AD for SignalR
-When a security principal attempts to access a SignalR resource, the request must be authorized. With Azure AD, access to a resource requires 2 steps.
+When a security principal attempts to access a SignalR resource, the request must be authorized. With Microsoft Entra ID, access to a resource requires two steps.
-1. The security principal has to be authenticated by Azure, who will return an OAuth 2.0 token.
+1. The security principal has to be authenticated by Microsoft Entra ID, which will then return an OAuth 2.0 token.
1. The token is passed as part of a request to the SignalR resource to authorize access to the resource.
-### Client-side authentication while using Azure AD
+### Client-side authentication with Microsoft Entra ID
-When using Access Key, the key is shared between your app server (or Function App) and the SignalR resource. The SignalR service authenticates the client connection request with the shared key.
+When using Access Key, the key is shared between your app server (or Function App) and the SignalR resource. The SignalR service authenticates the client connection request with the shared key.
-Using Azure AD there is no shared key. Instead SignalR uses a **temporary access key** to sign tokens for client connections. The workflow contains four steps.
+When using Microsoft Entra ID, there is no shared key. Instead, SignalR uses a **temporary access key** for signing tokens used in client connections. The workflow contains four steps.
-1. The security principal requires an OAuth 2.0 token from Azure to authenticate itself.
+1. The security principal requires an OAuth 2.0 token from Microsoft Entra ID to authenticate itself.
2. The security principal calls SignalR Auth API to get a **temporary access key**. 3. The security principal signs a client token with the **temporary access key** for client connections during negotiation. 4. The client uses the client token to connect to Azure SignalR resources.
-The **temporary access key** expires in 90 minutes. It's recommend getting a new one and rotate the old one once an hour.
+The **temporary access key** expires in 90 minutes. It's recommended that you get a new key and rotate out the old one once an hour.
The workflow is built in the [Azure SignalR SDK for app server](https://github.com/Azure/azure-signalr). ## Assign Azure roles for access rights
-Azure Active Directory (Azure AD) authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure SignalR defines a set of Azure built-in roles that encompass common sets of permissions used to access SignalR resources. You can also define custom roles for access to SignalR resources.
+Microsoft Entra ID authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure SignalR defines a set of Azure built-in roles that encompass common sets of permissions used to access SignalR resources. You can also define custom roles for access to SignalR resources.
### Resource scope
You may have to determine the scope of access that the security principal should
You can scope access to Azure SignalR resources at the following levels, beginning with the narrowest scope:
-| Scope | Description |
-|-|-|
-|**An individual resource.**| Applies to only the target resource.|
-| **A resource group.** |Applies to all of the resources in a resource group.|
-| **A subscription.** | Applies to all of the resources in a subscription.|
-| **A management group.** |Applies to all of the resources in the subscriptions included in a management group.|
-
+| Scope | Description |
+| | |
+| **An individual resource.** | Applies to only the target resource. |
+| **A resource group.** | Applies to all of the resources in a resource group. |
+| **A subscription.** | Applies to all of the resources in a subscription. |
+| **A management group.** | Applies to all of the resources in the subscriptions included in a management group. |
## Azure built-in roles for SignalR resources
-|Role|Description|Use case|
-|-|-|-|
-|[SignalR App Server](../role-based-access-control/built-in-roles.md#signalr-app-server)|Access to Websocket connection creation API and Auth APIs.|Most commonly for an App Server.|
-|[SignalR Service Owner](../role-based-access-control/built-in-roles.md#signalr-service-owner)|Full access to all data-plane APIs, including REST APIs, WebSocket connection creation API and Auth APIs.|Use for **Serverless mode** for Authorization with Azure AD since it requires both REST APIs permissions and Auth API permissions.|
-|[SignalR REST API Owner](../role-based-access-control/built-in-roles.md#signalr-rest-api-owner)|Full access to data-plane REST APIs.|Often used to write a tool that manages connections and groups but does **NOT** make connections or call Auth APIs.|
-|[SignalR REST API Reader](../role-based-access-control/built-in-roles.md#signalr-rest-api-reader)|Read-only access to data-plane REST APIs.| Commonly used to write a monitoring tool that calls **ONLY** SignalR data-plane **READONLY** REST APIs.|
+| Role | Description | Use case |
+| - | | -- |
+| [SignalR App Server](../role-based-access-control/built-in-roles.md#signalr-app-server) | Access to Websocket connection creation API and Auth APIs. | Most commonly for an App Server. |
+| [SignalR Service Owner](../role-based-access-control/built-in-roles.md#signalr-service-owner) | Full access to all data-plane APIs, including REST APIs, WebSocket connection creation API and Auth APIs. | Use for **Serverless mode** for Authorization with Microsoft Entra ID since it requires both REST APIs permissions and Auth API permissions. |
+| [SignalR REST API Owner](../role-based-access-control/built-in-roles.md#signalr-rest-api-owner) | Full access to data-plane REST APIs. | Often used to write a tool that manages connections and groups but does **NOT** make connections or call Auth APIs. |
+| [SignalR REST API Reader](../role-based-access-control/built-in-roles.md#signalr-rest-api-reader) | Read-only access to data-plane REST APIs. | Commonly used to write a monitoring tool that calls **ONLY** SignalR data-plane **READONLY** REST APIs. |
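+
+Roles can also be assigned outside the portal. As a hedged sketch, the Azure CLI equivalent of granting an app server access might look like the following (the assignee and scope values are placeholders):
+
+```azurecli
+az role assignment create \
+  --role "SignalR App Server" \
+  --assignee <service-principal-object-id> \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.SignalRService/SignalR/<signalr-resource>"
+```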
## Next steps
-To learn how to create an Azure application and use Azure AD Auth, see:
+To learn how to create a Microsoft Entra application and use Microsoft Entra authorization, see:
-- [Authorize request to SignalR resources with Azure AD from Azure applications](signalr-howto-authorize-application.md)
+- [Authorize request to SignalR resources with Microsoft Entra applications](signalr-howto-authorize-application.md)
-To learn how to configure a managed identity and use Azure AD Auth, see:
+To learn how to configure a managed identity and use Microsoft Entra authorization, see:
-- [Authorize request to SignalR resources with Azure AD from managed identities](signalr-howto-authorize-managed-identity.md)
+- [Authorize request to SignalR resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md)
To learn more about roles and role assignments, see:
To learn how to create custom roles, see:
- [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role)
-To learn how to use only Azure AD authentication, see
-- [Disable local authentication](./howto-disable-local-auth.md)
+To learn how to use only Azure AD authentication, see:
+
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-signalr Signalr Concept Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-azure-functions.md
Title: Real-time apps with Azure SignalR Service and Azure Functions
+ Title: Real-time apps with Azure SignalR Service and Azure Functions
description: Learn about how Azure SignalR Service and Azure Functions together allow you to create real-time serverless web applications.
# Real-time apps with Azure SignalR Service and Azure Functions
+Azure SignalR Service combined with Azure Functions allows you to run real-time messaging web apps in a serverless environment. This article provides an overview of how the services work together.
-Azure SignalR Services combined with Azure Functions allows you to run real-time messaging web apps in a serverless environment. This article provides an overview of how the services work together.
-
-Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that allow you to focus on building applications instead of managing infrastructure. It's common to use the two services together to provide real-time communications in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment.
--
+Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that allow you to focus on building applications instead of managing infrastructure. It's common to use the two services together to provide real-time communications in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment.
## Integrate real-time communications with Azure services The Azure Functions service allows you to write code in [several languages](../azure-functions/supported-languages.md), including JavaScript, Python, C#, and Java, that triggers whenever events occur in the cloud. Examples of these events include:
-* HTTP and webhook requests
-* Periodic timers
-* Events from Azure services, such as:
- - Event Grid
- - Event Hubs
- - Service Bus
- - Azure Cosmos DB change feed
- - Storage blobs and queues
- - Logic Apps connectors such as Salesforce and SQL Server
+- HTTP and webhook requests
+- Periodic timers
+- Events from Azure services, such as:
+ - Event Grid
+ - Event Hubs
+ - Service Bus
+ - Azure Cosmos DB change feed
+ - Storage blobs and queues
+ - Logic Apps connectors such as Salesforce and SQL Server
By using Azure Functions to integrate these events with Azure SignalR Service, you can notify thousands of clients whenever events occur. Some common scenarios for real-time serverless messaging that you can implement with Azure Functions and SignalR Service include the following (see the sketch after this list):
-* Visualize IoT device telemetry on a real-time dashboard or map.
-* Update data in an application when documents update in Azure Cosmos DB.
-* Send in-app notifications when new orders are created in Salesforce.
+- Visualize IoT device telemetry on a real-time dashboard or map.
+- Update data in an application when documents update in Azure Cosmos DB.
+- Send in-app notifications when new orders are created in Salesforce.
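+
+As an illustrative sketch only (the hub, database, and function names are hypothetical, not taken from this article), a C# function that fans Azure Cosmos DB changes out to connected clients could look like this:
+
+```csharp
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+
+public static class CosmosToSignalR
+{
+    [FunctionName("CosmosToSignalR")]
+    public static async Task Run(
+        // Fires whenever documents change in the monitored container.
+        [CosmosDBTrigger("ordersDb", "orders",
+            ConnectionStringSetting = "CosmosDBConnection",
+            LeaseCollectionName = "leases")] IReadOnlyList<Document> changes,
+        // Output binding that publishes messages through Azure SignalR Service.
+        [SignalR(HubName = "dashboard")] IAsyncCollector<SignalRMessage> messages)
+    {
+        foreach (var doc in changes)
+        {
+            await messages.AddAsync(new SignalRMessage
+            {
+                Target = "documentUpdated",         // client-side handler name
+                Arguments = new object[] { doc.Id } // payload sent to clients
+            });
+        }
+    }
+}
+```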
## SignalR Service bindings for Azure Functions The SignalR Service bindings for Azure Functions allow an Azure Function app to publish messages to clients connected to SignalR Service. Clients can connect to the service using a SignalR client SDK that is available in .NET, JavaScript, and Java, with more languages coming soon. ### An example scenario
An example of how to use the SignalR Service bindings is using Azure Functions t
### Authentication and users
-SignalR Service allows you to broadcast messages to all or a subset of clients, such as those belonging to a single user. You can combine the SignalR Service bindings for Azure Functions with App Service authentication to authenticate users with providers such as Azure Active Directory, Facebook, and Twitter. You can then send messages directly to these authenticated users.
+SignalR Service allows you to broadcast messages to all or a subset of clients, such as those belonging to a single user. You can combine the SignalR Service bindings for Azure Functions with App Service authentication to authenticate users with providers such as Microsoft Entra ID, Facebook, and Twitter. You can then send messages directly to these authenticated users.
## Next steps For full details on how to use Azure Functions and SignalR Service together, visit the following resources:
-* [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md)
-* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr)
+- [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md)
+- [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr)
To try out the SignalR Service bindings for Azure Functions, see:
-* [Azure SignalR Service Serverless Quickstart - C#](signalr-quickstart-azure-functions-csharp.md)
-* [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md)
-* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr).
+- [Azure SignalR Service Serverless Quickstart - C#](signalr-quickstart-azure-functions-csharp.md)
+- [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md)
+- [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr).
azure-signalr Signalr Concept Serverless Development Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-serverless-development-config.md
In the Azure portal, locate the **Settings** page of your SignalR Service resour
A serverless real-time application built with Azure Functions and Azure SignalR Service requires at least two Azure Functions:
-* A `negotiate` function that the client calls to obtain a valid SignalR Service access token and endpoint URL.
-* One or more functions that handle messages sent from SignalR Service to clients.
+- A `negotiate` function that the client calls to obtain a valid SignalR Service access token and endpoint URL.
+- One or more functions that handle messages sent from SignalR Service to clients.
### negotiate function
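The exact shape depends on your language and extension version, but as a hedged illustration (the hub name is a placeholder), an HTTP-triggered C# negotiate function built on the `SignalRConnectionInfo` input binding can look like this:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class NegotiateFunction
{
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        // Clients POST to /api/negotiate to start a connection.
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        // The binding supplies a service URL and access token for the client.
        [SignalRConnectionInfo(HubName = "chat")] SignalRConnectionInfo connectionInfo)
    {
        return connectionInfo;
    }
}
```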
For more information, see the [`SignalR` output binding reference](../azure-func
### SignalR Hubs
-SignalR has a concept of *hubs*. Each client connection and each message sent from Azure Functions is scoped to a specific hub. You can use hubs as a way to separate your connections and messages into logical namespaces.
+SignalR has a concept of _hubs_. Each client connection and each message sent from Azure Functions is scoped to a specific hub. You can use hubs as a way to separate your connections and messages into logical namespaces.
## Class-based model The class-based model is dedicated for C#. The class-based model provides a consistent SignalR server-side programming experience, with the following features:
+- Less configuration work: The class name is used as `HubName`, the method name is used as `Event`, and the `Category` is decided automatically according to the method name.
-* Auto parameter binding: `ParameterNames` and attribute `[SignalRParameter]` aren't needed. Parameters are automatically bound to arguments of Azure Function methods in order.
-* Convenient output and negotiate experience.
+- Less configuration work: The class name is used as `HubName`, the method name is used as `Event` and the `Category` is decided automatically according to method name.
+- Auto parameter binding: `ParameterNames` and attribute `[SignalRParameter]` aren't needed. Parameters are automatically bound to arguments of Azure Function methods in order.
+- Convenient output and negotiate experience.
The following code demonstrates these features:
All the hub methods **must** have an argument of `InvocationContext` decorated b
By default, `category=messages` except when the method name is one of the following names:
-* `OnConnected`: Treated as `category=connections, event=connected`
-* `OnDisconnected`: Treated as `category=connections, event=disconnected`
+- `OnConnected`: Treated as `category=connections, event=connected`
+- `OnDisconnected`: Treated as `category=connections, event=disconnected`
### Parameter binding experience In the class-based model, `[SignalRParameter]` is unnecessary because all the arguments are marked as `[SignalRParameter]` by default except in one of the following situations:
-* The argument is decorated by a binding attribute
-* The argument's type is `ILogger` or `CancellationToken`
-* The argument is decorated by attribute `[SignalRIgnore]`
+- The argument is decorated by a binding attribute
+- The argument's type is `ILogger` or `CancellationToken`
+- The argument is decorated by attribute `[SignalRIgnore]`
### Negotiate experience in class-based model
SignalR client SDKs already contain the logic required to perform the negotiatio
```javascript const connection = new signalR.HubConnectionBuilder()
- .withUrl('https://my-signalr-function-app.azurewebsites.net/api')
- .build()
+ .withUrl("https://my-signalr-function-app.azurewebsites.net/api")
+ .build();
``` By convention, the SDK automatically appends `/negotiate` to the URL and uses it to begin the negotiation.
By convention, the SDK automatically appends `/negotiate` to the URL and uses it
For more information on how to use the SignalR client SDK, see the documentation for your language:
-* [.NET Standard](/aspnet/core/signalr/dotnet-client)
-* [JavaScript](/aspnet/core/signalr/javascript-client)
-* [Java](/aspnet/core/signalr/java-client)
+- [.NET Standard](/aspnet/core/signalr/dotnet-client)
+- [JavaScript](/aspnet/core/signalr/javascript-client)
+- [Java](/aspnet/core/signalr/java-client)
### Sending messages from a client to the service If you've configured [upstream](concept-upstream.md) for your SignalR resource, you can send messages from a client to your Azure Functions using any SignalR client. Here's an example in JavaScript: ```javascript
-connection.send('method1', 'arg1', 'arg2');
+connection.send("method1", "arg1", "arg2");
``` ## Azure Functions configuration
The JavaScript/TypeScript client makes HTTP request to the negotiate function to
#### Localhost
-When running the Function app on your local computer, you can add a `Host` section to *local.settings.json* to enable CORS. In the `Host` section, add two properties:
+When running the Function app on your local computer, you can add a `Host` section to _local.settings.json_ to enable CORS. In the `Host` section, add two properties:
-* `CORS` - enter the base URL that is the origin the client application
-* `CORSCredentials` - set it to `true` to allow "withCredentials" requests
+- `CORS` - enter the base URL that is the origin of the client application
+- `CORSCredentials` - set it to `true` to allow "withCredentials" requests
Example:
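A minimal sketch of what that section can look like (the origin URL is a placeholder for your client app's address):

```json
{
  "Host": {
    "CORS": "http://localhost:8080",
    "CORSCredentials": true
  }
}
```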
Configure your SignalR clients to use the API Management URL.
### Using App Service Authentication
-Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Azure Active Directory. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID.
+Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Microsoft Entra ID. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID.
-In the Azure portal, in your Function app's *Platform features* tab, open the *Authentication/authorization* settings window. Follow the documentation for [App Service Authentication](../app-service/overview-authentication-authorization.md) to configure authentication using an identity provider of your choice.
+In the Azure portal, in your Function app's _Platform features_ tab, open the _Authentication/authorization_ settings window. Follow the documentation for [App Service Authentication](../app-service/overview-authentication-authorization.md) to configure authentication using an identity provider of your choice.
Once configured, authenticated HTTP requests will include `x-ms-client-principal-name` and `x-ms-client-principal-id` headers containing the authenticated identity's username and user ID, respectively.
azure-signalr Signalr Howto Authorize Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-application.md
Title: Authorize request to SignalR resources with Azure AD from Azure applications
-description: This article provides information about authorizing request to SignalR resources with Azure AD from Azure applications
+ Title: Authorize requests to SignalR resources with Microsoft Entra applications
+description: This article provides information about authorizing request to SignalR resources with Microsoft Entra applications
Last updated 02/03/2023
ms.devlang: csharp
-# Authorize request to SignalR resources with Azure AD from Azure applications
+# Authorize requests to SignalR resources with Microsoft Entra applications
-Azure SignalR Service supports Azure Active Directory (Azure AD) authorizing requests from [Azure applications](../active-directory/develop/app-objects-and-service-principals.md).
+Azure SignalR Service supports Microsoft Entra ID for authorizing requests from [Microsoft Entra applications](../active-directory/develop/app-objects-and-service-principals.md).
-This article shows how to configure your SignalR resource and codes to authorize the request to a SignalR resource from an Azure application.
+This article shows how to configure your SignalR resource and codes to authorize requests to a SignalR resource from a Microsoft Entra application.
## Register an application
-The first step is to register an Azure application.
+The first step is to register a Microsoft Entra application.
-1. On the [Azure portal](https://portal.azure.com/), search for and select **Azure Active Directory**
+1. On the [Azure portal](https://portal.azure.com/), search for and select **Microsoft Entra ID**
2. Under **Manage** section, select **App registrations**. 3. Select **New registration**.-
- ![Screenshot of registering an application.](./media/signalr-howto-authorize-application/register-an-application.png)
-
+ ![Screenshot of registering an application.](./media/signalr-howto-authorize-application/register-an-application.png)
4. Enter a display **Name** for your application. 5. Select **Register** to confirm the registration.
Once you have your application registered, you can find the **Application (clien
![Screenshot of an application.](./media/signalr-howto-authorize-application/application-overview.png) To learn more about registering an application, see
-- [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
+- [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
## Add credentials
The application requires a client secret to prove its identity when requesting a
1. Under **Manage** section, select **Certificates & secrets** 1. On the **Client secrets** tab, select **New client secret**.
-![Screenshot of creating a client secret.](./media/signalr-howto-authorize-application/new-client-secret.png)
+ ![Screenshot of creating a client secret.](./media/signalr-howto-authorize-application/new-client-secret.png)
1. Enter a **description** for the client secret, and choose an **expiration time**.
-1. Copy the value of the **client secret** and then paste it to a secure location.
- > [!NOTE]
- > The secret will display only once.
+1. Copy the value of the **client secret** and then paste it to a secure location.
+ > [!NOTE]
+ > The secret will display only once.
### Certificate
To learn more about adding credentials, see
The following steps describe how to assign a `SignalR App Server` role to a service principal (application) over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-> [!Note]
+> [!NOTE]
> A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md) 1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
The following steps describe how to assign a `SignalR App Server` role to a serv
> Azure role assignments may take up to 30 minutes to propagate. To learn more about how to assign and manage Azure role assignments, see these articles:
+
- [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
- [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md)
- [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
To learn more about how to assign and manage Azure role assignments, see these a
The best practice is to configure identity and credentials in your environment variables:
-| Variable | Description |
-||
-| `AZURE_TENANT_ID` | The Azure Active Directory tenant(directory) ID. |
-| `AZURE_CLIENT_ID` | The client(application) ID of an App Registration in the tenant. |
-| `AZURE_CLIENT_SECRET` | A client secret that was generated for the App Registration. |
+| Variable | Description |
+| - | |
+| `AZURE_TENANT_ID` | The Microsoft Entra tenant ID. |
+| `AZURE_CLIENT_ID` | The client (application) ID of an App Registration in the tenant. |
+| `AZURE_CLIENT_SECRET` | A client secret that was generated for the App Registration. |
| `AZURE_CLIENT_CERTIFICATE_PATH` | A path to a certificate and private key pair in PEM or PFX format, which can authenticate the App Registration. |
-| `AZURE_USERNAME` | The username, also known as upn, of an Azure Active Directory user account. |
-| `AZURE_PASSWORD` | The password for the Azure Active Directory user account. Password isn't supported for accounts with MFA enabled. |
+| `AZURE_USERNAME` | The username, also known as the UPN (user principal name), of a Microsoft Entra user account. |
+| `AZURE_PASSWORD` | The password of the Microsoft Entra user account. Password isn't supported for accounts with MFA enabled. |
You can use either [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) or [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) to configure your SignalR endpoints.
services.AddSignalR().AddAzureSignalR(option =>
To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential Class](/dotnet/api/overview/azure/identity-readme#defaultazurecredential).
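Because the article's own snippet is abbreviated here, a hedged sketch of such an endpoint configuration (the resource URI is a placeholder; the `services` call belongs inside `Startup.ConfigureServices`):

```csharp
using System;
using Azure.Identity;
using Microsoft.Azure.SignalR;

// Inside Startup.ConfigureServices:
services.AddSignalR().AddAzureSignalR(option =>
{
    option.Endpoints = new ServiceEndpoint[]
    {
        // DefaultAzureCredential picks up identity details from the environment.
        new ServiceEndpoint(new Uri("https://<resource-name>.service.signalr.net"),
            new DefaultAzureCredential())
    };
});
```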
-#### Use different credentials while using multiple endpoints.
+#### Use different credentials while using multiple endpoints
In some scenarios, you may want to use different credentials for different endpoints.
services.AddSignalR().AddAzureSignalR(option =>
### Azure Functions SignalR bindings
-Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) on portal or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) at local to configure Azure application identities to access your SignalR resources.
+Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) in the Azure portal, or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) locally, to configure Microsoft Entra application identities to access your SignalR resources.
-Firstly, you need to specify the service URI of the SignalR Service, whose key is `serviceUri` starting with a **connection name prefix** (defaults to `AzureSignalRConnectionString`) and a separator (`__` on Azure portal and `:` in the local.settings.json file). The connection name can be customized with the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). Continue reading to find the sample.
+First, specify the service URI of the SignalR Service. Its key is `serviceUri`, prefixed with a **connection name prefix** (which defaults to `AzureSignalRConnectionString`) and a separator (`__` on the Azure portal and `:` in the local.settings.json file). The connection name can be customized with the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). Continue reading to find the sample.
-Then you choose to configure your Azure application identity in [pre-defined environment variables](#configure-identity-in-pre-defined-environment-variables) or [in SignalR specified variables](#configure-identity-in-signalr-specified-variables).
+Then, configure your Microsoft Entra application identity either in [pre-defined environment variables](#configure-identity-in-pre-defined-environment-variables) or [in SignalR specified variables](#configure-identity-in-signalr-specified-variables).
#### Configure identity in pre-defined environment variables See [Environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables) for the list of pre-defined environment variables. When you have multiple services, we recommend that you use the same application identity, so that you don't need to configure the identity for each service. These environment variables might also be used by other services, depending on how those services are configured.
-For example, to use client secret credentials, configure as follows in the `local.settings.json` file.
+For example, to use client secret credentials, configure as follows in the `local.settings.json` file.
```json
{
  "Values": {
    "<CONNECTION_NAME_PREFIX>:serviceUri": "https://<SIGNALR_RESOURCE_NAME>.service.signalr.net",
- "AZURE_CLIENT_ID":"...",
- "AZURE_CLIENT_SECRET":"...",
- "AZURE_TENANT_ID":"..."
+ "AZURE_CLIENT_ID": "...",
+ "AZURE_CLIENT_SECRET": "...",
+ "AZURE_TENANT_ID": "..."
} } ```+ On Azure portal, add settings as follows:
-```
+
+```bash
<CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net AZURE_CLIENT_ID = ... AZURE_TENANT_ID = ... AZURE_CLIENT_SECRET = ...
- ```
+```
#### Configure identity in SignalR specified variables

The SignalR-specified variables share the same key prefix as the `serviceUri` key. Here's the list of variables you might use:
-* clientId
-* clientSecret
-* tenantId
+
+- clientId
+- clientSecret
+- tenantId
Here are samples that use client secret credentials.

In the `local.settings.json` file:

```json
{
  "Values": {
    "<CONNECTION_NAME_PREFIX>:serviceUri": "https://<SIGNALR_RESOURCE_NAME>.service.signalr.net",
    "<CONNECTION_NAME_PREFIX>:clientId": "...",
    "<CONNECTION_NAME_PREFIX>:clientSecret": "...",
    "<CONNECTION_NAME_PREFIX>:tenantId": "..."
  }
}
```

On Azure portal, add settings as follows:

```bash
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
<CONNECTION_NAME_PREFIX>__clientId = ...
<CONNECTION_NAME_PREFIX>__clientSecret = ...
<CONNECTION_NAME_PREFIX>__tenantId = ...
```

## Next steps

See the following related articles:
+- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md)
+- [Authorize requests to SignalR resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-signalr Signalr Howto Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md
Title: Authorize managed identity requests to a SignalR resource
-description: This article provides information about authorizing request to SignalR resources with Azure AD from managed identities
+ Title: Authorize requests to SignalR resources with Microsoft Entra managed identities
+description: This article provides information about authorizing requests to SignalR resources with Microsoft Entra managed identities
Last updated 03/28/2023
ms.devlang: csharp
-# Authorize managed identity requests to a SignalR resource
+# Authorize requests to SignalR resources with Microsoft Entra managed identities
-Azure SignalR Service supports Azure Active Directory (Azure AD) authorizing requests from Azure resources using [managed identities for Azure resources
+Azure SignalR Service supports Microsoft Entra ID for authorizing requests from [Microsoft Entra managed identities](../active-directory/managed-identities-azure-resources/overview.md).
-This article shows how to configure your SignalR resource and code to authorize a managed identity request to a SignalR resource.
+This article shows how to configure your SignalR resource and code to authorize requests to a SignalR resource from a managed identity.
## Configure managed identities
This example shows you how to configure `System-assigned managed identity` on a
![Screenshot of an application.](./media/signalr-howto-authorize-managed-identity/identity-virtual-machine.png)

1. Select the **Save** button to confirm the change.

To learn how to create user-assigned managed identities, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity).

To learn more about configuring managed identities, see one of these articles:
See [How to use managed identities for App Service and Azure Functions](../app-s
The following steps describe how to assign a `SignalR App Server` role to a system-assigned identity over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-> [!Note]
+> [!NOTE]
> A role can be assigned to any scope, including management group, subscription, resource group, or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md).

1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource.
The following steps describe how to assign a `SignalR App Server` role to a syst
> Azure role assignments may take up to 30 minutes to propagate.

To learn more about how to assign and manage Azure role assignments, see these articles:

- [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
- [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md)
- [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
services.AddSignalR().AddAzureSignalR(option =>
#### Using user-assigned identity
-Provide `ClientId` while creating the `ManagedIdentityCredential` object.
+Provide `ClientId` while creating the `ManagedIdentityCredential` object.
> [!IMPORTANT]
> Use the **Client ID**, not the Object (principal) ID, even though both are GUIDs!
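A hedged sketch of what this can look like (again assuming the `TokenCredential`-accepting `ServiceEndpoint` overload; `<CLIENT_ID>` and the resource name are placeholders):

```csharp
using Azure.Identity;
using Microsoft.Azure.SignalR;

services.AddSignalR().AddAzureSignalR(option =>
{
    option.Endpoints = new ServiceEndpoint[]
    {
        // Pass the client ID (not the object/principal ID) of the
        // user-assigned managed identity.
        new ServiceEndpoint(
            new Uri("https://<SIGNALR_RESOURCE_NAME>.service.signalr.net"),
            new ManagedIdentityCredential("<CLIENT_ID>")),
    };
});
```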
You might need a group of key-value pairs to configure an identity. The keys of
If you only configure the service URI, then the `DefaultAzureCredential` is used. This class is useful when you want to share the same configuration on Azure and local development environments. To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential). In the Azure portal, use the following example to configure a `DefaultAzureCredential`. If you don't configure any [environment variables listed here](/dotnet/api/overview/azure/identity-readme#environment-variables), then the system-assigned identity is used to authenticate.
```bash
<CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
```

Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` file. At the local scope there's no managed identity, and authentication via Visual Studio, Azure CLI, and Azure PowerShell accounts is attempted in order.

```json
{
  "Values": {
    "<CONNECTION_NAME_PREFIX>:serviceUri": "https://<SIGNALR_RESOURCE_NAME>.service.signalr.net"
  }
}
```
If you want to use system-assigned identity independently and without the influence of [other environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), you should set the `credential` key with the connection name prefix to `managedidentity`. Here's an application settings sample:
```bash
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
<CONNECTION_NAME_PREFIX>__credential = managedidentity
```
If you want to use system-assigned identity independently and without the influe
If you want to use user-assigned identity, you need to assign `clientId` in addition to the `serviceUri` and `credential` keys with the connection name prefix. Here's the application settings sample:
```bash
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
<CONNECTION_NAME_PREFIX>__credential = managedidentity
<CONNECTION_NAME_PREFIX>__clientId = <CLIENT_ID>
```

## Next steps

See the following related articles:
+- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md)
+- [Authorize requests to SignalR resources with Microsoft Entra applications](signalr-howto-authorize-application.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-signalr Signalr Howto Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-azure-policy.md
The following built-in policy definitions are specific to Azure SignalR Service:
## Assign policy definitions
-* Assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
-* Scope a policy assignment to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). SignalR policy assignments apply to existing and new SignalR resources within the scope.
-* Enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
+- Assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
+- Scope a policy assignment to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). SignalR policy assignments apply to existing and new SignalR resources within the scope.
+- Enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
> [!NOTE]
> After you assign or update a policy, it takes some time for the assignment to be applied to resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
When a resource is non-compliant, there are many possible reasons. To determine
1. Select **All services**, and search for **Policy**.
1. Select **Compliance**.
1. Use the filters to limit compliance states or to search for policies.
-
- [ ![Policy compliance in portal](./media/signalr-howto-azure-policy/azure-policy-compliance.png) ](./media/signalr-howto-azure-policy/azure-policy-compliance.png#lightbox)
-2. Select a policy to review aggregate compliance details and events. If desired, then select a specific SignalR for resource compliance.
+
+ [ ![Screenshot showing policy compliance in portal.](./media/signalr-howto-azure-policy/azure-policy-compliance.png) ](./media/signalr-howto-azure-policy/azure-policy-compliance.png#lightbox)
+
+1. Select a policy to review aggregate compliance details and events. If desired, select a specific SignalR resource to review its compliance.
### Policy compliance in the Azure CLI
az policy state list \
## Next steps
-* Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
+- Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
-* Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
-
-* Learn more about [governance capabilities](../governance/index.yml) in Azure
+- Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
+- Learn more about [governance capabilities](../governance/index.yml) in Azure
<!-- LINKS - External -->
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
azure-signalr Signalr Howto Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-diagnostic-logs.md
Platform metrics and the Activity log are collected and stored automatically, bu
Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect.
+Resource logs are grouped into category groups. A category group is a collection of different logs that helps you achieve different monitoring goals. These groups are defined dynamically and may change over time as new resource logs become available and are added to the category group, which may incur additional charges. The audit resource log category group allows you to select the resource logs that are necessary for auditing your resource. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../azure-monitor/essentials/diagnostic-settings.md?tabs=portal#resource-logs).
+
+When you create a diagnostic setting, you specify which categories of logs to collect. For the detailed process of creating a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md).
The metrics and logs you can collect are discussed in the following sections.
azure-signalr Signalr Howto Troubleshoot Live Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-live-trace.md
Live trace tool is a single web application for capturing and displaying live tr
> [!NOTE]
> The live traces are counted as outbound messages.
-> Azure Active Directory access to live trace tool is not supported. You will need to enable **Access Key** in **Keys** settings.
+> Using Microsoft Entra ID to access the live trace tool is not supported. You have to enable **Access Key** in **Keys** settings.
## Launch the live trace tool
azure-signalr Signalr Howto Work With Apim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-apim.md
Azure API Management service provides a hybrid, multicloud management platform f
:::image type="content" source="./media/signalr-howto-work-with-apim/architecture.png" alt-text="Diagram that shows the architecture of using SignalR Service with API Management.":::

## Create resources
-* Follow [Quickstart: Use an ARM template to deploy Azure SignalR](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_**
+- Follow [Quickstart: Use an ARM template to deploy Azure SignalR](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_**
-* Follow [Quickstart: Use an ARM template to deploy Azure API Management](../api-management/quickstart-arm-template.md) and create an API Management instance **_APIM1_**
+- Follow [Quickstart: Use an ARM template to deploy Azure API Management](../api-management/quickstart-arm-template.md) and create an API Management instance **_APIM1_**
## Configure APIs
Azure API Management service provides a hybrid, multicloud management platform f
There are two types of requests for a SignalR client:
-* **negotiate request**: HTTP `POST` request to `<APIM-URL>/client/negotiate/`
-* **connect request**: request to `<APIM-URL>/client/`, it could be `WebSocket` or `ServerSentEvent` or `LongPolling` depends on the transport type of your SignalR client
+- **negotiate request**: HTTP `POST` request to `<APIM-URL>/client/negotiate/`
+- **connect request**: request to `<APIM-URL>/client/`; it can be `WebSocket`, `ServerSentEvent`, or `LongPolling`, depending on the transport type of your SignalR client
The type of **connect request** varies depending on the transport type of the SignalR client. Currently, API Management doesn't support different types of APIs for the same suffix. With this limitation, when you use API Management, your SignalR client can't fall back from the `WebSocket` transport type to other transport types. Fallback from `ServerSentEvent` to `LongPolling` can still be supported. The following sections describe the detailed configurations for different transport types.

### Configure APIs when client connects with `WebSocket` transport

This section describes the steps to configure API Management when the SignalR clients connect with `WebSocket` transport. In this case, three types of requests are involved:

1. **OPTIONS** preflight HTTP request for negotiate
1. **POST** HTTP request for negotiate
1. WebSocket request for connect

Let's configure API Management from the portal.

1. Go to **APIs** tab in the portal for API Management instance **_APIM1_**, select **Add API** and choose **HTTP**, **Create** with the following parameters:
- * Display name: `SignalR negotiate`
- * Web service URL: `https://<your-signalr-service-url>/client/negotiate/`
- * API URL suffix: `client/negotiate/`
+ - Display name: `SignalR negotiate`
+ - Web service URL: `https://<your-signalr-service-url>/client/negotiate/`
+ - API URL suffix: `client/negotiate/`
1. Select the created `SignalR negotiate` API, **Save** with the following settings:
- 1. In **Design** tab
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `negotiate preflight`
- * URL: `OPTIONS` `/`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `negotiate`
- * URL: `POST` `/`
- 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose
+ 1. In **Design** tab
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `negotiate preflight`
+ - URL: `OPTIONS` `/`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `negotiate`
+ - URL: `POST` `/`
+ 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose
1. Select **Add API** and choose **WebSocket**, **Create** with the following parameters:
- * Display name: `SignalR connect`
- * WebSocket URL: `wss://<your-signalr-service-url>/client/`
- * API URL suffix: `client/`
+ - Display name: `SignalR connect`
+ - WebSocket URL: `wss://<your-signalr-service-url>/client/`
+ - API URL suffix: `client/`
1. Select the created `SignalR connect` API, **Save** with the following settings:
- 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose
+ 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose
Now API Management is successfully configured to support SignalR clients with `WebSocket` transport.

### Configure APIs when client connects with `ServerSentEvents` or `LongPolling` transport

This section describes the steps to configure API Management when the SignalR clients connect with the `ServerSentEvents` or `LongPolling` transport type. In this case, five types of requests are involved:

1. **OPTIONS** preflight HTTP request for negotiate
1. **POST** HTTP request for negotiate
1. **OPTIONS** preflight HTTP request for connect
1. **POST** HTTP request for connect
1. **GET** HTTP request for connect

Now let's configure API Management from the portal.

1. Go to **APIs** tab in the portal for API Management instance **_APIM1_**, select **Add API** and choose **HTTP**, **Create** with the following parameters:
- * Display name: `SignalR`
- * Web service URL: `https://<your-signalr-service-url>/client`
- * API URL suffix: `client`
+ - Display name: `SignalR`
+ - Web service URL: `https://<your-signalr-service-url>/client`
+ - API URL suffix: `client`
1. Select the created `SignalR` API, **Save** with the following settings:
- 1. In **Design** tab
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `negotiate preflight`
- * URL: `OPTIONS` `/negotiate`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `negotiate`
- * URL: `POST` `/negotiate`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `connect preflight`
- * URL: `OPTIONS` `/`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `connect`
- * URL: `POST` `/`
- 1. Select **Add operation**, and **Save** with the following parameters:
- * Display name: `connect get`
- * URL: `GET` `/`
- 1. Select the newly added **connect get** operation, and edit the Backend policy to disable buffering for `ServerSentEvents`, [check here](../api-management/how-to-server-sent-events.md) for more details.
- ```xml
- <backend>
- <forward-request buffer-response="false" />
- </backend>
- ```
- 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose
+ 1. In **Design** tab
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `negotiate preflight`
+ - URL: `OPTIONS` `/negotiate`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `negotiate`
+ - URL: `POST` `/negotiate`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `connect preflight`
+ - URL: `OPTIONS` `/`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `connect`
+ - URL: `POST` `/`
+ 1. Select **Add operation**, and **Save** with the following parameters:
+ - Display name: `connect get`
+ - URL: `GET` `/`
+    1. Select the newly added **connect get** operation, and edit the Backend policy to disable buffering for `ServerSentEvents`; see [this article](../api-management/how-to-server-sent-events.md) for more details.
+ ```xml
+ <backend>
+ <forward-request buffer-response="false" />
+ </backend>
+ ```
+ 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose
Now API Management is successfully configured to support SignalR clients with `ServerSentEvents` or `LongPolling` transport.

### Run chat

Now the traffic can reach SignalR Service through API Management. Let's use [this chat application](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/ChatRoom) as an example. Let's start with running it locally.
-* First let's get the connection string of **_ASRS1_**
- * On the **Connection strings** tab of **_ASRS1_**
- * **Client endpoint**: Enter the URL using **Gateway URL** of **_APIM1_**, for example `https://apim1.azure-api.net`. It's a connection string generator when using reverse proxies, and the value isn't preserved when next time you come back to this tab. When value entered, the connection string appends a `ClientEndpoint` section.
- * Copy the Connection string
-
-* Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
-* Go to samples/Chatroom folder:
-* Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString.
-
- ```bash
- cd samples/Chatroom
- dotnet restore
- dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-onnection-string-with-client-endpoint>"
- dotnet run
- ```
-* Configure transport type for the client
-
- Open `https://docsupdatetracker.net/index.html` under folder `wwwroot` and find the code when `connection` is created, update it to specify the transport type.
-
- For example, to specify the connection to use server-sent-events or long polling, update the code to:
-
- ```javascript
- const connection = new signalR.HubConnectionBuilder()
- .withUrl('/chat', signalR.HttpTransportType.ServerSentEvents | signalR.HttpTransportType.LongPolling)
- .build();
- ```
- To specify the connection to use WebSockets, update the code to:
-
- ```javascript
- const connection = new signalR.HubConnectionBuilder()
- .withUrl('/chat', signalR.HttpTransportType.WebSockets)
- .build();
- ```
-
-* Open http://localhost:5000 from the browser and use F12 to view the network traces, you can see that the connection is established through **_APIM1_**
+- First let's get the connection string of **_ASRS1_**
+
+ - On the **Connection strings** tab of **_ASRS1_**
+  - **Client endpoint**: Enter the URL using **Gateway URL** of **_APIM1_**, for example `https://apim1.azure-api.net`. This tab acts as a connection string generator for use with reverse proxies; the value isn't preserved the next time you come back to this tab. When you enter a value, the connection string gains a `ClientEndpoint` section (a sketch of the resulting string follows this list).
+ - Copy the Connection string
+
+- Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
+- Go to samples/Chatroom folder:
+- Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString.
+
+ ```bash
+ cd samples/Chatroom
+ dotnet restore
+  dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-connection-string-with-client-endpoint>"
+ dotnet run
+ ```
+
+- Configure transport type for the client
+
+  Open `index.html` under folder `wwwroot` and find the code where `connection` is created; update it to specify the transport type.
+
+ For example, to specify the connection to use server-sent-events or long polling, update the code to:
+
+ ```javascript
+ const connection = new signalR.HubConnectionBuilder()
+ .withUrl(
+ "/chat",
+ signalR.HttpTransportType.ServerSentEvents |
+ signalR.HttpTransportType.LongPolling
+ )
+ .build();
+ ```
+
+ To specify the connection to use WebSockets, update the code to:
+
+ ```javascript
+ const connection = new signalR.HubConnectionBuilder()
+ .withUrl("/chat", signalR.HttpTransportType.WebSockets)
+ .build();
+ ```
+
+- Open http://localhost:5000 from the browser and use F12 to view the network traces; you can see that the connection is established through **_APIM1_**
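For reference, a connection string that carries a `ClientEndpoint` section looks roughly like the following sketch (all values are placeholders; the **Connection strings** tab generates the exact string for you):

```csharp
// Server connections still target the service endpoint directly; clients are
// redirected to the API Management gateway named in ClientEndpoint.
const string connectionString =
    "Endpoint=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net;" +
    "AccessKey=<access-key>;" +
    "Version=1.0;" +
    "ClientEndpoint=https://<APIM_GATEWAY_URL>";
```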
## Next steps
azure-signalr Signalr Howto Work With App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-app-gateway.md
# How to use Azure SignalR Service with Azure Application Gateway
-Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Using Application Gateway with SignalR Service enables you to do the following:
+Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Using Application Gateway with SignalR Service enables you to do the following:
-* Protect your applications from common web vulnerabilities.
-* Get application-level load-balancing for your scalable and highly available applications.
-* Set up end-to-end security.
-* Customize the domain name.
+- Protect your applications from common web vulnerabilities.
+- Get application-level load-balancing for your scalable and highly available applications.
+- Set up end-to-end security.
+- Customize the domain name.
-This article contains two parts,
-* [The first part](#set-up-and-configure-application-gateway) shows how to configure Application Gateway so that the clients can access SignalR through Application Gateway.
-* [The second part](#secure-signalr-service) shows how to secure SignalR Service by adding access control to SignalR Service and only allow traffic from Application Gateway.
+This article contains two parts:
+
+- [The first part](#set-up-and-configure-application-gateway) shows how to configure Application Gateway so that the clients can access SignalR through Application Gateway.
+- [The second part](#secure-signalr-service) shows how to secure SignalR Service by adding access control to SignalR Service and only allow traffic from Application Gateway.
:::image type="content" source="./media/signalr-howto-work-with-app-gateway/architecture.png" alt-text="Diagram that shows the architecture of using SignalR Service with Application Gateway.":::

## Set up and configure Application Gateway

### Create a SignalR Service instance
-* Follow [the article](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_**
+
+- Follow [the article](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_**
### Create an Application Gateway instance

Create an Application Gateway instance **_AG1_** from the portal:
-* On the [Azure portal](https://portal.azure.com/), search for **Application Gateway** and **Create**.
-* On the **Basics** tab, use these values for the following application gateway settings:
- - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service
- - **Application gateway name**: **_AG1_**
- - **Virtual network**, select **Create new**, and in the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets, one for the application gateway, and another for the backend servers.
- - **Name**: Enter **_VN1_** for the name of the virtual network.
- - **Subnets**: Update the **Subnets** grid with below 2 subnets
-
- | Subnet name | Address range| Note|
- |--|--|--|
- | *myAGSubnet* | (address range) | Subnet for the application gateway. The application gateway subnet can contain only application gateways. No other resources are allowed.
- | *myBackendSubnet* | (another address range) | Subnet for the Azure SignalR instance.
-
- - Accept the default values for the other settings and then select **Next: Frontends**
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/basics.png" alt-text="Screenshot of creating Application Gateway instance with Basics tab.":::
-
-* On the **Frontends** tab:
- - **Frontend IP address type**: **Public**.
- - Select **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
- - Select **Next: Backends**
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-frontends.png" alt-text="Screenshot of creating Application Gateway instance with Frontends tab.":::
-
-* On the **Backends** tab, select **Add a backend pool**:
- - **Name**: Enter **_signalr_** for the SignalR Service resource backend pool.
- - Backend targets **Target**: the **host name** of your SignalR Service instance **_ASRS1_**, for example `asrs1.service.signalr.net`
- - Select **Next: Configuration**
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-backends.png" alt-text="Screenshot of setting up the application gateway backend pool for the SignalR Service.":::
-
-* On the **Configuration** tab, select **Add a routing rule** in the **Routing rules** column:
- - **Rule name**: **_myRoutingRule_**
- - **Priority**: 1
- - On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener:
- - **Listener name**: Enter *myListener* for the name of the listener.
- - **Frontend IP**: Select **Public** to choose the public IP you created for the frontend.
- - **Protocol**: HTTP
- * We use the HTTP frontend protocol on Application Gateway in this article to simplify the demo and help you get started easier. But in reality, you may need to enable HTTPs and Customer Domain on it with production scenario.
- - Accept the default values for the other settings on the **Listener** tab
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-listener.png" alt-text="Screenshot of setting up the application gateway routing rule listener tab for the SignalR Service.":::
- - On the **Backend targets** tab, use the following values:
- * **Target type**: Backend pool
- * **Backend target**: select **signalr** we previously created
- * **Backend settings**: select **Add new** to add a new setting.
- * **Backend settings name**: *mySetting*
- * **Backend protocol**: **HTTPS**
- * **Use well known CA certificate**: **Yes**
- * **Override with new host name**: **Yes**
- * **Host name override**: **Pick host name from backend target**
- * Others keep the default values
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-backend.png" alt-text="Screenshot of setting up the application gateway backend setting for the SignalR Service.":::
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-backends.png" alt-text="Screenshot of creating backend targets for application gateway.":::
-
-* Review and create the **_AG1_**
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-review.png" alt-text="Screenshot of reviewing and creating the application gateway instance.":::
+
+- On the [Azure portal](https://portal.azure.com/), search for **Application Gateway** and **Create**.
+- On the **Basics** tab, use these values for the following application gateway settings:
+
+ - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service
+ - **Application gateway name**: **_AG1_**
+ - **Virtual network**, select **Create new**, and in the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets, one for the application gateway, and another for the backend servers.
+
+ - **Name**: Enter **_VN1_** for the name of the virtual network.
+    - **Subnets**: Update the **Subnets** grid with the following two subnets
+
+ | Subnet name | Address range | Note |
+ | -- | -- | -- |
+ | _myAGSubnet_ | (address range) | Subnet for the application gateway. The application gateway subnet can contain only application gateways. No other resources are allowed. |
+ | _myBackendSubnet_ | (another address range) | Subnet for the Azure SignalR instance. |
+
+ - Accept the default values for the other settings and then select **Next: Frontends**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/basics.png" alt-text="Screenshot of creating Application Gateway instance with Basics tab.":::
+
+- On the **Frontends** tab:
+
+ - **Frontend IP address type**: **Public**.
+ - Select **Add new** for the **Public IP address** and enter _myAGPublicIPAddress_ for the public IP address name, and then select **OK**.
+ - Select **Next: Backends**
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-frontends.png" alt-text="Screenshot of creating Application Gateway instance with Frontends tab.":::
+
+- On the **Backends** tab, select **Add a backend pool**:
+
+ - **Name**: Enter **_signalr_** for the SignalR Service resource backend pool.
+ - Backend targets **Target**: the **host name** of your SignalR Service instance **_ASRS1_**, for example `asrs1.service.signalr.net`
+ - Select **Next: Configuration**
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-backends.png" alt-text="Screenshot of setting up the application gateway backend pool for the SignalR Service.":::
+
+- On the **Configuration** tab, select **Add a routing rule** in the **Routing rules** column:
+
+ - **Rule name**: **_myRoutingRule_**
+ - **Priority**: 1
+ - On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener:
+ - **Listener name**: Enter _myListener_ for the name of the listener.
+ - **Frontend IP**: Select **Public** to choose the public IP you created for the frontend.
+ - **Protocol**: HTTP
+      - We use the HTTP frontend protocol on Application Gateway in this article to simplify the demo and help you get started. In a production scenario, you may need to enable HTTPS and a custom domain on it.
+ - Accept the default values for the other settings on the **Listener** tab
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-listener.png" alt-text="Screenshot of setting up the application gateway routing rule listener tab for the SignalR Service.":::
+ - On the **Backend targets** tab, use the following values:
+
+ - **Target type**: Backend pool
+ - **Backend target**: select **signalr** we previously created
+ - **Backend settings**: select **Add new** to add a new setting.
+
+ - **Backend settings name**: _mySetting_
+ - **Backend protocol**: **HTTPS**
+ - **Use well known CA certificate**: **Yes**
+ - **Override with new host name**: **Yes**
+ - **Host name override**: **Pick host name from backend target**
+ - Others keep the default values
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-backend.png" alt-text="Screenshot of setting up the application gateway backend setting for the SignalR Service.":::
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-backends.png" alt-text="Screenshot of creating backend targets for application gateway.":::
+
+- Review and create the **_AG1_**
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-review.png" alt-text="Screenshot of reviewing and creating the application gateway instance.":::
### Configure Application Gateway health probe
When **_AG1_** is created, go to **Health probes** tab under **Settings** sectio
### Quick test
-* Try with an invalid client request `https://asrs1.service.signalr.net/client` and it returns *400* with error message *'hub' query parameter is required.* It means the request arrived at the SignalR Service and did the request validation.
- ```bash
- curl -v https://asrs1.service.signalr.net/client
- ```
- returns
- ```
- < HTTP/1.1 400 Bad Request
- < ...
- <
- 'hub' query parameter is required.
- ```
-* Go to the Overview tab of **_AG1_**, and find out the Frontend public IP address
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/quick-test.png" alt-text="Screenshot of quick testing SignalR Service health endpoint through Application Gateway.":::
-
-* Visit the health endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it also returns *400* with error message *'hub' query parameter is required.* It means the request successfully went through Application Gateway to SignalR Service and did the request validation.
-
- ```bash
- curl -I http://<frontend-public-IP-address>/client
- ```
- returns
- ```
- < HTTP/1.1 400 Bad Request
- < ...
- <
- 'hub' query parameter is required.
- ```
+- Try an invalid client request `https://asrs1.service.signalr.net/client`; it returns _400_ with the error message _'hub' query parameter is required._ This means the request arrived at the SignalR Service, which performed request validation.
+ ```bash
+ curl -v https://asrs1.service.signalr.net/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
+- Go to the Overview tab of **_AG1_**, and find out the Frontend public IP address
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/quick-test.png" alt-text="Screenshot of quick testing SignalR Service health endpoint through Application Gateway.":::
+
+- Visit the health endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it also returns _400_ with the error message _'hub' query parameter is required._ This means the request successfully went through Application Gateway to SignalR Service, which performed request validation.
+
+ ```bash
+ curl -I http://<frontend-public-IP-address>/client
+ ```
+
+ returns
+
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
### Run chat through Application Gateway

Now the traffic can reach SignalR Service through the Application Gateway. You can use the Application Gateway public IP address or a custom domain name to access the resource. Let's use [this chat application](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/ChatRoom) as an example. Let's start with running it locally.
-* First let's get the connection string of **_ASRS1_**
- * On the **Connection strings** tab of **_ASRS1_**
- * **Client endpoint**: Enter the URL using frontend public IP address of **_AG1_**, for example `http://20.88.8.8`. It's a connection string generator when using reverse proxies, and the value isn't preserved when next time you come back to this tab. When value entered, the connection string appends a `ClientEndpoint` section.
- * Copy the Connection string
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/connection-string.png" alt-text="Screenshot of getting the connection string for SignalR Service with client endpoint.":::
+- First let's get the connection string of **_ASRS1_**
+
+ - On the **Connection strings** tab of **_ASRS1_**
+  - **Client endpoint**: Enter the URL using the frontend public IP address of **_AG1_**, for example `http://20.88.8.8`. This tab acts as a connection string generator for use with reverse proxies; the value isn't preserved the next time you come back to this tab. When you enter a value, the connection string gains a `ClientEndpoint` section.
+ - Copy the Connection string
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/connection-string.png" alt-text="Screenshot of getting the connection string for SignalR Service with client endpoint.":::
+
+- Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
+- Go to samples/Chatroom folder:
+- Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString.
-* Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples
-* Go to samples/Chatroom folder:
-* Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString.
+ ```bash
+ cd samples/Chatroom
+ dotnet restore
+  dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-connection-string-with-client-endpoint>"
+ dotnet run
+ ```
- ```bash
- cd samples/Chatroom
- dotnet restore
- dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-onnection-string-with-client-endpoint>"
- dotnet run
- ```
-* Open http://localhost:5000 from the browser and use F12 to view the network traces, you can see that the WebSocket connection is established through **_AG1_** 
+- Open http://localhost:5000 from the browser and use F12 to view the network traces; you can see that the WebSocket connection is established through **_AG1_**
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/chat-local-run.png" alt-text="Screenshot of running chat application locally with App Gateway and SignalR Service.":::
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/chat-local-run.png" alt-text="Screenshot of running chat application locally with App Gateway and SignalR Service.":::
## Secure SignalR Service
In this section, let's configure SignalR Service to deny all the traffic from pu
Let's configure SignalR Service to only allow private access. You can find more details in [use private endpoint for SignalR Service](howto-private-endpoints.md).
-* Go to the SignalR Service instance **_ASRS1_** in the portal.
-* Go the **Networking** tab:
- * On **Public access** tab: **Public network access** change to **Disabled** and **Save**, now you're no longer able to access SignalR Service from public network
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/disable-public-access.png" alt-text="Screenshot of disabling public access for SignalR Service.":::
-
- * On **Private access** tab, select **+ Private endpoint**:
- * On **Basics** tab:
- * **Name**: **_PE1_**
- * **Network Interface Name**: **_PE1-nic_**
- * **Region**: make sure to choose the same region as your Application Gateway
- * Select **Next: Resources**
- * On **Resources** tab
- * Keep default values
- * Select **Next: Virtual Network**
- * On **Virtual Network** tab
- * **Virtual network**: Select previously created **_VN1_**
- * **Subnet**: Select previously created **_VN1/myBackendSubnet_**
- * Others keep the default settings
- * Select **Next: DNS**
- * On **DNS** tab
- * **Integration with private DNS zone**: **Yes**
- * Review and create the private endpoint
-
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-private-endpoint.png" alt-text="Screenshot of setting up the private endpoint resource for the SignalR Service.":::
-
+- Go to the SignalR Service instance **_ASRS1_** in the portal.
+- Go to the **Networking** tab:
+
+  - On the **Public access** tab, change **Public network access** to **Disabled** and select **Save**. Now you're no longer able to access SignalR Service from the public network.
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/disable-public-access.png" alt-text="Screenshot of disabling public access for SignalR Service.":::
+
+ - On **Private access** tab, select **+ Private endpoint**:
+ - On **Basics** tab:
+ - **Name**: **_PE1_**
+ - **Network Interface Name**: **_PE1-nic_**
+ - **Region**: make sure to choose the same region as your Application Gateway
+ - Select **Next: Resources**
+ - On **Resources** tab
+ - Keep default values
+ - Select **Next: Virtual Network**
+ - On **Virtual Network** tab
+ - **Virtual network**: Select previously created **_VN1_**
+ - **Subnet**: Select previously created **_VN1/myBackendSubnet_**
+ - Others keep the default settings
+ - Select **Next: DNS**
+ - On **DNS** tab
+ - **Integration with private DNS zone**: **Yes**
+ - Review and create the private endpoint
+
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-private-endpoint.png" alt-text="Screenshot of setting up the private endpoint resource for the SignalR Service.":::
### Refresh Application Gateway backend pool

Since Application Gateway was set up before there was a private endpoint for it to use, we need to **refresh** the backend pool for it to look at the Private DNS Zone and figure out that it should route the traffic to the private endpoint instead of the public address. We do the **refresh** by setting the backend FQDN to some other value and then changing it back.

Go to the **Backend pools** tab for **_AG1_**, and select **signalr**:
-* Step1: change Target `asrs1.service.signalr.net` to some other value, for example, `x.service.signalr.net`, and select **Save**
-* Step2: change Target back to `asrs1.service.signalr.net`
+
+- Step 1: Change the Target `asrs1.service.signalr.net` to some other value, for example, `x.service.signalr.net`, and select **Save**
+- Step 2: Change the Target back to `asrs1.service.signalr.net`
### Quick test
-* Now let's visit `https://asrs1.service.signalr.net/client` again. With public access disabled, it returns *403* instead.
- ```bash
- curl -v https://asrs1.service.signalr.net/client
- ```
- returns
- ```
- < HTTP/1.1 403 Forbidden
-* Visit the endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it returns *400* with error message *'hub' query parameter is required*. It means the request successfully went through the Application Gateway to SignalR Service.
-
- ```bash
- curl -I http://<frontend-public-IP-address>/client
- ```
- returns
- ```
- < HTTP/1.1 400 Bad Request
- < ...
- <
- 'hub' query parameter is required.
- ```
+- Now let's visit `https://asrs1.service.signalr.net/client` again. With public access disabled, it returns _403_ instead.
+ ```bash
+ curl -v https://asrs1.service.signalr.net/client
+ ```
+ returns
+ ```
+ < HTTP/1.1 403 Forbidden
+ ```
+- Visit the endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it returns _400_ with the error message _'hub' query parameter is required_. This means the request successfully went through the Application Gateway to SignalR Service.
+
+ ```bash
+ curl -I http://<frontend-public-IP-address>/client
+ ```
+
+ returns
+
+ ```
+ < HTTP/1.1 400 Bad Request
+ < ...
+ <
+ 'hub' query parameter is required.
+ ```
Now if you run the Chat application locally again, you'll see the error message `Failed to connect to .... The server returned status code '403' when status code '101' was expected.` This is because public access is disabled, so server connections from localhost are no longer able to connect to the SignalR service. Let's deploy the Chat application into the same VNet as **_ASRS1_** so that the chat can talk to **_ASRS1_**.
-### Deploy the chat application to Azure
-* On the [Azure portal](https://portal.azure.com/), search for **App services** and **Create**.
-
-* On the **Basics** tab, use these values for the following application gateway settings:
- - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service
- - **Name**: **_WA1_**
- * **Publish**: **Code**
- * **Runtime stack**: **.NET 6 (LTS)**
- * **Operating System**: **Linux**
- * **Region**: Make sure it's the same as what you choose for SignalR Service
- * Select **Next: Docker**
-* On the **Networking** tab
- * **Enable network injection**: select **On**
- * **Virtual Network**: select **_VN1_** we previously created
- * **Enable VNet integration**: **On**
- * **Outbound subnet**: create a new subnet
- * Select **Review + create**
+### Deploy the chat application to Azure
+
+- On the [Azure portal](https://portal.azure.com/), search for **App services** and **Create**.
+
+- On the **Basics** tab, use these values for the following App Service settings:
+  - **Subscription**, **Resource group**, and **Region**: the same as what you choose for SignalR Service
+  - **Name**: **_WA1_**
+  - **Publish**: **Code**
+  - **Runtime stack**: **.NET 6 (LTS)**
+  - **Operating System**: **Linux**
+  - **Region**: Make sure it's the same as what you choose for SignalR Service
+  - Select **Next: Docker**
+- On the **Networking** tab
+ - **Enable network injection**: select **On**
+ - **Virtual Network**: select **_VN1_** we previously created
+ - **Enable VNet integration**: **On**
+ - **Outbound subnet**: create a new subnet
+ - Select **Review + create**
Now let's deploy our chat application to Azure. Below we use the Azure CLI to deploy the web app; you can also choose other deployment environments by following the [publish your web app section](/azure/app-service/quickstart-dotnetcore#publish-your-web-app).
cd publish
zip -r app.zip .

# use az CLI to deploy app.zip to our webapp
az login
-az account set -s <your-subscription-name-used-to-create-WA1>
-az webapp deployment source config-zip -n WA1 -g <resource-group-of-WA1> --src app.zip
+az account set -s <your-subscription-name-used-to-create-WA1>
+az webapp deployment source config-zip -n WA1 -g <resource-group-of-WA1> --src app.zip
```

Now the web app is deployed. Let's go to the portal for **_WA1_** and make the following updates:
-* On the **Configuration** tab:
- * New application settings:
- | Name | Value |
- | --| |
- |**WEBSITE_DNS_SERVER**| **168.63.129.16** |
- |**WEBSITE_VNET_ROUTE_ALL**| **1**|
+- On the **Configuration** tab:
+
+ - New application settings:
- * New connection string:
+ | Name | Value |
+ | -- | -- |
+ | **WEBSITE_DNS_SERVER** | **168.63.129.16** |
+ | **WEBSITE_VNET_ROUTE_ALL** | **1** |
- | Name | Value | Type|
- | --| ||
- |**Azure__SignalR__ConnectionString**| The copied connection string with ClientEndpoint value| select **Custom**|
+ - New connection string:
+  | Name | Value | Type |
+  | -- | -- | -- |
+  | `Azure__SignalR__ConnectionString` | The copied connection string with ClientEndpoint value | select **Custom** |
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-settings.png" alt-text="Screenshot of configuring web app connection string.":::
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-settings.png" alt-text="Screenshot of configuring web app connection string.":::
+- On the **TLS/SSL settings** tab:
-* On the **TLS/SSL settings** tab:
- * **HTTPS Only**: **Off**. To Simplify the demo, we used the HTTP frontend protocol on Application Gateway. Therefore, we need to turn off this option to avoid changing the HTTP URL to HTTPs automatically.
+  - **HTTPS Only**: **Off**. To simplify the demo, we used the HTTP frontend protocol on Application Gateway. Therefore, we need to turn off this option to avoid changing the HTTP URL to HTTPS automatically.
-* Go to the **Overview** tab and get the URL of **_WA1_**.
-* Get the URL, and replace scheme https with http, for example, `http://wa1.azurewebsites.net`, open the URL in the browser, now you can start chatting! Use F12 to open network traces, and you can see the SignalR connection is established through **_AG1_**.
- > [!NOTE]
- >
- > Sometimes you need to disable browser's auto https redirection and browser cache to prevent the URL from redirecting to HTTPS automatically.
+- Go to the **Overview** tab and get the URL of **_WA1_**.
+- Get the URL, replace the https scheme with http, for example, `http://wa1.azurewebsites.net`, and open the URL in the browser. Now you can start chatting! Use F12 to open network traces, and you can see the SignalR connection is established through **_AG1_**.
+ > [!NOTE]
+ >
+ > Sometimes you need to disable browser's auto https redirection and browser cache to prevent the URL from redirecting to HTTPS automatically.
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-run.png" alt-text="Screenshot of running chat application in Azure with App Gateway and SignalR Service.":::
+ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-run.png" alt-text="Screenshot of running chat application in Azure with App Gateway and SignalR Service.":::
## Next steps
azure-signalr Signalr Reference Data Plane Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-reference-data-plane-rest-api.md
You can find a complete sample of using SignalR Service with Azure Functions at
The following table shows all supported versions of REST API. You can also find the swagger file for each version of REST API.
-API Version | Status | Port | Doc | Spec
-||||
-`20220601` | Latest | Standard | [Doc](./swagger/signalr-data-plane-rest-v20220601.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/V20220601.json)
-`1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json)
-`1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json)
+| API Version | Status | Port | Doc | Spec |
+| -- | -- | -- | -- | -- |
+| `20220601` | Latest | Standard | [Doc](./swagger/signalr-data-plane-rest-v20220601.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/V20220601.json) |
+| `1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json) |
+| `1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json) |
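To make a call concrete, here's a minimal sketch of invoking the broadcast API directly with access-key authentication, where the signed JWT's audience must match the request URL (the resource name, hub, and key are placeholders, and the `System.IdentityModel.Tokens.Jwt` package is assumed):

```csharp
using System.IdentityModel.Tokens.Jwt;
using System.Net.Http.Headers;
using System.Text;
using Microsoft.IdentityModel.Tokens;

// Broadcast a message to all clients connected to the "chat" hub.
var url = "https://<SIGNALR_RESOURCE_NAME>.service.signalr.net/api/v1/hubs/chat";

// Sign a short-lived JWT with the access key; the audience is the request URL.
var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("<access-key>"));
var jwt = new JwtSecurityTokenHandler().WriteToken(new JwtSecurityToken(
    audience: url,
    expires: DateTime.UtcNow.AddMinutes(5),
    signingCredentials: new SigningCredentials(key, SecurityAlgorithms.HmacSha256)));

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", jwt);

var body = new StringContent(
    "{\"target\":\"newMessage\",\"arguments\":[\"Hello from the REST API\"]}",
    Encoding.UTF8, "application/json");
var response = await http.PostAsync(url, body);
Console.WriteLine(response.StatusCode); // 202 Accepted on success
```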
The available APIs are listed as follows.
-| API | Path |
-| - | - |
-| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` |
-| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` |
-| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` |
-| [Check if the connection with the given connectionId exists.](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` |
-| [Close the client connection.](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` |
-| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` |
-| [Check if there are any client connections inside the given group.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` |
-| [Check if there are any client connections connected for the given user.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` |
-| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
-| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
-| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
-| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
-| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
-| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` |
+| API | Path |
+| --- | --- |
+| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` |
+| [Broadcast a message to all clients belonging to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` |
+| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Check if the connection with the given connectionId exists.](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Close the client connection.](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` |
+| [Check if there are any client connections inside the given group.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` |
+| [Check if there are any client connections connected for the given user.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` |
+| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
+| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
+| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` |
## Using REST API
Use the `AccessKey` in Azure SignalR Service instance's connection string to sig
The following claims must be included in the JWT token.
-Claim Type | Is Required | Description
-||
-`aud` | true | Needs to be the same as your HTTP request URL, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`.
-`exp` | true | Epoch time when this token expires.
+| Claim Type | Is Required | Description |
+| - | -- | -- |
+| `aud` | true | Needs to be the same as your HTTP request URL, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`. |
+| `exp` | true | Epoch time when this token expires. |
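For illustration, here's a minimal Node.js sketch that signs such a token with the `AccessKey` and broadcasts a message. The endpoint, the hub name `myhub`, and the `jsonwebtoken` npm package are assumptions for the example, not part of the service contract:

```javascript
// Sign a JWT with the AccessKey and broadcast a message to a hub (sketch).
const jwt = require("jsonwebtoken");

const endpoint = "https://example.service.signalr.net"; // placeholder endpoint
const url = `${endpoint}/api/v1/hubs/myhub`;

// `aud` must match the request URL; `expiresIn` produces the `exp` claim.
const token = jwt.sign({}, process.env.SIGNALR_ACCESS_KEY, {
  audience: url,
  expiresIn: "1h",
});

fetch(url, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ target: "newMessage", arguments: ["Hello world"] }),
}).then((res) => console.log(res.status)); // expect 202 Accepted
```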
-### Authenticate via Azure Active Directory Token (Azure AD Token)
+### Authenticate via Microsoft Entra token
-Similar to authenticating using `AccessKey`, when authenticating using Azure AD Token, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
+Similar to authenticating using `AccessKey`, when authenticating using a Microsoft Entra token, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
-The difference is, in this scenario, the JWT Token is generated by Azure Active Directory. For more information, see [Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
+The difference is that, in this scenario, the JWT token is generated by Microsoft Entra ID. For more information, see [Learn how to generate Microsoft Entra tokens](../active-directory/develop/reference-v2-libraries.md).
-You could also use **Role Based Access Control (RBAC)** to authorize the request from your client/server to SignalR Service. For more information, see [Authorize access with Azure Active Directory for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md)
+You can also use **role-based access control (RBAC)** to authorize the request from your client/server to SignalR Service. For more information, see [Authorize access with Microsoft Entra ID for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md).
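The request itself looks the same as in the `AccessKey` case; only the token source changes. Here's a sketch using the `@azure/identity` package (an assumed dependency); the scope shown is an assumption based on the SignalR resource audience:

```javascript
// Acquire a Microsoft Entra token and use it as the bearer token on a REST call.
const { DefaultAzureCredential } = require("@azure/identity");

async function broadcast() {
  const credential = new DefaultAzureCredential();
  const { token } = await credential.getToken(
    "https://signalr.azure.com/.default" // assumed SignalR data-plane scope
  );

  const res = await fetch("https://example.service.signalr.net/api/v1/hubs/myhub", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ target: "newMessage", arguments: ["Hello world"] }),
  });
  console.log(res.status);
}
```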
### Implement Negotiate Endpoint
A typical negotiation response looks as follows:
```json
{
- "url":"https://<service_name>.service.signalr.net/client/?hub=<hub_name>",
- "accessToken":"<a typical JWT token>"
+ "url": "https://<service_name>.service.signalr.net/client/?hub=<hub_name>",
+ "accessToken": "<a typical JWT token>"
}
```
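On the client side, this response is typically consumed by a SignalR client. Here's a sketch assuming the `@microsoft/signalr` package and a hypothetical negotiate endpoint you host:

```javascript
// Fetch connection info from your negotiate endpoint, then connect (sketch).
const signalR = require("@microsoft/signalr");

async function connect() {
  const res = await fetch("https://myapp.example.com/negotiate", { method: "POST" });
  const { url, accessToken } = await res.json();

  const connection = new signalR.HubConnectionBuilder()
    .withUrl(url, { accessTokenFactory: () => accessToken })
    .build();

  connection.on("newMessage", (message) => console.log(message));
  await connection.start();
}
```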
Then SignalR Service uses the value of `nameid` claim as the user ID of each cli
You can find a complete console app that demonstrates how to manually build a REST API HTTP request to SignalR Service [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Serverless).
-You can also use [Microsoft.Azure.SignalR.Management](<https://www.nuget.org/packages/Microsoft.Azure.SignalR.Management>) to publish messages to SignalR Service using the similar interfaces of `IHubContext`. Samples can be found [here](<https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Management>). For more information, see [How to use Management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md).
-
+You can also use [Microsoft.Azure.SignalR.Management](https://www.nuget.org/packages/Microsoft.Azure.SignalR.Management) to publish messages to SignalR Service using interfaces similar to `IHubContext`. Samples can be found [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Management). For more information, see [How to use Management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md).
## Limitation Currently, the REST API has the following limitations:
-* Header size is a maximum of 16 KB.
-* Body size is a maximum of 1 MB.
+- Header size is a maximum of 16 KB.
+- Body size is a maximum of 1 MB.
If you want to send messages larger than 1 MB, use the Management SDK with `persistent` mode.
azure-signalr Signalr Tutorial Authenticate Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-authenticate-azure-functions.md
ms.devlang: javascript + # Tutorial: Azure SignalR Service authentication with Azure Functions A step-by-step tutorial to build a chat room with authentication and private messaging using Azure Functions, App Service Authentication, and SignalR Service.
A step by step tutorial to build a chat room with authentication and private mes
### Technologies used
-* [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Backend API for authenticating users and sending chat messages
-* [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Broadcast new messages to connected chat clients
-* [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Required by Azure Functions
+- [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Backend API for authenticating users and sending chat messages
+- [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Broadcast new messages to connected chat clients
+- [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Required by Azure Functions
### Prerequisites
-* An Azure account with an active subscription.
- * If you don't have one, you can [create one for free](https://azure.microsoft.com/free/).
-* [Node.js](https://nodejs.org/en/download/) (Version 18.x)
-* [Azure Functions Core Tools](../azure-functions/functions-run-local.md?#install-the-azure-functions-core-tools) (Version 4)
+- An Azure account with an active subscription.
+ - If you don't have one, you can [create one for free](https://azure.microsoft.com/free/).
+- [Node.js](https://nodejs.org/en/download/) (Version 18.x)
+- [Azure Functions Core Tools](../azure-functions/functions-run-local.md?#install-the-azure-functions-core-tools) (Version 4)
[Having issues? Let us know.](https://aka.ms/asrs/qsauth) ## Create essential resources on Azure+ ### Create an Azure SignalR service resource
-Your application will access a SignalR Service instance. Use the following steps to create a SignalR Service instance using the Azure portal.
+Your application will access a SignalR Service instance. Use the following steps to create a SignalR Service instance using the Azure portal.
1. Select on the **Create a resource** (**+**) button for creating a new Azure resource.
Your application will access a SignalR Service instance. Use the following step
1. Enter the following information.
- | Name | Value |
- |||
- | **Resource group** | Create a new resource group with a unique name |
- | **Resource name** | A unique name for the SignalR Service instance |
- | **Region** | Select a region close to you |
- | **Pricing Tier** | Free |
- | **Service mode** | Serverless |
+ | Name | Value |
+ | --- | --- |
+ | **Resource group** | Create a new resource group with a unique name |
+ | **Resource name** | A unique name for the SignalR Service instance |
+ | **Region** | Select a region close to you |
+ | **Pricing Tier** | Free |
+ | **Service mode** | Serverless |
1. Select **Review + Create**. 1. Select **Create**. - [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ### Create an Azure Function App and an Azure Storage account
Your application will access a SignalR Service instance. Use the following step
1. Enter the following information.
- | Name | Value |
- |||
- | **Resource group** | Use the same resource group with your SignalR Service instance |
- | **Function App name** | A unique name for the Function app instance |
- | **Runtime stack** | Node.js |
- | **Region** | Select a region close to you |
+ | Name | Value |
+ | --- | --- |
+ | **Resource group** | Use the same resource group with your SignalR Service instance |
+ | **Function App name** | A unique name for the Function app instance |
+ | **Runtime stack** | Node.js |
+ | **Region** | Select a region close to you |
1. By default, a new Azure Storage account will also be created in the same resource group together with your function app. If you want to use another storage account in the function app, switch to the **Hosting** tab to choose an account. 1. Select **Review + Create**, then select **Create**. ## Create an Azure Functions project locally+ ### Initialize a function app
-1. From a command line, create a root folder for your project and change to the folder.
+1. From a command line, create a root folder for your project and change to the folder.
1. Execute the following command in your terminal to create a new JavaScript Functions project.
-```
+
+```bash
func init --worker-runtime node --language javascript --name my-app
```
-By default, the generated project includes a *host.json* file containing the extension bundles which include the SignalR extension. For more information about extension bundles, see [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md#extension-bundles).
+
+By default, the generated project includes a _host.json_ file containing the extension bundles, which include the SignalR extension. For more information about extension bundles, see [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md#extension-bundles).
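A generated _host.json_ typically looks similar to the following; the exact bundle version range varies with the Core Tools release:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.*, 4.0.0)"
  }
}
```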
### Configure application settings
-When running and debugging the Azure Functions runtime locally, application settings are read by the function app from *local.settings.json*. Update this file with the connection strings of the SignalR Service instance and the storage account that you created earlier.
+When running and debugging the Azure Functions runtime locally, application settings are read by the function app from _local.settings.json_. Update this file with the connection strings of the SignalR Service instance and the storage account that you created earlier.
-1. Replace the content of *local.settings.json* with the following code:
+1. Replace the content of _local.settings.json_ with the following code:
- ```json
- {
- "IsEncrypted": false,
- "Values": {
- "FUNCTIONS_WORKER_RUNTIME": "node",
- "AzureWebJobsStorage": "<your-storage-account-connection-string>",
- "AzureSignalRConnectionString": "<your-Azure-SignalR-connection-string>"
- }
- }
- ```
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+ "FUNCTIONS_WORKER_RUNTIME": "node",
+ "AzureWebJobsStorage": "<your-storage-account-connection-string>",
+ "AzureSignalRConnectionString": "<your-Azure-SignalR-connection-string>"
+ }
+ }
+ ```
- * Enter the Azure SignalR Service connection string into the `AzureSignalRConnectionString` setting.
+ - Enter the Azure SignalR Service connection string into the `AzureSignalRConnectionString` setting.
- Navigate to your SignalR Service in the Azure portal. In the **Settings** section, locate the **Keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string.
+ Navigate to your SignalR Service in the Azure portal. In the **Settings** section, locate the **Keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string.
- * Enter the storage account connection string into the `AzureWebJobsStorage` setting.
+ - Enter the storage account connection string into the `AzureWebJobsStorage` setting.
Navigate to your storage account in the Azure portal. In the **Security + networking** section, locate the **Access keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string. - [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ### Create a function to authenticate users to SignalR Service
When the chat app first opens in the browser, it requires valid connection crede
> This function must be named `negotiate` as the SignalR client requires an endpoint that ends in `/negotiate`. 1. From the root project folder, create the `negotiate` function from a built-in template with the following command.
- ```bash
- func new --template "SignalR negotiate HTTP trigger" --name negotiate
- ```
-1. Open *negotiate/function.json* to view the function binding configuration.
+ ```bash
+ func new --template "SignalR negotiate HTTP trigger" --name negotiate
+ ```
+
+1. Open _negotiate/function.json_ to view the function binding configuration.
The function contains an HTTP trigger binding to receive requests from SignalR clients and a SignalR input binding to generate valid credentials for a client to connect to an Azure SignalR Service hub named `default`.
- ```json
- {
- "disabled": false,
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "methods": ["post"],
- "name": "req",
- "route": "negotiate"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "default",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "in"
- }
- ]
- }
- ```
-
- There's no `userId` property in the `signalRConnectionInfo` binding for local development, but you'll add it later to set the user name of a SignalR connection when you deploy the function app to Azure.
-
-1. Close the *negotiate/function.json* file.
----
-1. Open *negotiate/index.js* to view the body of the function.
-
- ```javascript
- module.exports = async function (context, req, connectionInfo) {
- context.res.body = connectionInfo;
- };
- ```
-
- This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client uses this information to connect to the SignalR Service instance.
+ ```json
+ {
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": ["post"],
+ "name": "req",
+ "route": "negotiate"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "default",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+
+ There's no `userId` property in the `signalRConnectionInfo` binding for local development, but you'll add it later to set the user name of a SignalR connection when you deploy the function app to Azure.
+
+1. Close the _negotiate/function.json_ file.
+
+1. Open _negotiate/index.js_ to view the body of the function.
+
+ ```javascript
+ module.exports = async function (context, req, connectionInfo) {
+ context.res.body = connectionInfo;
+ };
+ ```
+
+ This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client uses this information to connect to the SignalR Service instance.
[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
When the chat app first opens in the browser, it requires valid connection crede
The web app also requires an HTTP API to send chat messages. You'll create an HTTP triggered function named `sendMessage` that sends messages to all connected clients using SignalR Service. 1. From the root project folder, create an HTTP trigger function named `sendMessage` from the template with the command:
- ```bash
- func new --name sendMessage --template "Http trigger"
- ```
-
-1. To configure bindings for the function, replace the content of *sendMessage/function.json* with the following code:
- ```json
- {
- "disabled": false,
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "route": "messages",
- "methods": ["post"]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "signalR",
- "name": "$return",
- "hubName": "default",
- "direction": "out"
- }
- ]
- }
- ```
- Two changes are made to the original file:
- * Changes the route to `messages` and restricts the HTTP trigger to the `POST` HTTP method.
- * Adds a SignalR Service output binding that sends a message returned by the function to all clients connected to a SignalR Service hub named `default`.
-
-1. Replace the content of *sendMessage/index.js* with the following code:
-
- ```javascript
- module.exports = async function (context, req) {
- const message = req.body;
- message.sender = req.headers && req.headers['x-ms-client-principal-name'] || '';
-
- let recipientUserId = '';
- if (message.recipient) {
- recipientUserId = message.recipient;
- message.isPrivate = true;
- }
-
- return {
- 'userId': recipientUserId,
- 'target': 'newMessage',
- 'arguments': [ message ]
- };
- };
- ```
-
- This function takes the body from the HTTP request and sends it to clients connected to SignalR Service, invoking a function named `newMessage` on each client.
-
- The function can read the sender's identity and can accept a `recipient` value in the message body to allow you to send a message privately to a single user. You'll use these functionalities later in the tutorial.
+
+ ```bash
+ func new --name sendMessage --template "Http trigger"
+ ```
+
+1. To configure bindings for the function, replace the content of _sendMessage/function.json_ with the following code:
+
+ ```json
+ {
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "route": "messages",
+ "methods": ["post"]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "signalR",
+ "name": "$return",
+ "hubName": "default",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
+
+ Two changes are made to the original file:
+
+ - Changes the route to `messages` and restricts the HTTP trigger to the `POST` HTTP method.
+ - Adds a SignalR Service output binding that sends a message returned by the function to all clients connected to a SignalR Service hub named `default`.
+
+1. Replace the content of _sendMessage/index.js_ with the following code:
+
+ ```javascript
+ module.exports = async function (context, req) {
+ const message = req.body;
+ message.sender =
+ (req.headers && req.headers["x-ms-client-principal-name"]) || "";
+
+ let recipientUserId = "";
+ if (message.recipient) {
+ recipientUserId = message.recipient;
+ message.isPrivate = true;
+ }
+
+ return {
+ userId: recipientUserId,
+ target: "newMessage",
+ arguments: [message],
+ };
+ };
+ ```
+
+ This function takes the body from the HTTP request and sends it to clients connected to SignalR Service, invoking a function named `newMessage` on each client.
+
+ The function can read the sender's identity and can accept a `recipient` value in the message body to allow you to send a message privately to a single user. You'll use these capabilities later in the tutorial; a sample client call is sketched after this list.
1. Save the file.
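For reference, a message can be posted to this function with a plain HTTP call. Here's a sketch against the local Core Tools endpoint; the `text` field name is illustrative, as the actual message shape is defined by the sample's _index.html_:

```javascript
// Post a chat message to the sendMessage function running locally (sketch).
fetch("http://localhost:7071/api/messages", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    text: "hello", // illustrative field; the sample UI defines the real shape
    recipient: "alice@contoso.com", // omit to broadcast to all clients
  }),
}).then((res) => console.log(res.status));
```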
The web app also requires an HTTP API to send chat messages. You'll create an HT
The chat application's UI is a simple single-page application (SPA) created with the Vue JavaScript framework using [ASP.NET Core SignalR JavaScript client](/aspnet/core/signalr/javascript-client). For simplicity, the function app hosts the web page. In a production environment, you can use [Static Web Apps](https://azure.microsoft.com/products/app-service/static) to host the web page.
-1. Create a new folder named *content* in the root directory of your function project.
-1. In the *content* folder, create a new file named *index.html*.
+1. Create a new folder named _content_ in the root directory of your function project.
+1. In the _content_ folder, create a new file named _index.html_.
1. Copy and paste the content of [index.html](https://github.com/aspnet/AzureSignalR-samples/blob/da0aca70f490f3d8f4c220d0c88466b6048ebf65/samples/ServerlessChatWithAuth/content/index.html) to your file. Save the file. 1. From the root project folder, create an HTTP trigger function named `index` from the template with the command:
- ```bash
- func new --name index --template "Http trigger"
- ```
+
+ ```bash
+ func new --name index --template "Http trigger"
+ ```
1. Modify the content of `index/index.js` to the following:
- ```js
- const fs = require('fs');
-
- module.exports = async function (context, req) {
- const fileContent = fs.readFileSync('content/index.html', 'utf8');
-
- context.res = {
- // status: 200, /* Defaults to 200 */
- body: fileContent,
- headers: {
- 'Content-Type': 'text/html'
- },
- };
- }
- ```
- The function reads the static web page and returns it to the user.
-
-1. Open *index/function.json*, change the `authLevel` of the bindings to `anonymous`. Now the whole file looks like this:
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": ["get", "post"]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- }
- ]
- }
- ```
+
+ ```js
+ const fs = require("fs");
+
+ module.exports = async function (context, req) {
+ const fileContent = fs.readFileSync("content/index.html", "utf8");
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ body: fileContent,
+ headers: {
+ "Content-Type": "text/html",
+ },
+ };
+ };
+ ```
+
+ The function reads the static web page and returns it to the user.
+
+1. Open _index/function.json_, change the `authLevel` of the bindings to `anonymous`. Now the whole file looks like this:
+
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": ["get", "post"]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+ }
+ ```
1. Now you can test your app locally. Start the function app with the command:
- ```bash
- func start
- ```
+
+ ```bash
+ func start
+ ```
1. Open **http://localhost:7071/api/index** in your web browser. You should be able to see a web page as follows:
- :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/local-chat-client-ui.png" alt-text="Screenshot of local chat client web user interface.":::
+ :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/local-chat-client-ui.png" alt-text="Screenshot of local chat client web user interface.":::
1. Enter a message in the chat box and press enter. The message is displayed on the web page. Because the user name of the SignalR client isn't set, we send all messages as "anonymous". - [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ## Deploy to Azure and enable authentication
You have been running the function app and chat application locally. You'll now
So far, the chat app works anonymously. In Azure, you'll use [App Service Authentication](../app-service/overview-authentication-authorization.md) to authenticate the user. The user ID or username of the authenticated user is passed to the `SignalRConnectionInfo` binding to generate connection information authenticated as the user.
-1. Open *negotiate/function.json*.
+1. Open _negotiate/function.json_.
1. Add a `userId` property to the `SignalRConnectionInfo` binding with the value `{headers.x-ms-client-principal-name}`. This value is a [binding expression](../azure-functions/functions-triggers-bindings.md) that sets the user name of the SignalR client to the name of the authenticated user. The binding should now look like this.
- ```json
- {
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "userId": "{headers.x-ms-client-principal-name}",
- "hubName": "default",
- "direction": "in"
- }
- ```
+ ```json
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "userId": "{headers.x-ms-client-principal-name}",
+ "hubName": "default",
+ "direction": "in"
+ }
+ ```
1. Save the file.

### Deploy function app to Azure

Deploy the function app to Azure with the following command:

```bash
func azure functionapp publish <your-function-app-name> --publish-local-settings
```
-The `--publish-local-settings` option publishes your local settings from the *local.settings.json* file to Azure, so you don't need to configure them in Azure again.
-
+The `--publish-local-settings` option publishes your local settings from the _local.settings.json_ file to Azure, so you don't need to configure them in Azure again.
### Enable App Service Authentication
-Azure Functions supports authentication with Azure Active Directory, Facebook, Twitter, Microsoft account, and Google. You will use **Microsoft** as the identity provider for this tutorial.
+Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. You will use **Microsoft** as the identity provider for this tutorial.
1. Go to the resource page of your function app on Azure portal. 1. Select **Settings** -> **Authentication**.
-1. Select **Add identity provider**.
- :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/function-app-authentication.png" alt-text="Screenshot of the Function App Authentication page.":::
+1. Select **Add identity provider**.
+ :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/function-app-authentication.png" alt-text="Screenshot of the Function App Authentication page.":::
1. Select **Microsoft** from the **Identity provider** list.
- :::image type="content" source="media/signalr-tutorial-authenticate-azure-functions/function-app-select-identity-provider.png" alt-text="Screenshot of 'Add an identity provider' page.":::
+ :::image type="content" source="media/signalr-tutorial-authenticate-azure-functions/function-app-select-identity-provider.png" alt-text="Screenshot of 'Add an identity provider' page.":::
- Azure Functions supports authentication with Azure Active Directory, Facebook, Twitter, Microsoft account, and Google. For more information about the supported identity providers, see the following articles:
+ Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. For more information about the supported identity providers, see the following articles:
- - [Azure Active Directory](../app-service/configure-authentication-provider-aad.md)
- - [Facebook](../app-service/configure-authentication-provider-facebook.md)
- - [Twitter](../app-service/configure-authentication-provider-twitter.md)
- - [Microsoft account](../app-service/configure-authentication-provider-microsoft.md)
- - [Google](../app-service/configure-authentication-provider-google.md)
+ - [Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md)
+ - [Facebook](../app-service/configure-authentication-provider-facebook.md)
+ - [Twitter](../app-service/configure-authentication-provider-twitter.md)
+ - [Microsoft account](../app-service/configure-authentication-provider-microsoft.md)
+ - [Google](../app-service/configure-authentication-provider-google.md)
1. Select **Add** to complete the settings. An app registration will be created, which associates your identity provider with your function app.
Congratulations! You've deployed a real-time, serverless chat app!
To clean up the resources created in this tutorial, delete the resource group using the Azure portal.
->[!CAUTION]
+> [!CAUTION]
> Deleting the resource group deletes all resources contained within it. If the resource group contains resources outside the scope of this tutorial, they will also be deleted. [Having issues? Let us know.](https://aka.ms/asrs/qsauth)
To clean up the resources created in this tutorial, delete the resource group us
In this tutorial, you learned how to use Azure Functions with Azure SignalR Service. Read more about building real-time serverless applications with SignalR Service bindings for Azure Functions.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Real-time apps with Azure SignalR Service and Azure Functions](signalr-concept-azure-functions.md) [Having issues? Let us know.](https://aka.ms/asrs/qsauth)
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
Title: Deploy vSAN stretched clusters
description: Learn how to deploy vSAN stretched clusters. Previously updated : 06/24/2023 Last updated : 08/16/2023
It should be noted that these types of failures, although rare, fall outside the
Azure VMware Solution stretched clusters are available in the following regions: - UK South (on AV36) -- West Europe (on AV36)
+- West Europe (on AV36 and AV36P)
- Germany West Central (on AV36) - Australia East (on AV36P)
No. A stretched cluster is created between two availability zones, while the thi
### What are the limitations I should be aware of? - Once a private cloud has been created with a stretched cluster, it can't be changed to a standard cluster private cloud. Similarly, a standard cluster private cloud can't be changed to a stretched cluster private cloud after creation.-- Scale out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment.
+- Scale-out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment. For more information, see [Azure subscription and service limits, quotas, and constraints](https://learn.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-vmware-solution-limits).
- Customer workload VMs are restarted with a medium vSphere HA priority. Management VMs have the highest restart priority. - The solution relies on vSphere HA and vSAN for restarts and replication. Recovery time objective (RTO) is determined by the amount of time it takes vSphere HA to restart a VM on the surviving AZ after the failure of a single AZ. - Currently not supported in a stretched cluster environment:
azure-vmware Ecosystem External Storage Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-external-storage-solutions.md
+
+ Title: External storage solutions for Azure VMware Solution (preview)
+description: Learn about external storage solutions for Azure VMware Solution private cloud.
++++ Last updated : 08/07/2023
+
+
+# External storage solutions (preview)
+
+> [!NOTE]
+> By using Pure Cloud Block Store, you agree to the following [Microsoft supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It is advised NOT to run production workloads with preview features.
+
+## External storage solutions for Azure VMware Solution (preview)
+
+Azure VMware Solution is a Hyperconverged Infrastructure (HCI) service that offers VMware vSAN as the primary storage option. However, a significant requirement with on-premises VMware deployments is external storage, especially block storage. Providing the same consistent external block storage architecture in the cloud is crucial for some customers, and some workloads can't be migrated or deployed to the cloud without it. Because a key principle of Azure VMware Solution is to enable customers to continue to use their investments and their favorite VMware solutions running on Azure, we engaged storage providers with similar goals.
+
+Pure Cloud Block Store, offered by Pure Storage, is one such solution. It helps bridge the gap by allowing customers to provision external block storage as needed to make full use of an Azure VMware Solution deployment without the need to scale out compute resources, while helping customers migrate their on-premises workloads to Azure. Pure Cloud Block Store is a 100% software-delivered product running entirely on native Azure infrastructure that brings all the relevant Purity features and capabilities to Azure.
+
+## Onboarding and support
+
+During preview, Pure Storage manages onboarding of Pure Cloud Block Store for Azure VMware Solution. You can join the preview by emailing [avs@purestorage.com](mailto:avs@purestorage.com). Because Pure Cloud Block Store is a customer-deployed and customer-managed solution, reach out to Pure Storage for customer support.
+
+For more information, see the following resources:
+
+- [Azure VMware Solution + CBS Implementation Guide](https://support.purestorage.com/Pure_Cloud_Block_Store/Azure_VMware_Solution_and_Cloud_Block_Store_Implementation_Guide)
+- [CBS Deployment Guide](https://support.purestorage.com/Pure_Cloud_Block_Store/Pure_Cloud_Block_Store_on_Azure_Implementation_Guide)
+- [Troubleshooting Guide](https://support.purestorage.com/Pure_Cloud_Block_Store/Pure_Cloud_Block_Store_on_Azure_Implementation_Guide)
+- [Videos](https://support.purestorage.com/Pure_Cloud_Block_Store/Azure_VMware_Solution_and_Cloud_Block_Store_Video_Demos)
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
Title: Enable Public IP on the NSX-T Data Center Edge for Azure VMware Solution
description: This article shows how to enable internet access for your Azure VMware Solution. Previously updated : 7/6/2023 Last updated : 8/18/2023
azure-vmware Enable Vmware Cds With Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-vmware-cds-with-azure.md
Title: Enable VMware Cloud Director service with Azure VMware Solution (Public Preview)
+ Title: Enable VMware Cloud Director service with Azure VMware Solution
description: This article explains how to use Azure VMware Solution to enable enterprise customers to use Azure VMware Solution for private clouds underlying resources for virtual datacenters. Last updated 08/30/2022
-# Enable VMware Cloud Director service with Azure VMware Solution (Preview)
+# Enable VMware Cloud Director service with Azure VMware Solution
[VMware Cloud Director service (CDs)](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/getting-started-with-vmware-cloud-director-service/GUID-149EF3CD-700A-4B9F-B58B-8EA5776A7A92.html) with Azure VMware Solution enables enterprise customers to use APIs or the Cloud Director services portal to self-service provision and manage virtual datacenters through multi-tenancy with reduced time and complexity.
VMware Cloud Director Availability can be used to migrate VMware Cloud Director
For more information about VMware Cloud Director Availability, see [VMware Cloud Director Availability | Disaster Recovery & Migration](https://www.vmware.com/products/cloud-director-availability.html) ## FAQs
-**Question**: What are the supported Azure regions for the VMware Cloud Director service?
-**Answer**: This offering is supported in all Azure regions where Azure VMware Solution is available except for Brazil South and South Africa. Ensure that the region you wish to connect to VMware Cloud Director service is within a 150-milliseconds round trip time for latency with VMware Cloud Director service.
+### What are the supported Azure regions for the VMware Cloud Director service?
-**Question**: How do I configure VMware Cloud Director service on Microsoft Azure VMware Solutions?
+This offering is supported in all Azure regions where Azure VMware Solution is available except for Brazil South and South Africa. Ensure that the region you wish to connect to VMware Cloud Director service is within a 150-milliseconds round trip time for latency with VMware Cloud Director service.
+
+### How do I configure VMware Cloud Director service on Microsoft Azure VMware Solutions?
+
+[Learn about how to configure CDs on Azure VMware Solutions](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/using-vmware-cloud-director-service/GUID-602DE9DD-E7F6-4114-BD89-347F9720A831.html)
+
+### How is VMware Cloud Director service supported?
+
+VMware Cloud Director service (CDs) is a VMware-owned and VMware-supported product connected to Azure VMware Solution. For any support queries on CDs, contact VMware support for assistance. Both VMware and Microsoft support teams collaborate as necessary to address and resolve Cloud Director service issues within Azure VMware Solution.
-**Answer** [Learn about how to configure CDs on Azure VMware Solutions](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/using-vmware-cloud-director-service/GUID-602DE9DD-E7F6-4114-BD89-347F9720A831.html)
## Next steps [VMware Cloud Director Service Documentation](https://docs.vmware.com/en/VMware-Cloud-Director-service/index.html) [Migration to Azure VMware Solutions with Cloud Director service](https://cloudsolutions.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/migration-to-azure-vmware-solution-with-cloud-director-service.pdf)++
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
The following table provides a detailed list of roles and responsibilities betwe
| -- | - | | Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Backup and restore VMware vCenter Server</li><li>Backup and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX | | Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>Additional Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises VMware HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, VNET, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |
-| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI |
+| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solutions/products:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Multitenancy - VMware Cloud Director service (CDs), VMware Cloud Director Availability (VCDA)</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI |
## Next steps
The next step is to learn key [private cloud and cluster concepts](concepts-priv
<!-- LINKS - external --> [concepts-private-clouds-clusters]: ./concepts-private-clouds-clusters.md--
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/rotate-cloudadmin-credentials.md
description: Learn how to rotate the vCenter Server credentials for your Azure V
Previously updated : 12/22/2022
-#Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter Server CloudAdmin credentials.
Last updated : 8/16/2023
+# Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter Server CloudAdmin credentials.
# Rotate the cloudadmin credentials for Azure VMware Solution
->[!IMPORTANT]
->Currently, rotating your NSX-T Manager *cloudadmin* credentials isn't supported. To rotate your NSX-T Manager password, submit a [support request](https://rc.portal.azure.com/#create/Microsoft.Support). This process might impact running HCX services.
-In this article, you'll rotate the cloudadmin credentials (vCenter Server *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time.
+In this article, you'll rotate the cloudadmin credentials (vCenter Server and NSX-T *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time.
>[!CAUTION]
->If you use your cloudadmin credentials to connect services to vCenter Server in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
+>If you use your cloudadmin credentials to connect services to vCenter Server or NSX-T in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
## Prerequisites
-Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning.
+Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* or to NSX-T as *cloudadmin* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning.
One way to determine which services authenticate to vCenter Server with the cloudadmin user is to inspect vSphere events using the vSphere Client for your private cloud. After you identify such services, and before rotating the password, you must stop these services. Otherwise, the services won't work after you rotate the password. You'll also experience temporary locks on your vCenter Server CloudAdmin account, as these services continuously attempt to authenticate using a cached version of the old credentials.
-Instead of using the cloudadmin user to connect services to vCenter Server, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
+Instead of using the cloudadmin user to connect services to vCenter Server or NSX-T Data Center, we recommend using individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
## Reset your vCenter Server credentials ### [Portal](#tab/azure-portal)
-1. In your Azure VMware Solution private cloud, select **VMWare credentials**.
-1. Select **Generate new password**.
+1. In your Azure VMware Solution private cloud, select **VMware credentials**.
+1. Select **Generate new password** under vCenter Server credentials.
1. Select the confirmation checkbox and then select **Generate password**.
To begin using Azure CLI:
```
-## Update HCX Connector
+### Update HCX Connector
1. Go to the on-premises HCX Connector at https://{ip of the HCX connector appliance}:443 and sign in using the new credentials.
To begin using Azure CLI:
4. Provide the new vCenter Server user credentials and select **Edit**, which saves the credentials. Save should show successful.
+## Reset your NSX-T Manager credentials
+
+1. In your Azure VMware Solution private cloud, select **VMware credentials**.
+1. Select **Generate new password** under NSX-T Manager credentials.
+1. Select the confirmation checkbox and then select **Generate password**.
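If you prefer scripting, the Azure CLI `vmware` extension exposes equivalent rotation commands. Here's a sketch, assuming the extension is installed and that the command and parameter names are current for your extension version; the resource group and private cloud names are placeholders:

```bash
# Rotate the vCenter Server cloudadmin password (assumed command, az vmware extension)
az vmware private-cloud rotate-vcenter-password \
  --resource-group myResourceGroup --private-cloud myPrivateCloud

# Rotate the NSX-T Manager cloudadmin password (assumed command, az vmware extension)
az vmware private-cloud rotate-nsxt-password \
  --resource-group myResourceGroup --private-cloud myPrivateCloud
```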
+ ## Next steps
-Now that you've covered resetting your vCenter Server credentials for Azure VMware Solution, you may want to learn about:
+Now that you've covered resetting your vCenter Server and NSX-T Manager credentials for Azure VMware Solution, you may want to learn about:
- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md) - [Deploying disaster recovery for Azure VMware Solution workloads using VMware HCX](deploy-disaster-recovery-using-vmware-hcx.md) -
azure-web-pubsub Concept Azure Ad Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-azure-ad-authorization.md
Title: Authorize access with Azure Active Directory for Azure Web PubSub
-description: This article provides information on authorizing access to Azure Web PubSub Service resources using Azure Active Directory.
+ Title: Authorize access with Microsoft Entra ID for Azure Web PubSub
+description: This article provides information on authorizing access to Azure Web PubSub Service resources using Microsoft Entra ID.
-# Authorize access to Web PubSub resources using Azure Active Directory
+# Authorize access to Web PubSub resources using Microsoft Entra ID
-The Azure Web PubSub Service allows for the authorization of requests to Web PubSub resources by using Azure Active Directory (Azure AD).
+The Azure Web PubSub Service enables the authorization of requests to Azure Web PubSub resources by utilizing Microsoft Entra ID.
-By utilizing role-based access control (RBAC) within Azure AD, permissions can be granted to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. Azure AD authenticates this security principal and returns an OAuth 2.0 token, which Web PubSub resources can then use to authorize a request.
+By utilizing role-based access control (RBAC) with Microsoft Entra ID, permissions can be granted to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. Microsoft Entra ID authenticates this security principal and returns an OAuth 2.0 token, which Web PubSub resources can then use to authorize a request.
-Using Azure AD for authorization of Web PubSub requests offers improved security and ease of use compared to Access Key authorization. Microsoft recommends utilizing Azure AD authorization with Web PubSub resources when possible to ensure access with the minimum necessary privileges.
+Using Microsoft Entra ID for authorization of Web PubSub requests offers improved security and ease of use compared to Access Key authorization. Microsoft recommends utilizing Microsoft Entra ID authorization with Web PubSub resources when possible to ensure access with the minimum necessary privileges.
<a id="security-principal"></a>
-*[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities.*
+_[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities._
-## Overview of Azure AD for Web PubSub
+## Overview of Microsoft Entra ID for Web PubSub
-Authentication is necessary to access a Web PubSub resource when using Azure AD. This authentication involves two steps:
+Authentication is necessary to access a Web PubSub resource when using Microsoft Entra ID. This authentication involves two steps:
1. First, Azure authenticates the security principal and issues an OAuth 2.0 token. 2. Second, the token is added to the request to the Web PubSub resource. The Web PubSub service uses the token to check if the service principal has the access to the resource.
-### Client-side authentication while using Azure AD
+### Client-side authentication while using Microsoft Entra ID
The negotiation server/Function App shares an access key with the Web PubSub resource, enabling the Web PubSub service to authenticate client connection requests using client tokens generated by the access key.
-However, access key is often disabled when using Azure AD to improve security.
+However, to improve security, the access key is often disabled when using Microsoft Entra ID.
To address this issue, we have developed a REST API that generates a client token. This token can be used to connect to the Azure Web PubSub service.
-To use this API, the negotiation server must first obtain an **Azure AD Token** from Azure to authenticate itself. The server can then call the Web PubSub Auth API with the **Azure AD Token** to retrieve a **Client Token**. The **Client Token** is then returned to the client, who can use it to connect to the Azure Web PubSub service.
+To use this API, the negotiation server must first obtain a **Microsoft Entra token** from Azure to authenticate itself. The server can then call the Web PubSub Auth API with the **Microsoft Entra token** to retrieve a **Client Token**. The **Client Token** is then returned to the client, who can use it to connect to the Azure Web PubSub service.
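For example, with the JavaScript server SDK, the negotiation server can do this in a few lines. This is a sketch assuming the `@azure/web-pubsub` and `@azure/identity` packages; the endpoint and hub name are placeholders:

```javascript
// Acquire a client access URL using a Microsoft Entra identity (sketch).
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
const { DefaultAzureCredential } = require("@azure/identity");

const serviceClient = new WebPubSubServiceClient(
  "https://example.webpubsub.azure.com", // placeholder endpoint
  new DefaultAzureCredential(), // obtains the Microsoft Entra token
  "myhub"
);

async function negotiate() {
  // Calls the Web PubSub auth API and returns a client URL containing a token.
  const token = await serviceClient.getClientAccessToken();
  return { url: token.url };
}
```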
We provide helper functions (for example, `GenerateClientAccessUri`) for supported programming languages. ## Assign Azure roles for access rights
-Azure Active Directory (Azure AD) authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure Web PubSub defines a set of Azure built-in roles that encompass common sets of permissions used to access Web PubSub resources. You can also define custom roles for access to Web PubSub resources.
+Microsoft Entra ID authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure Web PubSub defines a set of Azure built-in roles that encompass common sets of permissions used to access Web PubSub resources. You can also define custom roles for access to Web PubSub resources.
### Resource scope
Before assigning an Azure RBAC role to a security principal, it's important to i
You can scope access to Azure Web PubSub resources at the following levels, beginning with the narrowest scope: -- **An individual resource.**
+- **An individual resource.**
At this scope, a role assignment applies to only the target resource. -- **A resource group.**
+- **A resource group.**
At this scope, a role assignment applies to all of the resources in the resource group.
You can scope access to Azure SignalR resources at the following levels, beginni
At this scope, a role assignment applies to all of the resources in all of the resource groups in the subscription. -- **A management group.**
+- **A management group.**
At this scope, a role assignment applies to all of the resources in all of the resource groups in all of the subscriptions in the management group.
-## Azure built-in roles for Web PubSub resources.
+## Azure built-in roles for Web PubSub resources
- `Web PubSub Service Owner`
- Full access to data-plane permissions, including read/write REST APIs and Auth APIs.
+ Full access to data-plane permissions, including read/write REST APIs and Auth APIs.
- This role is the most common used for building an upstream server.
+  This role is the most commonly used for building an upstream server.
- `Web PubSub Service Reader`
- Use to grant read-only REST APIs permissions to Web PubSub resources.
+ Use to grant read-only REST APIs permissions to Web PubSub resources.
- It's used when you'd like to write a monitoring tool that calling **ONLY** Web PubSub data-plane **READONLY** REST APIs.
+  It's used when you'd like to write a monitoring tool that calls **ONLY** Web PubSub data-plane **READONLY** REST APIs.
## Next steps
-To learn how to create an Azure application and use Azure AD auth, see
-- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)
+To learn how to create an Azure application and use Microsoft Entra authorization, see
-To learn how to configure a managed identity and use Azure AD auth, see
-- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)
+- [Authorize request to Web PubSub resources with Microsoft Entra ID from applications](howto-authorize-from-application.md)
+
+To learn how to configure a managed identity and use Microsoft Entra ID auth, see
+
+- [Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities](howto-authorize-from-managed-identity.md)
+
+To learn more about roles and role assignments, see
-To learn more about roles and role assignments, see
- [What is Azure role-based access control](../role-based-access-control/overview.md)
-To learn how to create custom roles, see
+To learn how to create custom roles, see
+ - [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role)
-To learn how to use only Azure AD authentication, see
-- [Disable local authentication](./howto-disable-local-auth.md)
+To learn how to use only Microsoft Entra authorization, see
+
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Concept Service Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-service-internals.md
Title: Azure Web PubSub service internals
-description: Learn about Azure Web PubSub Service internals, the architecture, the connections and how data is transmitted.
+description: Learn about Azure Web PubSub Service internals, the architecture, the connections and how data is transmitted.
Last updated 09/30/2022
-# Azure Web PubSub service internals
+# Azure Web PubSub service internals
Azure Web PubSub Service provides an easy way to publish/subscribe messages using simple [WebSocket](https://tools.ietf.org/html/rfc6455) connections.
Azure Web PubSub Service provides an easy way to publish/subscribe messages usin
- The service manages the WebSocket connections for you.

## Terms
-* **Service**: Azure Web PubSub Service.
+
+- **Service**: Azure Web PubSub Service.
[!INCLUDE [Terms](includes/terms.md)]
Azure Web PubSub Service provides an easy way to publish/subscribe messages usin
![Diagram showing the Web PubSub service workflow.](./media/concept-service-internals/workflow.png)

Workflow as shown in the above graph:
-1. A *client* connects to the service `/client` endpoint using WebSocket transport. Service forward every WebSocket frame to the configured upstream(server). The WebSocket connection can connect with any custom subprotocol for the server to handle, or it can connect with the service-supported subprotocol `json.webpubsub.azure.v1`, which empowers the clients to do pub/sub directly. Details are described in [client protocol](#client-protocol).
+
+1. A _client_ connects to the service `/client` endpoint using WebSocket transport. The service forwards every WebSocket frame to the configured upstream (server). The WebSocket connection can connect with any custom subprotocol for the server to handle, or it can connect with the service-supported subprotocol `json.webpubsub.azure.v1`, which empowers the clients to do pub/sub directly. Details are described in [client protocol](#client-protocol).
2. On different client events, the service invokes the server using **CloudEvents protocol**. [**CloudEvents**](https://github.com/cloudevents/spec/tree/v1.0.1) is a standardized and protocol-agnostic definition of the structure and metadata description of events hosted by the Cloud Native Computing Foundation (CNCF). Detailed implementation of the CloudEvents protocol relies on the server role, described in [server protocol](#server-protocol).
3. The Web PubSub server can invoke the service using the REST API to send messages to clients or to manage the connected clients. Details are described in [server protocol](#server-protocol).
Workflow as shown in the above graph:
A client connection connects to the `/client` endpoint of the service using the [WebSocket protocol](https://tools.ietf.org/html/rfc6455). The WebSocket protocol provides full-duplex communication channels over a single TCP connection and was standardized by the IETF as RFC 6455 in 2011. Most languages have native support to start WebSocket connections.

Our service supports two kinds of clients:

- One is called [the simple WebSocket client](#the-simple-websocket-client)
- The other is called [the PubSub WebSocket client](#the-pubsub-websocket-client)

### The simple WebSocket client

A simple WebSocket client, as the naming indicates, is a simple WebSocket connection. It can also have its own custom subprotocol. For example, in JS, a simple WebSocket client can be created using the following code.

```js
// simple WebSocket client1
-var client1 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1');
+var client1 = new WebSocket("wss://test.webpubsub.azure.com/client/hubs/hub1");
// simple WebSocket client2 with some custom subprotocol
-var client2 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'custom.subprotocol')
-
+var client2 = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "custom.subprotocol"
+);
```

A simple WebSocket client follows a client<->server architecture, as the below sequence diagram shows:

![Diagram showing the sequence for a client connection.](./media/concept-service-internals/simple-client-sequence.png)

1. When the client starts a WebSocket handshake, the service tries to invoke the `connect` event handler for the WebSocket handshake. Developers can use this handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups.
2. When the client is successfully connected, the service invokes a `connected` event handler. It works as a notification and doesn't block the client from sending messages. Developers can use this handler to do data storage and can respond with messages to the client. The service also pushes a `connected` event to all concerning event listeners, if any.
3. When the client sends messages, the service triggers a `message` event to the event handler to handle the messages sent. This event is a general event containing the messages sent in a WebSocket frame. Your code needs to dispatch the messages inside this event handler. If the event handler returns a non-successful response code, the service drops the client connection. The service also pushes a `message` event to all concerning event listeners, if any. If the service can't find any registered servers to receive the messages, the service also drops the connection.
4. When the client disconnects, the service tries to trigger the `disconnected` event to the event handler once it detects the disconnect. The service also pushes a `disconnected` event to all concerning event listeners, if any.

#### Scenarios

These connections can be used in a typical client-server architecture where the client sends messages to the server and the server handles incoming messages using [Event Handlers](#event-handler). It can also be used when customers apply existing [subprotocols](https://www.iana.org/assignments/websocket/websocket.xml) in their application logic.

### The PubSub WebSocket client

The service also supports a specific subprotocol called `json.webpubsub.azure.v1`, which empowers the clients to do publish/subscribe directly instead of a round trip to the upstream server. We call the WebSocket connection with `json.webpubsub.azure.v1` subprotocol a PubSub WebSocket client. For more information, see the [Web PubSub client specification](https://github.com/Azure/azure-webpubsub/blob/main/protocols/client/client-spec.md) on GitHub. For example, in JS, a PubSub WebSocket client can be created using the following code.

```js
// PubSub WebSocket client
-var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.webpubsub.azure.v1');
+var pubsub = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "json.webpubsub.azure.v1"
+);
```

A PubSub WebSocket client can:
-* Join a group, for example:
- ```json
- {
- "type": "joinGroup",
- "group": "<group_name>"
- }
- ```
-* Leave a group, for example:
- ```json
- {
- "type": "leaveGroup",
- "group": "<group_name>"
- }
- ```
-* Publish messages to a group, for example:
- ```json
- {
- "type": "sendToGroup",
- "group": "<group_name>",
- "data": { "hello": "world" }
- }
- ```
-* Send custom events to the upstream server, for example:
-
- ```json
- {
- "type": "event",
- "event": "<event_name>",
- "data": { "hello": "world" }
- }
- ```
+
+- Join a group, for example:
+
+ ```json
+ {
+ "type": "joinGroup",
+ "group": "<group_name>"
+ }
+ ```
+
+- Leave a group, for example:
+
+ ```json
+ {
+ "type": "leaveGroup",
+ "group": "<group_name>"
+ }
+ ```
+
+- Publish messages to a group, for example:
+
+ ```json
+ {
+ "type": "sendToGroup",
+ "group": "<group_name>",
+ "data": { "hello": "world" }
+ }
+ ```
+
+- Send custom events to the upstream server, for example:
+
+ ```json
+ {
+ "type": "event",
+ "event": "<event_name>",
+ "data": { "hello": "world" }
+ }
+ ```
[PubSub WebSocket Subprotocol](./reference-json-webpubsub-subprotocol.md) contains the details of the `json.webpubsub.azure.v1` subprotocol.
-You may have noticed that for a [simple WebSocket client](#the-simple-websocket-client), the *server* is a **must have** role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the *event* the message belongs.
+You may have noticed that for a [simple WebSocket client](#the-simple-websocket-client), the _server_ is a **must have** role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the _event_ the message belongs to.
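
For instance, reusing the `pubsub` client created above, a client could emit a custom event; the event name `orderPlaced` here is hypothetical and would map to whatever your upstream handler expects:

```js
// Sketch: send a custom event frame over the subprotocol connection
// (after the connection is open).
pubsub.send(
  JSON.stringify({
    type: "event",
    event: "orderPlaced", // hypothetical event name
    data: { itemId: 42 },
  })
);
```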
+
+#### Scenarios
-#### Scenarios:
Such clients can be used when clients want to talk to each other. Messages are sent from `client2` to the service and the service delivers the message directly to `client1` if the clients are authorized to do so.

Client1:

```js
-var client1 = new WebSocket("wss://xxx.webpubsub.azure.com/client/hubs/hub1", "json.webpubsub.azure.v1");
-client1.onmessage = e => {
- if (e.data) {
- var message = JSON.parse(e.data);
- if (message.type === "message"
- && message.group === "Group1"){
- // Only print messages from Group1
- console.log(message.data);
- }
+var client1 = new WebSocket(
+ "wss://xxx.webpubsub.azure.com/client/hubs/hub1",
+ "json.webpubsub.azure.v1"
+);
+client1.onmessage = (e) => {
+ if (e.data) {
+ var message = JSON.parse(e.data);
+ if (message.type === "message" && message.group === "Group1") {
+ // Only print messages from Group1
+ console.log(message.data);
}
+ }
};
-client1.onopen = e => {
- client1.send(JSON.stringify({
- type: "joinGroup",
- group: "Group1"
- }));
+client1.onopen = (e) => {
+ client1.send(
+ JSON.stringify({
+ type: "joinGroup",
+ group: "Group1",
+ })
+ );
};
```
As the above example shows, `client2` sends data directly to `client1` by publis
### Client events summary

Client events fall into two categories:
-* Synchronous events (blocking)
- Synchronous events block the client workflow.
- * `connect`: This event is for event handler only. When the client starts a WebSocket handshake, the event is triggered and developers can use `connect` event handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups.
- * `message`: This event is triggered when a client sends a message.
-* Asynchronous events (non-blocking)
- Asynchronous events don't block the client workflow, it acts as some notification to server. When such an event trigger fails, the service logs the error detail.
- * `connected`: This event is triggered when a client connects to the service successfully.
- * `disconnected`: This event is triggered when a client disconnected with the service.
+
+- Synchronous events (blocking)
+ Synchronous events block the client workflow.
+ - `connect`: This event is for event handler only. When the client starts a WebSocket handshake, the event is triggered and developers can use `connect` event handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups.
+ - `message`: This event is triggered when a client sends a message.
+- Asynchronous events (non-blocking)
+  Asynchronous events don't block the client workflow; they act as notifications to the server. When such an event trigger fails, the service logs the error detail.
+ - `connected`: This event is triggered when a client connects to the service successfully.
+  - `disconnected`: This event is triggered when a client disconnects from the service.
### Client message limit

The maximum allowed message size for one WebSocket frame is **1 MB**.

### Client authentication
The following graph describes the workflow.
![Diagram showing the client authentication workflow.](./media/concept-service-internals/client-connect-workflow.png)
-As you may have noticed when we describe the PubSub WebSocket clients, that a client can publish to other clients only when it's *authorized* to. The `role`s of the client determines the *initial* permissions the client have:
+As you may have noticed when we described the PubSub WebSocket clients, a client can publish to other clients only when it's _authorized_ to. The `role`s of the client determine the _initial_ permissions the client has:
-| Role | Permission |
-|||
-| Not specified | The client can send events.
-| `webpubsub.joinLeaveGroup` | The client can join/leave any group.
-| `webpubsub.sendToGroup` | The client can publish messages to any group.
-| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`.
-| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`.
+| Role | Permission |
+| - | |
+| Not specified | The client can send events. |
+| `webpubsub.joinLeaveGroup` | The client can join/leave any group. |
+| `webpubsub.sendToGroup` | The client can publish messages to any group. |
+| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`. |
+| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`. |
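
As a sketch, a negotiation server could stamp these roles into the client token when issuing the connection URL; the `serviceClient`, user ID, and group name below are assumptions carried over from the earlier negotiation example:

```js
// Sketch: issue a client URL whose token carries initial roles for "Group1".
async function negotiateWithRoles() {
  const token = await serviceClient.getClientAccessToken({
    userId: "user1",
    roles: ["webpubsub.joinLeaveGroup.Group1", "webpubsub.sendToGroup.Group1"],
  });
  return token.url; // the connected client can join and publish to Group1
}
```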
The server-side can also grant or revoke permissions of the client dynamically through [server protocol](#connection-manager), as will be illustrated in a later section.
The server-side can also grant or revoke permissions of the client dynamically t
Server protocol provides the functionality for the server to handle client events and manage the client connections and the groups. In general, server protocol contains three roles:

1. [Event handler](#event-handler)
1. [Connection manager](#connection-manager)
1. [Event listener](#event-listener)

### Event handler

The event handler handles the incoming client events. Event handlers are registered and configured in the service through the portal or Azure CLI. When a client event is triggered, the service can identify whether the event is to be handled or not. Now we use `PUSH` mode to invoke the event handler. The event handler on the server side exposes a publicly accessible endpoint for the service to invoke when the event is triggered. It acts as a **webhook**. Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md).
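
As an illustrative sketch, not the only hosting option, an Express-based upstream can expose such a webhook with the `@azure/web-pubsub-express` middleware; the hub name, path, and group are placeholders:

```js
// Webhook sketch: handle CloudEvents requests that the service pushes.
const express = require("express");
const { WebPubSubEventHandler } = require("@azure/web-pubsub-express");

const handler = new WebPubSubEventHandler("hub1", {
  path: "/eventhandler", // publicly accessible endpoint configured in the service
  handleConnect: (req, res) => {
    // Authenticate the client here, then accept the handshake.
    res.success({ groups: ["Group1"] });
  },
  handleUserEvent: (req, res) => {
    console.log(`event ${req.context.eventName}:`, req.data);
    res.success("ack", "text"); // response returned to the client
  },
});

const app = express();
app.use(handler.getMiddleware()); // also answers the validation requests
app.listen(8080);
```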
When doing the validation, the `{event}` parameter is resolved to `validate`. Fo
For now, we don't support [WebHook-Request-Rate](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#414-webhook-request-rate) and [WebHook-Request-Callback](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#413-webhook-request-callback).
-#### Authentication between service and webhook
+#### Authentication/Authorization between service and webhook
- Anonymous mode
- Simple authentication where `code` is provided through the configured Webhook URL.
-- Use Azure Active Directory (Azure AD) authentication. For more information, see [how to use managed identity](howto-use-managed-identity.md) for details.
- - Step1: Enable Identity for the Web PubSub service
- - Step2: Select from existing Azure AD application that stands for your webhook web app
+- Use Microsoft Entra authorization. For more information, see [how to use managed identity](howto-use-managed-identity.md).
+  - Step 1: Enable Identity for the Web PubSub service
+  - Step 2: Select an existing Microsoft Entra application that represents your webhook web app
### Connection manager
-The server is by nature an authorized user. With the help of the *event handler role*, the server knows the metadata of the clients, for example, `connectionId` and `userId`, so it can:
- - Close a client connection
- - Send messages to a client
- - Send messages to clients that belong to the same user
- - Add a client to a group
- - Add clients authenticated as the same user to a group
- - Remove a client from a group
- - Remove clients authenticated as the same user from a group
- - Publish messages to a group
+The server is by nature an authorized user. With the help of the _event handler role_, the server knows the metadata of the clients, for example, `connectionId` and `userId`, so it can:
+
+- Close a client connection
+- Send messages to a client
+- Send messages to clients that belong to the same user
+- Add a client to a group
+- Add clients authenticated as the same user to a group
+- Remove a client from a group
+- Remove clients authenticated as the same user from a group
+- Publish messages to a group
It can also grant or revoke publish/join permissions for a PubSub client:
- - Grant publish/join permissions to some specific group or to all groups
- - Revoke publish/join permissions for some specific group or for all groups
- - Check if the client has permission to join or publish to some specific group or to all groups
+
+- Grant publish/join permissions to some specific group or to all groups
+- Revoke publish/join permissions for some specific group or for all groups
+- Check if the client has permission to join or publish to some specific group or to all groups
The service provides REST APIs for the server to perform connection management.
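
With the JavaScript server SDK, for example, these REST APIs surface as methods on the service client; the endpoint, hub, user, and group names below are placeholders:

```js
// Connection-manager sketch: data-plane REST calls through the server SDK.
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
const { DefaultAzureCredential } = require("@azure/identity");

const service = new WebPubSubServiceClient(
  "https://<your-resource>.webpubsub.azure.com", // placeholder endpoint
  new DefaultAzureCredential(),
  "hub1" // placeholder hub
);

async function manage(connectionId) {
  await service.group("Group1").addConnection(connectionId); // add to a group
  await service.sendToUser("user1", { hello: "world" }); // message one user
  await service.grantPermission(connectionId, "sendToGroup", {
    targetName: "Group1", // scope the permission to Group1
  });
}
```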
You can combine an [event handler](#event-handler) and event listeners for the s
Web PubSub service delivers client events to event listeners using [CloudEvents AMQP extension for Azure Web PubSub](reference-cloud-events-amqp.md).

### Summary
-You may have noticed that the *event handler role* handles communication from the service to the server while *the manager role* handles communication from the server to the service. After combining the two roles, the data flow between service and server looks similar to the following diagram using HTTP protocol.
+
+You may have noticed that the _event handler role_ handles communication from the service to the server while _the manager role_ handles communication from the server to the service. After combining the two roles, the data flow between service and server looks similar to the following diagram using HTTP protocol.
![Diagram showing the Web PubSub service bi-directional workflow.](./media/concept-service-internals/http-service-server.png)
azure-web-pubsub Howto Authorize From Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-application.md
Title: Authorize request to Web PubSub resources with Azure AD from Azure applications
-description: This article provides information about authorizing request to Web PubSub resources with Azure AD from Azure applications
+ Title: Authorize request to Web PubSub resources with Microsoft Entra ID from applications
+description: This article provides information about authorizing request to Web PubSub resources with Microsoft Entra ID from applications
-# Authorize request to Web PubSub resources with Azure AD from Azure applications
+# Authorize request to Web PubSub resources with Microsoft Entra ID from Azure applications
-Azure Web PubSub Service supports Azure Active Directory (Azure AD) authorizing requests from [Azure applications](../active-directory/develop/app-objects-and-service-principals.md).
+Azure Web PubSub Service supports Microsoft Entra ID for authorizing requests from [applications](../active-directory/develop/app-objects-and-service-principals.md).
This article shows how to configure your Web PubSub resource and codes to authorize the request to a Web PubSub resource from an Azure application.
This article shows how to configure your Web PubSub resource and codes to author
The first step is to register an Azure application.
-1. On the [Azure portal](https://portal.azure.com/), search for and select **Azure Active Directory**
+1. On the [Azure portal](https://portal.azure.com/), search for and select **Microsoft Entra ID**
2. Under **Manage** section, select **App registrations**. 3. Click **New registration**.
- ![Screenshot of registering an application.](./media/howto-authorize-from-application/register-an-application.png)
+ ![Screenshot of registering an application.](./media/howto-authorize-from-application/register-an-application.png)
4. Enter a display **Name** for your application. 5. Click **Register** to confirm the register.
Once you have your application registered, you can find the **Application (clien
![Screenshot of an application.](./media/howto-authorize-from-application/application-overview.png)

To learn more about registering an application, see

- [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).

## Add credentials
The application requires a client secret to prove its identity when requesting a
1. Under **Manage** section, select **Certificates & secrets** 1. On the **Client secrets** tab, click **New client secret**.
-![Screenshot of creating a client secret.](./media/howto-authorize-from-application/new-client-secret.png)
+ ![Screenshot of creating a client secret.](./media/howto-authorize-from-application/new-client-secret.png)
1. Enter a **description** for the client secret, and choose an **expiration time**.
-1. Copy the value of the **client secret** and then paste it to a secure location.
- > [!NOTE]
- > The secret will display only once.
+1. Copy the value of the **client secret** and then paste it to a secure location.
+ > [!NOTE]
+ > The secret will display only once.
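
As a sketch of where this secret ends up, a confidential client can pass it to `ClientSecretCredential`; the placeholder IDs below come from the application's **Overview** page:

```js
// Sketch: authenticate as the registered application with the client secret.
const { ClientSecretCredential } = require("@azure/identity");

const credential = new ClientSecretCredential(
  "<tenant-id>", // Directory (tenant) ID
  "<application-client-id>", // Application (client) ID
  "<client-secret>" // the value copied above
);
```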
+ ### Certificate You can also upload a certificate instead of creating a client secret.
To learn more about adding credentials, see
## Add role assignments on Azure portal
-This sample shows how to assign a `Web PubSub Service Owner` role to a service principal (application) over a Web PubSub resource.
+This sample shows how to assign a `Web PubSub Service Owner` role to a service principal (application) over a Web PubSub resource.
-> [!Note]
+> [!NOTE]
> A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)+ 1. On the [Azure portal](https://portal.azure.com/), navigate to your Web PubSub resource. 1. Click **Access Control (IAM)** to display access control settings for the Azure Web PubSub.
This sample shows how to assign a `Web PubSub Service Owner` role to a service p
1. Click **Select Members**
-3. Search for and select the application that you would like to assign the role to.
+1. Search for and select the application that you would like to assign the role to.
1. Click **Select** to confirm the selection.
-4. Click **Next**.
+1. Click **Next**.
![Screenshot of assigning role to service principals.](./media/howto-authorize-from-application/assign-role-to-service-principals.png)
-5. Click **Review + assign** to confirm the change.
+1. Click **Review + assign** to confirm the change.
> [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.
-To learn more about how to assign and manage Azure role assignments, see these articles:
+> To learn more about how to assign and manage Azure role assignments, see these articles:
+ - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md) - [Assign Azure roles using Azure CLI](../role-based-access-control/role-assignments-cli.md) - [Assign Azure roles using Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)
-## Use Postman to get the Azure AD token
+## Use Postman to get the Microsoft Entra token
+ 
1. Launch Postman
2. For the method, select **POST**.
To learn more about how to assign and manage Azure role assignments, see these a
4. On the **Headers** tab, add **Content-Type** key and `application/x-www-form-urlencoded` for the value.
-![Screenshot of the basic info using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman.png)
+ ![Screenshot of the basic info using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman.png)
5. Switch to the **Body** tab, and add the following keys and values.
- 1. Select **x-www-form-urlencoded**.
- 2. Add `grant_type` key, and type `client_credentials` for the value.
- 3. Add `client_id` key, and paste the value of **Application (client) ID** in the **Overview** tab of the application you created earlier.
- 4. Add `client_secret` key, and paste the value of client secret you noted down earlier.
- 5. Add `resource` key, and type `https://webpubsub.azure.com` for the value.
+ 1. Select **x-www-form-urlencoded**.
+ 2. Add `grant_type` key, and type `client_credentials` for the value.
+ 3. Add `client_id` key, and paste the value of **Application (client) ID** in the **Overview** tab of the application you created earlier.
+ 4. Add `client_secret` key, and paste the value of client secret you noted down earlier.
+ 5. Add `resource` key, and type `https://webpubsub.azure.com` for the value.
-![Screenshot of the body parameters when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-body.png)
+ ![Screenshot of the body parameters when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-body.png)
-6. Select **Send** to send the request to get the token. You see the token in the `access_token` field.
+6. Select **Send** to send the request to get the token. You see the token in the `access_token` field.
-![Screenshot of the response token when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-response.png)
+ ![Screenshot of the response token when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-response.png)
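
If you prefer code over Postman, a rough JavaScript equivalent of the same request is sketched below; it assumes the Azure AD v1 token endpoint (which accepts the `resource` parameter used above) and Node 18+ for the built-in `fetch`:

```js
// Sketch: client-credentials token request, mirroring the Postman setup.
async function getEntraToken() {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: "<application-client-id>",
    client_secret: "<client-secret>",
    resource: "https://webpubsub.azure.com",
  });
  const res = await fetch(
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body,
    }
  );
  const { access_token } = await res.json();
  return access_token;
}
```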
-## Sample codes using Azure AD auth
+## Sample codes using Microsoft Entra authorization
We officially support 4 programming languages:
We officially support 4 programming languages:
See the following related articles: -- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)-- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)-- [Disable local authentication](./howto-disable-local-auth.md)
+- [Overview of Microsoft Entra ID for Web PubSub](concept-azure-ad-authorization.md)
+- [Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities](howto-authorize-from-managed-identity.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Howto Authorize From Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-managed-identity.md
Title: Authorize request to Web PubSub resources with Azure AD from managed identities
-description: This article provides information about authorizing request to Web PubSub resources with Azure AD from managed identities
+ Title: Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities
+description: This article provides information about authorizing request to Web PubSub resources with Microsoft Entra ID from managed identities
-# Authorize request to Web PubSub resources with Azure AD from managed identities
-Azure Web PubSub Service supports Azure Active Directory (Azure AD) authorizing requests from [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+# Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities
+
+Azure Web PubSub Service supports Microsoft Entra ID for authorizing requests from [managed identities](../active-directory/managed-identities-azure-resources/overview.md).
This article shows how to configure your Web PubSub resource and codes to authorize the request to a Web PubSub resource from a managed identity.
This is an example for configuring `System-assigned managed identity` on a `Virt
1. Click the **Save** button to confirm the change. ### How to create user-assigned managed identities+ - [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) ### How to configure managed identities on other platforms
This is an example for configuring `System-assigned managed identity` on a `Virt
- [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md).
-## Add role assignments on Azure portal
+## Add role assignments on Azure portal
-This sample shows how to assign a `Web PubSub Service Owner` role to a system-assigned identity over a Web PubSub resource.
+This sample shows how to assign a `Web PubSub Service Owner` role to a system-assigned identity over a Web PubSub resource.
> [!Note] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)+ 1. Open [Azure portal](https://portal.azure.com/), navigate to your Web PubSub resource. 1. Click **Access Control (IAM)** to display access control settings for the Azure Web PubSub.
This sample shows how to assign a `Web PubSub Service Owner` role to a system-as
1. Click **Select** to confirm the selection.
-2. Click **Next**.
+1. Click **Next**.
![Screenshot of assigning role to managed identities.](./media/howto-authorize-from-managed-identity/assign-role-to-managed-identities.png)
-3. Click **Review + assign** to confirm the change.
+1. Click **Review + assign** to confirm the change.
> [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.
-To learn more about how to assign and manage Azure role assignments, see these articles:
+> To learn more about how to assign and manage Azure role assignments, see these articles:
+ - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
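
Once the role assignment has propagated, code running on the Azure resource can call the data plane without any stored secret. A minimal sketch with the JavaScript SDK, assuming a placeholder endpoint and hub, with `DefaultAzureCredential` resolving to the managed identity:

```js
// Sketch: call the Web PubSub data plane from a managed-identity resource.
const { DefaultAzureCredential } = require("@azure/identity");
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

const client = new WebPubSubServiceClient(
  "https://<your-resource>.webpubsub.azure.com", // placeholder endpoint
  new DefaultAzureCredential(), // resolves to the managed identity on Azure
  "hub1" // placeholder hub
);

async function main() {
  await client.sendToAll({ hello: "from a managed identity" });
}
```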
We officially support 4 programming languages:
See the following related articles: -- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)-- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)-- [Disable local authentication](./howto-disable-local-auth.md)
+- [Overview of Microsoft Entra ID for Web PubSub](concept-azure-ad-authorization.md)
+- [Authorize request to Web PubSub resources with Microsoft Entra ID from Azure applications](howto-authorize-from-application.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Howto Create Serviceclient With Java And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-java-and-azure-identity.md
# How to create a `WebPubSubServiceClient` with Java and Azure Identity
-This how-to guide shows you how to create a `WebPubSubServiceClient` with Java and Azure Identity.
+This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in Java.
## Requirements
This how-to guide shows you how to create a `WebPubSubServiceClient` with Java a
1. Create a `TokenCredential` with Azure Identity SDK.
- ```java
- package com.webpubsub.tutorial;
+ ```java
+ package com.webpubsub.tutorial;
- import com.azure.core.credential.TokenCredential;
- import com.azure.identity.DefaultAzureCredentialBuilder;
+ import com.azure.core.credential.TokenCredential;
+ import com.azure.identity.DefaultAzureCredentialBuilder;
- public class App {
+ public class App {
- public static void main(String[] args) {
- TokenCredential credential = new DefaultAzureCredentialBuilder().build();
- }
- }
- ```
+ public static void main(String[] args) {
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ }
+ }
+ ```
- `credential` can be any class that inherits from `TokenCredential` class.
+ `credential` can be any class that inherits from `TokenCredential` class.
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
- To learn more, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme)
+ To learn more, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme)
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
- ```Java
- package com.webpubsub.tutorial;
+ ```Java
+ package com.webpubsub.tutorial;
- import com.azure.core.credential.TokenCredential;
- import com.azure.identity.DefaultAzureCredentialBuilder;
- import com.azure.messaging.webpubsub.WebPubSubServiceClient;
- import com.azure.messaging.webpubsub.WebPubSubServiceClientBuilder;
+ import com.azure.core.credential.TokenCredential;
+ import com.azure.identity.DefaultAzureCredentialBuilder;
+ import com.azure.messaging.webpubsub.WebPubSubServiceClient;
+ import com.azure.messaging.webpubsub.WebPubSubServiceClientBuilder;
- public class App {
- public static void main(String[] args) {
+ public class App {
+ public static void main(String[] args) {
- TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
- // create the service client
- WebPubSubServiceClient client = new WebPubSubServiceClientBuilder()
- .endpoint("<endpoint>")
- .credential(credential)
- .hub("<hub>")
- .buildClient();
- }
- }
- ```
+ // create the service client
+ WebPubSubServiceClient client = new WebPubSubServiceClientBuilder()
+ .endpoint("<endpoint>")
+ .credential(credential)
+ .hub("<hub>")
+ .buildClient();
+ }
+ }
+ ```
- Learn how to use this client, see [Azure Web PubSub service client library for Java](/java/api/overview/azure/messaging-webpubsub-readme)
+   To learn how to use this client, see [Azure Web PubSub service client library for Java](/java/api/overview/azure/messaging-webpubsub-readme).
## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/java/chatapp-aad)
+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/java/chatapp-aad)
azure-web-pubsub Howto Create Serviceclient With Javascript And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-javascript-and-azure-identity.md
# How to create a `WebPubSubServiceClient` with JavaScript and Azure Identity
-This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure Active Directory in JavaScript.
+This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in JavaScript.
## Requirements
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
1. Create a `TokenCredential` with Azure Identity SDK.
- ```javascript
- const { DefaultAzureCredential } = require('@azure/identity')
+ ```javascript
+ const { DefaultAzureCredential } = require("@azure/identity");
- let credential = new DefaultAzureCredential();
- ```
+ let credential = new DefaultAzureCredential();
+ ```
- `credential` can be any class that inherits from `TokenCredential` class.
+ `credential` can be any class that inherits from `TokenCredential` class.
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
- To learn more, see [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme)
+ To learn more, see [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme)
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
- ```javascript
- const { DefaultAzureCredential } = require('@azure/identity')
+ ```javascript
+ const { DefaultAzureCredential } = require("@azure/identity");
- let credential = new DefaultAzureCredential();
+ let credential = new DefaultAzureCredential();
- let serviceClient = new WebPubSubServiceClient("<endpoint>", credential, "<hub>");
- ```
+ let serviceClient = new WebPubSubServiceClient(
+ "<endpoint>",
+ credential,
+ "<hub>"
+ );
+ ```
- Learn how to use this client, see [Azure Web PubSub service client library for JavaScript](/javascript/api/overview/azure/web-pubsub-readme)
+   To learn how to use this client, see [Azure Web PubSub service client library for JavaScript](/javascript/api/overview/azure/web-pubsub-readme).
## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp-aad)
+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp-aad)
azure-web-pubsub Howto Create Serviceclient With Net And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-net-and-azure-identity.md
# How to create a `WebPubSubServiceClient` with .NET and Azure Identity
-This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure Active Directory in .NET.
+This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in .NET.
## Requirements
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
- Install [Azure.Messaging.WebPubSub](https://www.nuget.org/packages/Azure.Messaging.WebPubSub) from nuget.org ```bash
- Install-Package Azure.Messaging.WebPubSub
+ Install-Package Azure.Messaging.WebPubSub
``` ## Sample codes 1. Create a `TokenCredential` with Azure Identity SDK.
- ```C#
- using Azure.Identity;
-
- namespace chatapp
- {
- public class Program
- {
- public static void Main(string[] args)
- {
- var credential = new DefaultAzureCredential();
- }
- }
- }
- ```
-
- `credential` can be any class that inherits from `TokenCredential` class.
-
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
-
- To learn more, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme)
-
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
-
- ```C#
- using Azure.Identity;
- using Azure.Messaging.WebPubSub;
-
- public class Program
- {
- public static void Main(string[] args)
- {
- var credential = new DefaultAzureCredential();
- var client = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
- }
- }
- ```
-
- Or inject it into `IServiceCollections` with our `BuilderExtensions`.
-
- ```C#
- using System;
-
- using Azure.Identity;
-
- using Microsoft.Extensions.Azure;
- using Microsoft.Extensions.Configuration;
- using Microsoft.Extensions.DependencyInjection;
-
- namespace chatapp
- {
- public class Startup
- {
- public Startup(IConfiguration configuration)
- {
- Configuration = configuration;
- }
-
- public IConfiguration Configuration { get; }
-
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddAzureClients(builder =>
- {
- var credential = new DefaultAzureCredential();
- builder.AddWebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
- });
- }
- }
- }
- ```
-
- Learn how to use this client, see [Azure Web PubSub service client library for .NET](/dotnet/api/overview/azure/messaging.webpubsub-readme)
+ ```C#
+ using Azure.Identity;
+
+ namespace chatapp
+ {
+ public class Program
+ {
+ public static void Main(string[] args)
+ {
+ var credential = new DefaultAzureCredential();
+ }
+ }
+ }
+ ```
+
+ `credential` can be any class that inherits from `TokenCredential` class.
+
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
+
+ To learn more, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme)
+
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+
+ ```C#
+ using Azure.Identity;
+ using Azure.Messaging.WebPubSub;
+
+ public class Program
+ {
+ public static void Main(string[] args)
+ {
+ var credential = new DefaultAzureCredential();
+ var client = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
+ }
+ }
+ ```
+
+ Or inject it into `IServiceCollections` with our `BuilderExtensions`.
+
+ ```C#
+ using System;
+
+ using Azure.Identity;
+
+ using Microsoft.Extensions.Azure;
+ using Microsoft.Extensions.Configuration;
+ using Microsoft.Extensions.DependencyInjection;
+
+ namespace chatapp
+ {
+ public class Startup
+ {
+ public Startup(IConfiguration configuration)
+ {
+ Configuration = configuration;
+ }
+
+ public IConfiguration Configuration { get; }
+
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddAzureClients(builder =>
+ {
+ var credential = new DefaultAzureCredential();
+ builder.AddWebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
+ });
+ }
+ }
+ }
+ ```
+
+   To learn how to use this client, see [Azure Web PubSub service client library for .NET](/dotnet/api/overview/azure/messaging.webpubsub-readme).
## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-aad)
+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-aad)
azure-web-pubsub Howto Create Serviceclient With Python And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-python-and-azure-identity.md
# How to create a `WebPubSubServiceClient` with Python and Azure Identity
-This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure Active Directory in Python.
+This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in Python.
## Requirements
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
1. Create a `TokenCredential` with Azure Identity SDK.
- ```python
- from azure.identity import DefaultAzureCredential
+ ```python
+ from azure.identity import DefaultAzureCredential
- credential = DefaultAzureCredential()
- ```
+ credential = DefaultAzureCredential()
+ ```
- `credential` can be any class that inherits from `TokenCredential` class.
+ `credential` can be any class that inherits from `TokenCredential` class.
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
- To learn more, see [Azure Identity client library for Python](/python/api/overview/azure/identity-readme)
+ To learn more, see [Azure Identity client library for Python](/python/api/overview/azure/identity-readme)
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
- ```python
- from azure.identity import DefaultAzureCredential
+ ```python
+ from azure.identity import DefaultAzureCredential
- credential = DefaultAzureCredential()
+ credential = DefaultAzureCredential()
- client = WebPubSubServiceClient(hub="<hub>", endpoint="<endpoint>", credential=credential)
- ```
+ client = WebPubSubServiceClient(hub="<hub>", endpoint="<endpoint>", credential=credential)
+ ```
- Learn how to use this client, see [Azure Web PubSub service client library for Python](/python/api/overview/azure/messaging-webpubsubservice-readme)
+   To learn how to use this client, see [Azure Web PubSub service client library for Python](/python/api/overview/azure/messaging-webpubsubservice-readme).
## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/python/chatapp-aad)
+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/python/chatapp-aad)
azure-web-pubsub Howto Develop Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-create-instance.md
Title: Create an Azure Web PubSub resource
-description: Quickstart showing how to create a Web PubSub resource from Azure portal, using Azure CLI and a Bicep template
+description: Quickstart showing how to create a Web PubSub resource from Azure portal, using Azure CLI and a Bicep template
Last updated 03/13/2023
zone_pivot_groups: azure-web-pubsub-create-resource-methods + # Create a Web PubSub resource ## Prerequisites+ > [!div class="checklist"]
-> * An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free/), if don't have one already.
+>
+> - An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free/), if you don't have one already.
> [!TIP] > Web PubSub includes a generous **free tier** that can be used for testing and production purposes.
-
-
+ ::: zone pivot="method-azure-portal"+ ## Create a resource from Azure portal
-1. Select the New button found on the upper left-hand corner of the Azure portal. In the New screen, type **Web PubSub** in the search box and press enter.
+1. Select the **New** button in the upper left-hand corner of the Azure portal. On the New screen, type **Web PubSub** in the search box and press Enter.
- :::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal.":::
+ :::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal.":::
2. Select **Web PubSub** from the search results, then select **Create**. 3. Enter the following settings.
- | Setting | Suggested value | Description |
- | | - | -- |
- | **Resource name** | Globally unique name | The globally unique Name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `A-Z`, `0-9`, and `-`. |
- | **Subscription** | Your subscription | The Azure subscription under which this new Web PubSub service instance is created. |
- | **[Resource Group]** | myResourceGroup | Name for the new resource group in which to create your Web PubSub service instance. |
- | **Location** | West US | Choose a [region](https://azure.microsoft.com/regions/) near you. |
- | **Pricing tier** | Free | You can first try Azure Web PubSub service for free. Learn more details about [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/) |
- | **Unit count** | - | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. It is only configurable in the Standard tier. |
+ | Setting | Suggested value | Description |
+ | -- | -- | |
+ | **Resource name** | Globally unique name | The globally unique Name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `A-Z`, `0-9`, and `-`. |
+ | **Subscription** | Your subscription | The Azure subscription under which this new Web PubSub service instance is created. |
+ | **[Resource Group]** | myResourceGroup | Name for the new resource group in which to create your Web PubSub service instance. |
+ | **Location** | West US | Choose a [region](https://azure.microsoft.com/regions/) near you. |
+ | **Pricing tier** | Free | You can first try Azure Web PubSub service for free. Learn more details about [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/) |
+ | **Unit count** | - | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. It is only configurable in the Standard tier. |
- :::image type="content" source="./media/howto-develop-create-instance/create-web-pubsub-instance-in-portal.png" alt-text="Screenshot of creating the Azure Web PubSub instance in portal.":::
+ :::image type="content" source="./media/howto-develop-create-instance/create-web-pubsub-instance-in-portal.png" alt-text="Screenshot of creating the Azure Web PubSub instance in portal.":::
4. Select **Create** to provision your Web PubSub resource.-
+ ::: zone-end
::: zone pivot="method-azure-cli"+ ## Create a resource using Azure CLI
-The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
+The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
> [!IMPORTANT] > This quickstart requires Azure CLI of version 2.22.0 or higher.
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
[!INCLUDE [Create a Web PubSub instance](includes/cli-awps-creation.md)] ::: zone-end - ::: zone pivot="method-bicep"+ ## Create a resource using Bicep template [!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
The template used in this quickstart is from [Azure Quickstart Templates](/sampl
1. Save the Bicep file as **main.bicep** to your local computer. 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
- # [CLI](#tab/CLI)
+ # [CLI](#tab/CLI)
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep
- ```
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
- # [PowerShell](#tab/PowerShell)
+ # [PowerShell](#tab/PowerShell)
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
- ```
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
-
+ ***
- When the deployment finishes, you should see a message indicating the deployment succeeded.
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
## Review deployed resources
Get-AzResource -ResourceGroupName exampleRG
``` + ## Clean up resources When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
az group delete --name exampleRG
```azurepowershell-interactive Remove-AzResourceGroup -Name exampleRG ```+ ::: zone-end ## Next step+ Now that you have created a resource, you are ready to put it to use. Next, you will learn how to subscribe and publish messages among your clients.
-> [!div class="nextstepaction"]
+
+> [!div class="nextstepaction"]
> [PubSub among clients](quickstarts-pubsub-among-clients.md)
azure-web-pubsub Howto Develop Event Listener https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-event-listener.md
If you want to listen to your [client events](concept-service-internals.md#terms
This tutorial shows you how to authorize your Web PubSub service to connect to Event Hubs and how to add an event listener rule to your service settings.
-Web PubSub service uses Azure Active Directory (Azure AD) authentication with managed identity to connect to Event Hubs. Therefore, you should enable the managed identity of the service and make sure it has proper permissions to connect to Event Hubs. You can grant the built-in [Azure Event Hubs Data sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender) role to the managed identity so that it has enough permissions.
+Web PubSub service uses Microsoft Entra ID with managed identity to connect to Event Hubs. Therefore, you should enable the managed identity of the service and make sure it has proper permissions to connect to Event Hubs. You can grant the built-in [Azure Event Hubs Data sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender) role to the managed identity so that it has enough permissions.
To configure an Event Hubs listener, you need to:
-1. [Add a managed identity to your Web PubSub service](#add-a-managed-identity-to-your-web-pubsub-service)
-2. [Grant the managed identity an `Azure Event Hubs Data sender` role](#grant-the-managed-identity-an-azure-event-hubs-data-sender-role)
-3. [Add an event listener rule to your service settings](#add-an-event-listener-rule-to-your-service-settings)
+- [Add a managed identity to your Web PubSub service](#add-a-managed-identity-to-your-web-pubsub-service)
+- [Grant the managed identity an `Azure Event Hubs Data sender` role](#grant-the-managed-identity-an-azure-event-hubs-data-sender-role)
+- [Add an event listener rule to your service settings](#add-an-event-listener-rule-to-your-service-settings)
## Configure an event listener
Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity
### Add an event listener rule to your service settings
-1. Find your service from **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your event listener. For an existing hub configuration, select **...** on right side will navigate to the same editing page.
+1. Find your service from **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your event listener. For an existing hub configuration, selecting **...** on the right side navigates to the same editing page.
:::image type="content" source="media/howto-develop-event-listener/web-pubsub-settings.png" alt-text="Screenshot of Web PubSub settings."::: 1. Then, on the editing page, configure the hub name and select **Add** to add an event listener.
Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity
1. On the **Configure Event Listener** page, first configure an event hub endpoint. You can select **Select Event Hub from your subscription**, or directly input the fully qualified namespace and the event hub name. Then select the `user` and `system` events you'd like to listen to. Finally, select **Confirm**. :::image type="content" source="media/howto-develop-event-listener/configure-event-hub-listener.png" alt-text="Screenshot of configuring Event Hubs Listener.":::

## Test your configuration with live demo

1. Open this [Event Hubs Consumer Client](https://awpseventlistenerdemo.blob.core.windows.net/eventhub-consumer/index.html) web app, and input the Event Hubs connection string to connect to an event hub as a consumer. If you get the Event Hubs connection string from an Event Hubs namespace resource instead of an event hub instance, you need to specify the event hub name. This consumer client connects in a mode that only reads new events; events published earlier aren't shown. You can change the consumer client connection mode to read all the available events in the production environment.
1. Use this [WebSocket Client](https://awpseventlistenerdemo.blob.core.windows.net/webpubsub-client/websocket-client.html) web app to generate client events. If you've configured the system event `connected` to be sent to that event hub, you should see a printed `connected` event in the Event Hubs consumer client after connecting to the Web PubSub service successfully. You can also generate a user event with the app.
- :::image type="content" source="media/howto-develop-event-listener/eventhub-consumer-connected-event.png" alt-text="Screenshot of a printed connected event in the Event Hubs consumer client app":::
- :::image type="content" source="media/howto-develop-event-listener/web-pubsub-client-specify-event-name.png" alt-text="The area of the WebSocket client app to generate a user event":::
+ :::image type="content" source="media/howto-develop-event-listener/eventhub-consumer-connected-event.png" alt-text="Screenshot of a printed connected event in the Event Hubs consumer client app.":::
+ :::image type="content" source="media/howto-develop-event-listener/web-pubsub-client-specify-event-name.png" alt-text="Screenshot showing the area of the WebSocket client app to generate a user event.":::
## Next steps In this article, you learned how event listeners work and how to configure an event listener with an event hub endpoint. To learn the data format sent to Event Hubs, read the following specification.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Specification: CloudEvents AMQP extension for Azure Web PubSub](./reference-cloud-events-amqp.md)
-<!--TODO: Add demo-->
+
+<!--TODO: Add demo-->
azure-web-pubsub Howto Develop Eventhandler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-eventhandler.md
description: Guidance about event handler concepts and integration introduction
-+ Last updated 01/27/2023 # Event handler in Azure Web PubSub service
-The event handler handles the incoming client events. Event handlers are registered and configured in the service through the Azure portal or Azure CLI. When a client event is triggered, the service can send the event to the appropriate event handler. The Web PubSub service now supports the event handler as the server-side, which exposes the publicly accessible endpoint for the service to invoke when the event is triggered. In other words, it acts as a **webhook**.
+The event handler handles the incoming client events. Event handlers are registered and configured in the service through the Azure portal or Azure CLI. When a client event is triggered, the service can send the event to the appropriate event handler. The Web PubSub service supports the event handler as a server-side component that exposes a publicly accessible endpoint for the service to invoke when an event is triggered. In other words, it acts as a **webhook**.
The Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md).
-For every event, the service formulates an HTTP POST request to the registered upstream endpoint and expects an HTTP response.
+For every event, the service formulates an HTTP POST request to the registered upstream endpoint and expects an HTTP response.
The data sending from the service to the server is always in CloudEvents `binary` format.
For now, we don't support [WebHook-Request-Rate](https://github.com/cloudevents/
You can use any of these methods to authenticate between the service and webhook. - Anonymous mode-- Simple Auth with `?code=<code>` is provided through the configured Webhook URL as query parameter.-- Azure Active Directory(Azure AD) authentication. For more information, see [Use a managed identity in client events](howto-use-managed-identity.md#use-a-managed-identity-in-client-events-scenarios).
+- Simple authentication: a `?code=<code>` value provided as a query parameter in the configured webhook URL.
+- Microsoft Entra authorization. For more information, see [Use a managed identity in client events](howto-use-managed-identity.md#use-a-managed-identity-in-client-events-scenarios).
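Putting the pieces above together, here's a minimal sketch of an upstream event handler, assuming an Express-based server and the simple `?code=` authentication; the `ce-*` header names follow the CloudEvents HTTP binary binding, where metadata travels in headers and the payload is the raw body.

```js
// Minimal sketch of an upstream event handler (assumes Express).
const express = require("express");
const app = express();
app.use(express.text({ type: "*/*" })); // read the raw CloudEvents binary body

const EXPECTED_CODE = process.env.WEBHOOK_CODE; // hypothetical simple-auth code

app.post("/eventhandler", (req, res) => {
  // Simple auth: reject requests whose ?code= doesn't match.
  if (EXPECTED_CODE && req.query.code !== EXPECTED_CODE) {
    return res.status(401).end();
  }
  const eventType = req.header("ce-type"); // e.g. azure.webpubsub.user.message
  const connectionId = req.header("ce-connectionid");
  console.log(`event ${eventType} from connection ${connectionId}:`, req.body);
  res.status(200).send("ack"); // the service expects an HTTP response
});

app.listen(8080);
```

Any 2xx response acknowledges the event; the response body matters for synchronous events such as `connect`.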
## Configure event handler
You can add an event handler to a new hub or edit an existing hub.
To configure an event handler in a new hub:
-1. Go to your Azure Web PubSub service page in the **Azure portal**.
-1. Select **Settings** from the menu.
+1. Go to your Azure Web PubSub service page in the **Azure portal**.
+1. Select **Settings** from the menu.
1. Select **Add** to create a hub and configure your server-side webhook URL. Note: To add an event handler to an existing hub, select the hub and select **Edit**. :::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler."::: 1. Enter your hub name. 1. Select **Add** under **Configure Event Handlers**.
-1. In the event handler page, configure the following fields:
- 1. Enter the server webhook URL in the **URL Template** field.
- 1. Select the **System events** that you want to subscribe to.
- 1. Select the **User events** that you want to subscribe to.
- 1. Select **Authentication** method to authenticate upstream requests.
- 1. Select **Confirm**.
+1. In the event handler page, configure the following fields:
+   1. Enter the server webhook URL in the **URL Template** field.
+   1. Select the **System events** that you want to subscribe to.
+   1. Select the **User events** that you want to subscribe to.
+   1. Select **Authentication** method to authenticate upstream requests.
+   1. Select **Confirm**.
+ :::image type="content" source="media/howto-develop-eventhandler/configure-event-handler.png" alt-text="Screenshot of Azure Web PubSub Configure Event Handler.":::
1. Select **Save** at the top of the **Configure Hub Settings** page.
To configure an event handler in a new hub:
Use the Azure CLI [**az webpubsub hub**](/cli/azure/webpubsub/hub) group commands to configure the event handler settings.
-Commands | Description
|--
-`create` | Create hub settings for WebPubSub Service.
-`delete` | Delete hub settings for WebPubSub Service.
-`list` | List all hub settings for WebPubSub Service.
-`show` | Show hub settings for WebPubSub Service.
-`update` | Update hub settings for WebPubSub Service.
+| Commands | Description |
+| -- | -- |
+| `create` | Create hub settings for WebPubSub Service. |
+| `delete` | Delete hub settings for WebPubSub Service. |
+| `list` | List all hub settings for WebPubSub Service. |
+| `show` | Show hub settings for WebPubSub Service. |
+| `update` | Update hub settings for WebPubSub Service. |
Here's an example of creating two webhook URLs for hub `MyHub` of `MyWebPubSub` resource:
azure-web-pubsub Howto Develop Reliable Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-reliable-clients.md
description: How to create reliable Websocket clients
-+ Last updated 01/12/2023
The Web PubSub service supports two reliable subprotocols `json.reliable.webpubs
The simplest way to create a reliable client is to use the client SDK. The client SDK implements the [Web PubSub client specification](./reference-client-specification.md) and uses `json.reliable.webpubsub.azure.v1` by default. Refer to [PubSub with client SDK](./quickstart-use-client-sdk.md) for a quick start.

## The Hard Way - Implement by hand

The following tutorial walks you through the important parts of implementing the [Web PubSub client specification](./reference-client-specification.md). This guide isn't for people looking for a quick start, but for those who want to understand the principles of achieving reliability. For a quick start, use the client SDK.
To use reliable subprotocols, you must set the subprotocol when constructing Web
- Use Json reliable subprotocol:
- ```js
- var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.reliable.webpubsub.azure.v1');
- ```
+ ```js
+ var pubsub = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "json.reliable.webpubsub.azure.v1"
+ );
+ ```
- Use Protobuf reliable subprotocol:
- ```js
- var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'protobuf.reliable.webpubsub.azure.v1');
- ```
+ ```js
+ var pubsub = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "protobuf.reliable.webpubsub.azure.v1"
+ );
+ ```
### Connection recovery Connection recovery is the basis of achieving reliability and must be implemented when using the `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1` protocols.
-Websocket connections rely on TCP. When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client on connection recovery
+WebSocket connections rely on TCP. When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client on connection recovery.
-When the client reconnects to the service using reliable subprotocols, the client will receive a `Connected` message containing the `connectionId` and `reconnectionToken`. The `connectionId` identifies the session of the connection in the service.
+When the client reconnects to the service using reliable subprotocols, the client will receive a `Connected` message containing the `connectionId` and `reconnectionToken`. The `connectionId` identifies the session of the connection in the service.
```json {
- "type":"system",
- "event":"connected",
- "connectionId": "<connection_id>",
- "reconnectionToken": "<reconnection_token>"
+ "type": "system",
+ "event": "connected",
+ "connectionId": "<connection_id>",
+ "reconnectionToken": "<reconnection_token>"
} ```
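As a sketch of how a client might use these values, the following assumes the recovery query parameters `awps_connection_id` and `awps_reconnection_token` described in the client specification:

```js
// Recovery sketch. Assumes the recovery query parameters awps_connection_id and
// awps_reconnection_token from the Web PubSub client specification.
const endpoint = "wss://test.webpubsub.azure.com/client/hubs/hub1";
let connectionId, reconnectionToken;

function connect(url) {
  const ws = new WebSocket(url, "json.reliable.webpubsub.azure.v1");
  ws.onmessage = (e) => {
    const msg = JSON.parse(e.data);
    if (msg.type === "system" && msg.event === "connected") {
      connectionId = msg.connectionId; // remember the session identity
      reconnectionToken = msg.reconnectionToken;
    }
  };
  ws.onclose = () => {
    // Recover the existing session rather than starting a new connection.
    connect(
      `${endpoint}?awps_connection_id=${connectionId}&awps_reconnection_token=${reconnectionToken}`
    );
  };
  return ws;
}

connect(endpoint);
```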
Connection recovery may fail if the network issue hasn't been recovered yet. The
### Publisher
-Clients that send events to event handlers or publish messages to other clients are called publishers. Publishers should set `ackId` in the message to receive an acknowledgment from the Web PubSub service that publishing the message was successful or not.
+Clients that send events to event handlers or publish messages to other clients are called publishers. Publishers should set `ackId` in the message to receive an acknowledgment from the Web PubSub service indicating whether the message was published successfully.
-The `ackId` is the identifier of the message, each new message should use a unique ID. The original `ackId` should be used when resending a message.
+The `ackId` is the identifier of the message; each new message should use a unique ID. The original `ackId` should be used when resending a message.
A sample group send message: ```json {
- "type": "sendToGroup",
- "group": "group1",
- "dataType" : "text",
- "data": "text data",
- "ackId": 1
+ "type": "sendToGroup",
+ "group": "group1",
+ "dataType": "text",
+ "data": "text data",
+ "ackId": 1
} ```
A sample ack response:
```json {
- "type": "ack",
- "ackId": 1,
- "success": true
+ "type": "ack",
+ "ackId": 1,
+ "success": true
} ``` When the Web PubSub service returns an ack response with `success: true`, the message has been processed by the service, and the client can expect the message will be delivered to all subscribers.
-When the service experiences a transient internal error and the message can't be sent to subscriber, the publisher will receive an ack with `success: false`. The publisher should read the error to determine whether or not to resend the message. If the message is resent, the same `ackId` should be used.
+When the service experiences a transient internal error and the message can't be sent to the subscriber, the publisher receives an ack with `success: false`. The publisher should read the error to determine whether or not to resend the message. If the message is resent, the same `ackId` should be used.
```json {
- "type": "ack",
- "ackId": 1,
- "success": false,
- "error": {
- "name": "InternalServerError",
- "message": "Internal server error"
- }
+ "type": "ack",
+ "ackId": 1,
+ "success": false,
+ "error": {
+ "name": "InternalServerError",
+ "message": "Internal server error"
+ }
} ``` ![Message Failure](./media/howto-develop-reliable-clients/message-failed.png)
-If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after recovery. When the message was previously processed by the service, it will send an ack containing a `Duplicate` error. The publisher should stop resending this message.
+If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after recovery. If the message was previously processed by the service, the service sends an ack containing a `Duplicate` error, and the publisher should stop resending this message.
```json {
- "type": "ack",
- "ackId": 1,
- "success": false,
- "error": {
- "name": "Duplicate",
- "message": "Message with ack-id: 1 has been processed"
- }
+ "type": "ack",
+ "ackId": 1,
+ "success": false,
+ "error": {
+ "name": "Duplicate",
+ "message": "Message with ack-id: 1 has been processed"
+ }
} ```
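The ack-handling rules above can be condensed into a small sketch; the `pending` map and the helper names here are illustrative, not part of the protocol:

```js
// Publisher-side ack handling sketch.
const pending = new Map(); // ackId -> message awaiting an ack
let nextAckId = 1;

function sendToGroup(ws, group, data) {
  const message = { type: "sendToGroup", group, dataType: "text", data, ackId: nextAckId++ };
  pending.set(message.ackId, message);
  ws.send(JSON.stringify(message));
}

function handleAck(ws, ack) {
  if (ack.success || (ack.error && ack.error.name === "Duplicate")) {
    // Delivered, or already processed by the service: stop resending.
    pending.delete(ack.ackId);
  } else if (pending.has(ack.ackId)) {
    // Transient failure: resend with the same ackId.
    ws.send(JSON.stringify(pending.get(ack.ackId)));
  }
}
```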
A sample sequence ack:
```json {
- "type": "sequenceAck",
- "sequenceId": 1
+ "type": "sequenceAck",
+ "sequenceId": 1
} ```
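A subscriber might track and acknowledge sequence IDs like the following sketch, which assumes an open WebSocket `ws` using a reliable subprotocol and that data messages from the service carry a `sequenceId` field:

```js
// Subscriber-side sequence-ack sketch.
let latestSequenceId = 0;

ws.addEventListener("message", (e) => {
  const msg = JSON.parse(e.data);
  if (msg.sequenceId) {
    latestSequenceId = Math.max(latestSequenceId, msg.sequenceId);
  }
  // ...handle the message itself...
});

// Periodically acknowledge the largest sequenceId seen so the service can
// release delivered messages.
setInterval(() => {
  if (latestSequenceId > 0) {
    ws.send(JSON.stringify({ type: "sequenceAck", sequenceId: latestSequenceId }));
  }
}, 1000);
```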
azure-web-pubsub Howto Disable Local Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-disable-local-auth.md
Title: Disable local (access key) authentication with Azure Web PubSub Service
-description: This article provides information about how to disable access key authentication and use only Azure AD authentication with Azure Web PubSub Service.
+description: This article provides information about how to disable access key authentication and use only Microsoft Entra authorization with Azure Web PubSub Service.
# Disable local (access key) authentication with Azure Web PubSub Service
-There are two ways to authenticate to Azure Web PubSub Service resources: Azure Active Directory (Azure AD) and Access Key. Azure AD provides superior security and ease of use over access key. With Azure AD, there's no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Azure AD with your Azure Web PubSub Service resources when possible.
+There are two ways to authenticate to Azure Web PubSub Service resources: Microsoft Entra ID and Access Key. Microsoft Entra ID provides superior security and ease of use over access key. With Microsoft Entra ID, there's no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Microsoft Entra ID with your Azure Web PubSub Service resources when possible.
> [!IMPORTANT] > Disabling local authentication can have following influences.
-> - The current set of access keys will be permanently deleted.
-> - Tokens signed with current set of access keys will become unavailable.
-> - Signature will **NOT** be attached in the upstream request header. Please visit *[how to validate access token](./howto-use-managed-identity.md#validate-access-tokens)* to learn how to validate requests via Azure AD token.
+>
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with current set of access keys will become unavailable.
+> - Signature will **NOT** be attached in the upstream request header. Please visit _[how to validate access token](./howto-use-managed-identity.md#validate-access-tokens)_ to learn how to validate requests via Microsoft Entra token.
## Use Azure portal
You can disable local authentication by setting `disableLocalAuth` property to t
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "resource_name": {
- "defaultValue": "test-for-disable-aad",
- "type": "String"
- }
- },
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.SignalRService/WebPubSub",
- "apiVersion": "2022-08-01-preview",
- "name": "[parameters('resource_name')]",
- "location": "eastus",
- "sku": {
- "name": "Premium_P1",
- "tier": "Premium",
- "size": "P1",
- "capacity": 1
- },
- "properties": {
- "tls": {
- "clientCertEnabled": false
- },
- "networkACLs": {
- "defaultAction": "Deny",
- "publicNetwork": {
- "allow": [
- "ServerConnection",
- "ClientConnection",
- "RESTAPI",
- "Trace"
- ]
- },
- "privateEndpoints": []
- },
- "publicNetworkAccess": "Enabled",
- "disableLocalAuth": true,
- "disableAadAuth": false
- }
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "resource_name": {
+ "defaultValue": "test-for-disable-aad",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.SignalRService/WebPubSub",
+ "apiVersion": "2022-08-01-preview",
+ "name": "[parameters('resource_name')]",
+ "location": "eastus",
+ "sku": {
+ "name": "Premium_P1",
+ "tier": "Premium",
+ "size": "P1",
+ "capacity": 1
+ },
+ "properties": {
+ "tls": {
+ "clientCertEnabled": false
+ },
+ "networkACLs": {
+ "defaultAction": "Deny",
+ "publicNetwork": {
+ "allow": [
+ "ServerConnection",
+ "ClientConnection",
+ "RESTAPI",
+ "Trace"
+ ]
+ },
+ "privateEndpoints": []
+ },
+ "publicNetworkAccess": "Enabled",
+ "disableLocalAuth": true,
+ "disableAadAuth": false
+ }
+ }
+ ]
} ```
You can assign the [Azure Web PubSub Service should have local authentication me
See the following docs to learn about authentication methods. -- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)
+- [Overview of Microsoft Entra ID for Web PubSub](concept-azure-ad-authorization.md)
- [Authenticate with Azure applications](./howto-authorize-from-application.md) - [Authenticate with managed identities](./howto-authorize-from-managed-identity.md)
azure-web-pubsub Howto Generate Client Access Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-generate-client-access-url.md
# How to generate client access URL for the clients
-A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL.
+A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL.
- For quick start, copy one from the Azure portal - For development, generate the value using [Web PubSub server SDK](./reference-server-sdk-js.md)-- If you're using Azure AD, you can also invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token)
+- If you're using Microsoft Entra ID, you can also invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token)
## Copy from the Azure portal+ In the Keys tab in Azure portal, there's a Client URL Generator tool to quickly generate a Client Access URL for you, as shown in the following diagram. Values input here aren't stored. :::image type="content" source="./media/howto-websocket-connect/generate-client-url.png" alt-text="Screenshot of the Web PubSub Client URL Generator."::: ## Generate from service SDK+ The same Client Access URL can be generated by using the Web PubSub server SDK. # [JavaScript](#tab/javascript)
The same Client Access URL can be generated by using the Web PubSub server SDK.
1. Follow [Getting started with server SDK](./reference-server-sdk-js.md#getting-started) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`:
- * Configure user ID
- ```js
- let token = await serviceClient.getClientAccessToken({ userId: "user1" });
- ```
- * Configure the lifetime of the token
- ```js
- let token = await serviceClient.getClientAccessToken({ expirationTimeInMinutes: 5 });
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```js
- let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.joinLeaveGroup.group1"] });
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```js
- let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.sendToGroup.group1"] });
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```js
- let token = await serviceClient.getClientAccessToken({ groups: ["group1"] });
- ```
+
+ - Configure user ID
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({ userId: "user1" });
+ ```
+
+ - Configure the lifetime of the token
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ expirationTimeInMinutes: 5,
+ });
+ ```
+
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ roles: ["webpubsub.joinLeaveGroup.group1"],
+ });
+ ```
+
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ roles: ["webpubsub.sendToGroup.group1"],
+ });
+ ```
+
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ groups: ["group1"],
+ });
+ ```
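A client can then connect with the generated value; this sketch assumes the returned token object exposes a `url` property, as the JavaScript server SDK does:

```js
// Sketch: use the generated Client Access URL to connect a WebSocket client.
let token = await serviceClient.getClientAccessToken({ userId: "user1" });
// token.url is assumed to be the full Client Access URL, including access_token.
const ws = new WebSocket(token.url, "json.webpubsub.azure.v1");
ws.onopen = () => console.log("connected as user1");
```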
# [C#](#tab/csharp) 1. Follow [Getting started with server SDK](./reference-server-sdk-csharp.md#getting-started) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.GetClientAccessUri`:
- * Configure user ID
- ```csharp
- var url = service.GetClientAccessUri(userId: "user1");
- ```
- * Configure the lifetime of the token
- ```csharp
- var url = service.GetClientAccessUri(expiresAfter: TimeSpan.FromMinutes(5));
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```csharp
- var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.joinLeaveGroup.group1" });
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```csharp
- var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.group1" });
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```csharp
- var url = service.GetClientAccessUri(groups: new string[] { "group1" });
- ```
+
+ - Configure user ID
+
+ ```csharp
+ var url = service.GetClientAccessUri(userId: "user1");
+ ```
+
+ - Configure the lifetime of the token
+
+ ```csharp
+ var url = service.GetClientAccessUri(expiresAfter: TimeSpan.FromMinutes(5));
+ ```
+
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+
+ ```csharp
+ var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.joinLeaveGroup.group1" });
+ ```
+
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+
+ ```csharp
+ var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.group1" });
+ ```
+
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+
+ ```csharp
+ var url = service.GetClientAccessUri(groups: new string[] { "group1" });
+ ```
# [Python](#tab/python) 1. Follow [Getting started with server SDK](./reference-server-sdk-python.md#install-the-package) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.get_client_access_token`:
- * Configure user ID
- ```python
- token = service.get_client_access_token(user_id="user1")
- ```
- * Configure the lifetime of the token
- ```python
- token = service.get_client_access_token(minutes_to_expire=5)
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```python
- token = service.get_client_access_token(roles=["webpubsub.joinLeaveGroup.group1"])
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```python
- token = service.get_client_access_token(roles=["webpubsub.sendToGroup.group1"])
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```python
- token = service.get_client_access_token(groups=["group1"])
- ```
+
+ - Configure user ID
+
+ ```python
+ token = service.get_client_access_token(user_id="user1")
+ ```
+
+ - Configure the lifetime of the token
+
+ ```python
+ token = service.get_client_access_token(minutes_to_expire=5)
+ ```
+
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+
+ ```python
+ token = service.get_client_access_token(roles=["webpubsub.joinLeaveGroup.group1"])
+ ```
+
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+
+ ```python
+ token = service.get_client_access_token(roles=["webpubsub.sendToGroup.group1"])
+ ```
+
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+
+ ```python
+ token = service.get_client_access_token(groups=["group1"])
+ ```
# [Java](#tab/java) 1. Follow [Getting started with server SDK](./reference-server-sdk-java.md#getting-started) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`:
- * Configure user ID
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.setUserId(id);
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure the lifetime of the token
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.setExpiresAfter(Duration.ofDays(1));
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.addRole("webpubsub.joinLeaveGroup.group1");
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.addRole("webpubsub.sendToGroup.group1");
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.setGroups(Arrays.asList("group1")),
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
+
+ - Configure user ID
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.setUserId(id);
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+
+ - Configure the lifetime of the token
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.setExpiresAfter(Duration.ofDays(1));
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.addRole("webpubsub.joinLeaveGroup.group1");
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.addRole("webpubsub.sendToGroup.group1");
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+    option.setGroups(Arrays.asList("group1"));
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ In real-world code, we usually have a server-side component that hosts the logic for generating the Client Access URL. When a client request comes in, the server side can use the general authentication/authorization workflow to validate the client request. Only valid client requests get the Client Access URL back. ## Invoke the Generate Client Token REST API
-You can enable Azure AD in your service and use the Azure AD token to invoke [Generate Client Token rest API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) to get the token for the client to use.
-
-1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Azure AD.
-2. Follow [Get Azure AD token](./howto-authorize-from-application.md#use-postman-to-get-the-azure-ad-token) to get the Azure AD token with Postman.
-3. Use the Azure AD token to invoke `:generateToken` with Postman:
- 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01`
- 2. On the **Auth** tab, select **Bearer Token** and paste the Azure AD token fetched in the previous step
- 3. Select **Send** and you see the Client Access Token in the response:
- ```json
- {
- "token": "ABCDEFG.ABC.ABC"
- }
- ```
-4. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>`
+You can enable Microsoft Entra ID in your service and use the Microsoft Entra token to invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) to get the token for the client to use.
+
+1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Microsoft Entra ID.
+2. Follow [Get Microsoft Entra token](./howto-authorize-from-application.md#use-postman-to-get-the-microsoft-entra-token) to get the Microsoft Entra token with Postman.
+3. Use the Microsoft Entra token to invoke `:generateToken` with Postman:
+ 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01`
+ 2. On the **Auth** tab, select **Bearer Token** and paste the Microsoft Entra token fetched in the previous step
+ 3. Select **Send** and you see the Client Access Token in the response:
+
+ ```json
+ {
+ "token": "ABCDEFG.ABC.ABC"
+ }
+ ```
+
+4. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>`
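The same flow can be scripted instead of using Postman; here's a minimal `fetch` sketch, where `endpoint`, `hub`, and `entraToken` are hypothetical values your app supplies:

```js
// Sketch: call the Generate Client Token REST API with a Microsoft Entra token.
async function getClientAccessUrl(endpoint, hub, entraToken) {
  const res = await fetch(
    `https://${endpoint}/api/hubs/${hub}/:generateToken?api-version=2022-11-01`,
    { method: "POST", headers: { Authorization: `Bearer ${entraToken}` } }
  );
  const { token } = await res.json(); // e.g. { "token": "ABCDEFG.ABC.ABC" }
  return `wss://${endpoint}/client/hubs/${hub}?access_token=${token}`;
}
```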
azure-web-pubsub Howto Monitor Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-monitor-azure-policy.md
[Azure Policy](../governance/policy/overview.md) is a free service in Azure to create, assign, and manage policies that enforce rules and effects to ensure your resources stay compliant with your corporate standards and service level agreements. Use these policies to audit Web PubSub resources for compliance.
-This article describes the built-in policies for Azure Web PubSub Service.
+This article describes the built-in policies for Azure Web PubSub Service.
## Built-in policy definitions - The following table contains an index of Azure Policy built-in policy definitions for Azure Web PubSub. For Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the Version column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
The name of each built-in policy definition links to the policy definition in th
When assigning a policy definition:
-* You can assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
-* Policy assignments can be scoped to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md).
-* You can enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
-* Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope.
+- You can assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
+- Policy assignments can be scoped to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md).
+- You can enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
+- Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope.
> [!NOTE] > After you assign or update a policy, it takes some time for the assignment to be applied to resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
When a resource is non-compliant, there are many possible reasons. To determine
1. Open the Azure portal and search for **Policy**. 1. Select **Policy**. 1. Select **Compliance**.
-1. Use the filters to display by **Scope**, **Type** or **Compliance state**. Use search list by name or
- ID.
- [ ![Policy compliance in portal](./media/howto-monitor-azure-policy/azure-policy-compliance.png) ](./media/howto-monitor-azure-policy/azure-policy-compliance.png#lightbox)
-1. Select a policy to review aggregate compliance details and events.
+1. Use the filters to display by **Scope**, **Type**, or **Compliance state**, or search the
+   list by name or ID.
+ [ ![Screenshot showing policy compliance in portal.](./media/howto-monitor-azure-policy/azure-policy-compliance.png) ](./media/howto-monitor-azure-policy/azure-policy-compliance.png#lightbox)
+1. Select a policy to review aggregate compliance details and events.
1. Select a specific Web PubSub for resource compliance. ### Policy compliance in the Azure CLI
az policy state list \
## Next steps
-* Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
-
-* Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
+- Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
-* Learn more about [governance capabilities](../governance/index.yml) in Azure
+- Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
+- Learn more about [governance capabilities](../governance/index.yml) in Azure
<!-- LINKS - External -->
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
azure-web-pubsub Howto Troubleshoot Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-troubleshoot-resource-logs.md
description: Learn what resource logs are and how to use them for troubleshootin
-+ Last updated 07/21/2022 # How to troubleshoot with resource logs
-This how-to guide provides an overview of Azure Web PubSub resource logs and some tips for using the logs to troubleshoot certain problems. Logs can be used for issue identification, connection tracking, message tracing, HTTP request tracing, and analysis.
+This how-to guide provides an overview of Azure Web PubSub resource logs and some tips for using the logs to troubleshoot certain problems. Logs can be used for issue identification, connection tracking, message tracing, HTTP request tracing, and analysis.
-## What are resource logs?
+## What are resource logs?
+
+There are three types of resource logs: _Connectivity_, _Messaging_, and _HTTP requests_.
-There are three types of resource logs: *Connectivity*, *Messaging*, and *HTTP requests*.
- **Connectivity** logs provide detailed information for Azure Web PubSub hub connections. For example, basic information (user ID, connection ID, and so on) and event information (connect, disconnect, and so on). - **Messaging** logs provide tracing information for the Azure Web PubSub hub messages received and sent via Azure Web PubSub service. For example, tracing ID and message type of the message. - **HTTP requests** logs provide tracing information for HTTP requests to the Azure Web PubSub service. For example, HTTP method and status code. Typically the HTTP request is recorded when it arrives at or leaves the service.
The Azure Web PubSub service live trace tool has ability to collect resource log
> [!NOTE] > The following considerations apply to using the live trace tool:
-> - The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic).
-> - The live trace tool does not currently support Azure Active Directory authentication. You must enable access keys to use live trace. Under **Settings**, select **Keys**, and then enable **Access Key**.
-> - The Azure Web PubSub service Free Tier instance has a daily limit of 20,000 messages (outbound traffic). Live trace can cause you to unexpectedly reach the daily limit.
+>
+> - The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic).
+> - The live trace tool does not currently support Microsoft Entra authorization. You must enable access keys to use live trace. Under **Settings**, select **Keys**, and then enable **Access Key**.
+> - The Azure Web PubSub service Free Tier instance has a daily limit of 20,000 messages (outbound traffic). Live trace can cause you to unexpectedly reach the daily limit.
### Launch the live trace tool
The Azure Web PubSub service live trace tool has ability to collect resource log
1. Select **Save** and then wait until the settings take effect. 1. Select **Open Live Trace Tool**.
- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
+ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
### Capture the resource logs The live trace tool provides functionality to help you capture the resource logs for troubleshooting.
-* **Capture**: Begin to capture the real-time resource logs from Azure Web PubSub.
-* **Clear**: Clear the captured real-time resource logs.
-* **Log filter**: The live trace tool lets you filter the captured real-time resource logs with one specific key word. The common separators (for example, space, comma, semicolon, and so on) will be treated as part of the key word.
-* **Status**: The status shows whether the live trace tool is connected or disconnected with the specific instance.
+- **Capture**: Begin to capture the real-time resource logs from Azure Web PubSub.
+- **Clear**: Clear the captured real-time resource logs.
+- **Log filter**: The live trace tool lets you filter the captured real-time resource logs with one specific keyword. Common separators (for example, space, comma, and semicolon) are treated as part of the keyword.
+- **Status**: The status shows whether the live trace tool is connected to or disconnected from the specific instance.
:::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/live-trace-tool-capture.png" alt-text="Screenshot of capturing resource logs with live trace tool.":::
-The real-time resource logs captured by live trace tool contain detailed information for troubleshooting.
-
-| Name | Description |
-| | |
-| Time | Log event time |
-| Log Level | Log event level, can be [Trace \| Debug \| Informational \| Warning \| Error] |
-| Event Name | Operation name of the event |
-| Message | Detailed message for the event |
-| Exception | The run-time exception of Azure Web PubSub service |
-| Hub | User-defined hub name |
-| Connection ID | Identity of the connection |
-| User ID | User identity|
-| IP | Client IP address |
-| Route Template | The route template of the API |
-| Http Method | The Http method (POST/GET/PUT/DELETE) |
-| URL | The uniform resource locator |
-| Trace ID | The unique identifier to the invocation |
-| Status Code | The Http response code |
-| Duration | The duration between receiving the request and processing the request |
-| Headers | The additional information passed by the client and the server with an HTTP request or response |
+The real-time resource logs captured by the live trace tool contain detailed information for troubleshooting.
+
+| Name | Description |
+| -- | -- |
+| Time | Log event time |
+| Log Level | Log event level, can be [Trace \| Debug \| Informational \| Warning \| Error] |
+| Event Name | Operation name of the event |
+| Message | Detailed message for the event |
+| Exception | The run-time exception of Azure Web PubSub service |
+| Hub | User-defined hub name |
+| Connection ID | Identity of the connection |
+| User ID | User identity |
+| IP | Client IP address |
+| Route Template | The route template of the API |
+| Http Method | The Http method (POST/GET/PUT/DELETE) |
+| URL | The uniform resource locator |
+| Trace ID | The unique identifier to the invocation |
+| Status Code | The Http response code |
+| Duration | The duration between receiving the request and processing the request |
+| Headers | The additional information passed by the client and the server with an HTTP request or response |
## Capture resource logs with Azure Monitor
Currently Azure Web PubSub supports integration with [Azure Storage](../azure-mo
1. Go to Azure portal. 1. On **Diagnostic settings** page of your Azure Web PubSub service instance, select **+ Add diagnostic setting**.
- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-list.png" alt-text="Screenshot of viewing diagnostic settings and create a new one":::
:::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-list.png" alt-text="Screenshot of viewing diagnostic settings and creating a new one.":::
1. In **Diagnostic setting name**, input the setting name. 1. In **Category details**, select any log category you need. 1. In **Destination details**, check **Archive to a storage account**.
- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-details.png" alt-text="Screenshot of configuring diagnostic setting detail":::
:::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-details.png" alt-text="Screenshot of configuring diagnostic setting details.":::
+ 1. Select **Save** to save the diagnostic setting.
-> [!NOTE]
-> The storage account should be in the same region as Azure Web PubSub service.
+ > [!NOTE]
+ > The storage account should be in the same region as Azure Web PubSub service.
### Archive to an Azure Storage Account
All logs are stored in JavaScript Object Notation (JSON) format. Each entry has
Archive log JSON strings include elements listed in the following tables:
-**Format**
-
-Name | Description
-- | -
-time | Log event time
-level | Log event level
-resourceId | Resource ID of your Azure SignalR Service
-location | Location of your Azure SignalR Service
-category | Category of the log event
-operationName | Operation name of the event
-callerIpAddress | IP address of your server or client
-properties | Detailed properties related to this log event. For more detail, see the properties table below
-
-**Properties Table**
-
-Name | Description
-- | -
-collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling`
-connectionId | Identity of the connection
-userId | Identity of the user
-message | Detailed message of log event
-hub | User-defined Hub Name |
-routeTemplate | The route template of the API |
-httpMethod | The Http method (POST/GET/PUT/DELETE) |
-url | The uniform resource locator |
-traceId | The unique identifier to the invocation |
-statusCode | The Http response code |
-duration | The duration between the request is received and processed |
-headers | The additional information passed by the client and the server with an HTTP request or response |
+#### Format
+
+| Name | Description |
+| | - |
+| time | Log event time |
+| level | Log event level |
+| resourceId | Resource ID of your Azure Web PubSub service |
+| location | Location of your Azure Web PubSub service |
+| category | Category of the log event |
+| operationName | Operation name of the event |
+| callerIpAddress | IP address of your server or client |
+| properties | Detailed properties related to this log event. For more detail, see the properties table below |
+
+#### Properties Table
+
+| Name | Description |
+| - | -- |
+| collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling` |
+| connectionId | Identity of the connection |
+| userId | Identity of the user |
+| message | Detailed message of log event |
+| hub | User-defined Hub Name |
+| routeTemplate | The route template of the API |
+| httpMethod | The Http method (POST/GET/PUT/DELETE) |
+| url | The uniform resource locator |
+| traceId | The unique identifier to the invocation |
+| statusCode | The Http response code |
+| duration | The duration between receiving the request and processing the request |
+| headers | The additional information passed by the client and the server with an HTTP request or response |
The following code is an example of an archive log JSON string:
The following code is an example of an archive log JSON string:
### Archive to Azure Log Analytics To send logs to a Log Analytics workspace:
-1. On the **Diagnostic setting** page, under **Destination details**, select **Send to Log Analytics workspace.
+
+1. On the **Diagnostic setting** page, under **Destination details**, select **Send to Log Analytics workspace**.
1. Select the **Subscription** you want to use. 1. Select the **Log Analytics workspace** to use as the destination for the logs.
To view the resource logs, follow these steps:
1. Select `Logs` in your target Log Analytics.
- :::image type="content" alt-text="Log Analytics menu item" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png":::
+ :::image type="content" alt-text="Log Analytics menu item" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png":::
1. Enter `WebPubSubConnectivity`, `WebPubSubMessaging` or `WebPubSubHttpRequest`, and then select the time range to query the log. For advanced queries, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md).
- :::image type="content" alt-text="Query log in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png":::
-
+ :::image type="content" alt-text="Query log in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png":::
To use a sample query for Azure Web PubSub service, follow the steps below. 1. Select `Logs` in your target Log Analytics. 1. Select `Queries` to open the query explorer. 1. Select `Resource type` to group sample queries by resource type. 1. Select `Run` to run the script.
- :::image type="content" alt-text="Sample query in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png":::
-
+ :::image type="content" alt-text="Sample query in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png":::
Archive log columns include elements listed in the following table.
-Name | Description
-- | -
-TimeGenerated | Log event time
-Collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling`
-OperationName | Operation name of the event
-Location | Location of your Azure SignalR Service
-Level | Log event level
-CallerIpAddress | IP address of your server/client
-Message | Detailed message of log event
-UserId | Identity of the user
-ConnectionId | Identity of the connection
-ConnectionType | Type of the connection. Allowed values are: `Server` \| `Client`. `Server`: connection from server side; `Client`: connection from client side
-TransportType | Transport type of the connection. Allowed values are: `Websockets` \| `ServerSentEvents` \| `LongPolling`
+| Name | Description |
+| | - |
+| TimeGenerated | Log event time |
+| Collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling` |
+| OperationName | Operation name of the event |
+| Location | Location of your Azure Web PubSub service |
+| Level | Log event level |
+| CallerIpAddress | IP address of your server/client |
+| Message | Detailed message of log event |
+| UserId | Identity of the user |
+| ConnectionId | Identity of the connection |
+| ConnectionType | Type of the connection. Allowed values are: `Server` \| `Client`. `Server`: connection from server side; `Client`: connection from client side |
+| TransportType | Transport type of the connection. Allowed values are: `Websockets` \| `ServerSentEvents` \| `LongPolling` |
## Troubleshoot with the resource logs
The difference between `ConnectionAborted` and `ConnectionEnded` is that `Connec
The abort reasons are listed in the following table:
-| Reason | Description |
-| - | - |
-| Connection count reaches limit | Connection count reaches limit of your current price tier. Consider scale up service unit
-| Service reloading, reconnect | Azure Web PubSub service is reloading. You need to implement your own reconnect mechanism or manually reconnect to Azure Web PubSub service |
-| Internal server transient error | Transient error occurs in Azure Web PubSub service, should be auto recovered
+| Reason | Description |
+| - | - |
+| Connection count reaches limit | Connection count reaches the limit of your current price tier. Consider scaling up the service units. |
+| Service reloading, reconnect | Azure Web PubSub service is reloading. You need to implement your own reconnect mechanism or manually reconnect to Azure Web PubSub service. |
+| Internal server transient error | A transient error occurred in the Azure Web PubSub service; it should recover automatically. |
#### Unexpected increase in connections
If you get 401 Unauthorized returned for client requests, check your resource lo
### Throttling
-If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
+If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you've established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, you've used up the message quota. If you want to send more messages, consider upgrading your Azure Web PubSub service instance to the Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
azure-web-pubsub Howto Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-use-managed-identity.md
This article shows you how to create a managed identity for Azure Web PubSub Service and how to use it.
-> [!Important]
-> Azure Web PubSub Service can support only one managed identity. That means you can add either a system-assigned identity or a user-assigned identity.
+> [!Important]
+> Azure Web PubSub Service can support only one managed identity. That means you can add either a system-assigned identity or a user-assigned identity.
## Add a system-assigned identity
To set up a managed identity in the Azure portal, you'll first create an Azure W
2. Select **Identity**.
-4. On the **System assigned** tab, switch **Status** to **On**. Select **Save**.
+3. On the **System assigned** tab, switch **Status** to **On**. Select **Save**.
- :::image type="content" source="media/howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal":::
+ :::image type="content" source="media/howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal":::
## Add a user-assigned identity
Creating an Azure Web PubSub Service instance with a user-assigned identity requ
5. Search for the identity that you created earlier and select it. Select **Add**.
- :::image type="content" source="media/howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal":::
+ :::image type="content" source="media/howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal":::
## Use a managed identity in client events scenarios
Azure Web PubSub Service is a fully managed service, so you can't use a managed
2. Navigate to the rule and switch on **Authentication**.
- :::image type="content" source="media/howto-use-managed-identity/msi-settings.png" alt-text="msi-setting":::
+ :::image type="content" source="media/howto-use-managed-identity/msi-settings.png" alt-text="msi-setting":::
3. Select an application. The application ID becomes the `aud` claim in the obtained access token, which can be used as part of the validation in your event handler. You can choose one of the following:
- - Use default AAD application.
- - Select from existing AAD applications. The application ID of the one you choose will be used.
- - Specify an AAD application. The value should be [Resource ID of an Azure service](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication)
- > [!NOTE]
- > If you validate an access token by yourself in your service, you can choose any one of the resource formats. If you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource that the service provider requests.
+ - Use default Microsoft Entra application.
+ - Select from existing Microsoft Entra applications. The application ID of the one you choose will be used.
+   - Specify a Microsoft Entra application. The value should be the [resource ID of an Azure service](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+
+ > [!NOTE]
+ > If you validate an access token by yourself in your service, you can choose any one of the resource formats. If you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource that the service provider requests.
### Validate access tokens
The token in the `Authorization` header is a [Microsoft identity platform access
To validate access tokens, your app should also validate the audience and the signing tokens. These need to be validated against the values in the OpenID discovery document. For example, see the [tenant-independent version of the document](https://login.microsoftonline.com/common/.well-known/openid-configuration).
-The Azure Active Directory (Azure AD) middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice.
+The Microsoft Entra middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice.
-We provide libraries and code samples that show how to handle token validation. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language out there. For more information about Azure AD authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
+We provide libraries and code samples that show how to handle token validation. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language out there. For more information about Microsoft Entra authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
-Specially, if the event handler hosts in Azure Function or Web Apps, an easy way is to [Configure Azure AD login](../app-service/configure-authentication-provider-aad.md).
+In particular, if the event handler is hosted in Azure Functions or Web Apps, an easy way is to [configure Microsoft Entra login](../app-service/configure-authentication-provider-aad.md).
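If you validate tokens yourself, the checks boil down to verifying the signature against the keys referenced by the OpenID discovery document and verifying the `aud` claim. Below is a minimal JavaScript sketch, assuming the `jsonwebtoken` and `jwks-rsa` npm packages (an assumption, not the article's prescribed stack); the JWKS URI and expected audience depend on your tenant and the application you selected.

```js
const jwt = require("jsonwebtoken");
const jwksClient = require("jwks-rsa");

// Keys referenced by the (tenant-independent) OpenID discovery document
const client = jwksClient({
  jwksUri: "https://login.microsoftonline.com/common/discovery/v2.0/keys",
});

// Look up the signing key by the key ID (kid) in the token header
function getKey(header, callback) {
  client.getSigningKey(header.kid, (err, key) => {
    if (err) return callback(err);
    callback(null, key.getPublicKey());
  });
}

// Verify signature and audience; expectedAudience is the application ID
// (or resource ID) you configured for the event handler
function validateAccessToken(token, expectedAudience) {
  return new Promise((resolve, reject) => {
    jwt.verify(
      token,
      getKey,
      { audience: expectedAudience, algorithms: ["RS256"] },
      (err, decoded) => (err ? reject(err) : resolve(decoded))
    );
  });
}
```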
## Use a managed identity for Key Vault reference
azure-web-pubsub Quickstart Live Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-live-demo.md
In this quickstart, we use the *Client URL Generator* to generate a temporarily
In real-world applications, you can use SDKs in various languages to build your own application. We also provide Functions extensions for you to build serverless applications easily.
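As a taste of the SDK route, here's a minimal JavaScript sketch using the `@azure/web-pubsub` server SDK, assuming a placeholder connection string and hub name:

```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

async function main() {
  const serviceClient = new WebPubSubServiceClient(
    "<ConnectionString>",
    "<hubName>"
  );
  // Broadcast a message to every connected client in the hub
  await serviceClient.sendToAll({ message: "Hello world!" });
}

main().catch(console.error);
```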
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
description: A tutorial to walk through how to use Azure Web PubSub service and
-+ Last updated 05/05/2023
The Azure Web PubSub service helps you build real-time messaging web application
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Build a serverless real-time chat app
-> * Work with Web PubSub function trigger bindings and output bindings
-> * Deploy the function to Azure Function App
-> * Configure Azure Authentication
-> * Configure Web PubSub Event Handler to route events and messages to the application
+>
+> - Build a serverless real-time chat app
+> - Work with Web PubSub function trigger bindings and output bindings
+> - Deploy the function to Azure Function App
+> - Configure Azure Authentication
+> - Configure Web PubSub Event Handler to route events and messages to the application
## Prerequisites

# [JavaScript](#tab/javascript)
-* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
-* [Node.js](https://nodejs.org/en/download/), version 10.x.
- > [!NOTE]
- > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+- [Node.js](https://nodejs.org/en/download/), version 10.x.
+ > [!NOTE]
+ > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
-* The [Azure CLI](/cli/azure) to manage Azure resources.
+- The [Azure CLI](/cli/azure) to manage Azure resources.
# [C# in-process](#tab/csharp-in-process)
-* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
-* The [Azure CLI](/cli/azure) to manage Azure resources.
+- The [Azure CLI](/cli/azure) to manage Azure resources.
# [C# isolated process](#tab/csharp-isolated-process)
-* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
-* The [Azure CLI](/cli/azure) to manage Azure resources.
+- The [Azure CLI](/cli/azure) to manage Azure resources.
In this tutorial, you learn how to:
1. Make sure you have [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) installed. Then create an empty directory for the project and run the following commands in this working directory.
- # [JavaScript](#tab/javascript)
- ```bash
- func init --worker-runtime javascript
- ```
+ # [JavaScript](#tab/javascript)
+
+ ```bash
+ func init --worker-runtime javascript
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+ ```bash
+ func init --worker-runtime dotnet
+ ```
- # [C# in-process](#tab/csharp-in-process)
- ```bash
- func init --worker-runtime dotnet
- ```
+ # [C# isolated process](#tab/csharp-isolated-process)
- # [C# isolated process](#tab/csharp-isolated-process)
- ```bash
- func init --worker-runtime dotnet-isolated
- ```
+ ```bash
+ func init --worker-runtime dotnet-isolated
+ ```
2. Install `Microsoft.Azure.WebJobs.Extensions.WebPubSub`.
-
- # [JavaScript](#tab/javascript)
- Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support.
- ```json
- {
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.*, 4.0.0)"
- }
- }
- ```
-
- # [C# in-process](#tab/csharp-in-process)
- ```bash
- dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- ```bash
- dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease
- ```
+
+ # [JavaScript](#tab/javascript)
+
+ Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support.
+
+ ```json
+ {
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.*, 4.0.0)"
+ }
+ }
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+ ```bash
+ dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+ ```bash
+ dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease
+ ```
3. Create an `index` function to read and host a static web page for clients.
- ```bash
- func new -n index -t HttpTrigger
- ```
+
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
+   # [JavaScript](#tab/javascript)
+
+   - Update `index/function.json` and copy in the following JSON code.
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": ["get", "post"]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+ }
+ ```
- Update `index/index.js` and copy in the following code.
- ```js
- var fs = require('fs');
- var path = require('path');
-
- module.exports = function (context, req) {
-    var index = context.executionContext.functionDirectory + '/../index.html';
-    context.log("index.html path: " + index);
- fs.readFile(index, 'utf8', function (err, data) {
- if (err) {
- console.log(err);
- context.done(err);
- }
- context.res = {
- status: 200,
- headers: {
- 'Content-Type': 'text/html'
- },
- body: data
- };
- context.done();
- });
- }
- ```
+
+ ```js
+ var fs = require("fs");
+ var path = require("path");
+
+ module.exports = function (context, req) {
+      var index =
+        context.executionContext.functionDirectory + "/../index.html";
+      context.log("index.html path: " + index);
+ fs.readFile(index, "utf8", function (err, data) {
+ if (err) {
+ console.log(err);
+ context.done(err);
+ }
+ context.res = {
+ status: 200,
+ headers: {
+ "Content-Type": "text/html",
+ },
+ body: data,
+ };
+ context.done();
+ });
+ };
+ ```
# [C# in-process](#tab/csharp-in-process)
+
- Update `index.cs` and replace the `Run` function with the following code.
- ```c#
- [FunctionName("index")]
- public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log)
- {
-    var indexFile = Path.Combine(context.FunctionAppDirectory, "index.html");
-    log.LogInformation($"index.html path: {indexFile}.");
- return new ContentResult
- {
- Content = File.ReadAllText(indexFile),
- ContentType = "text/html",
- };
- }
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
+ ```c#
+ [FunctionName("index")]
+ public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log)
+ {
+      var indexFile = Path.Combine(context.FunctionAppDirectory, "index.html");
+      log.LogInformation($"index.html path: {indexFile}.");
+ return new ContentResult
+ {
+ Content = File.ReadAllText(indexFile),
+ ContentType = "text/html",
+ };
+ }
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+   - Update `index.cs` and replace the `Run` function with the following code.
- ```c#
- [Function("index")]
- public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context)
- {
-    var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../index.html");
-    _logger.LogInformation($"index.html path: {path}.");
-
- var response = req.CreateResponse();
- response.WriteString(File.ReadAllText(path));
- response.Headers.Add("Content-Type", "text/html");
- return response;
- }
- ```
+
+ ```c#
+ [Function("index")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context)
+ {
+      var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../index.html");
+      _logger.LogInformation($"index.html path: {path}.");
+
+ var response = req.CreateResponse();
+ response.WriteString(File.ReadAllText(path));
+ response.Headers.Add("Content-Type", "text/html");
+ return response;
+ }
+ ```
4. Create a `negotiate` function to help clients get the service connection URL with an access token.
- ```bash
- func new -n negotiate -t HttpTrigger
- ```
- > [!NOTE]
- > In this sample, we use [AAD](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. And this won't work in a local function. You can make it empty or change to other ways to get or generate `userId` when playing in local. For example, let client type a user name and pass it in query like `?user={$username}` when call `negotiate` function to get service connection url. And in the `negotiate` function, set `userId` with value `{query.user}`.
-
- # [JavaScript](#tab/javascript)
- - Update `negotiate/function.json` and copy following json codes.
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "webPubSubConnection",
- "name": "connection",
- "hub": "simplechat",
- "userId": "{headers.x-ms-client-principal-name}",
- "direction": "in"
- }
- ]
- }
- ```
- - Update `negotiate/index.js` and copy following codes.
- ```js
- module.exports = function (context, req, connection) {
- context.res = { body: connection };
- context.done();
- };
- ```
-
- # [C# in-process](#tab/csharp-in-process)
- - Update `negotiate.cs` and replace `Run` function with following codes.
- ```c#
- [FunctionName("negotiate")]
- public static WebPubSubConnection Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
- [WebPubSubConnection(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection,
- ILogger log)
- {
- log.LogInformation("Connecting...");
- return connection;
- }
- ```
- - Add `using` statements in header to resolve required dependencies.
- ```c#
- using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- - Update `negotiate.cs` and replace `Run` function with following codes.
- ```c#
- [Function("negotiate")]
- public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
- [WebPubSubConnectionInput(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connectionInfo)
- {
- var response = req.CreateResponse(HttpStatusCode.OK);
- response.WriteAsJsonAsync(connectionInfo);
- return response;
- }
- ```
+
+ ```bash
+ func new -n negotiate -t HttpTrigger
+ ```
+
+ > [!NOTE]
+   > In this sample, we use the [Microsoft Entra](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. This header isn't available when the function runs locally. When testing locally, you can leave it empty or get or generate `userId` another way: for example, let the client type a user name and pass it in the query string, like `?user={$username}`, when calling the `negotiate` function to get the service connection URL, and in the `negotiate` function set `userId` to `{query.user}` (see the sketch after this step).
+
+ # [JavaScript](#tab/javascript)
+
+   - Update `negotiate/function.json` and copy in the following JSON code.
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "webPubSubConnection",
+ "name": "connection",
+ "hub": "simplechat",
+ "userId": "{headers.x-ms-client-principal-name}",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+   - Update `negotiate/index.js` and copy in the following code.
+ ```js
+ module.exports = function (context, req, connection) {
+ context.res = { body: connection };
+ context.done();
+ };
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+   - Update `negotiate.cs` and replace the `Run` function with the following code.
+ ```c#
+ [FunctionName("negotiate")]
+ public static WebPubSubConnection Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
+ [WebPubSubConnection(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection,
+ ILogger log)
+ {
+ log.LogInformation("Connecting...");
+ return connection;
+ }
+ ```
+   - Add `using` statements at the top of the file to resolve the required dependencies.
+ ```c#
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+   - Update `negotiate.cs` and replace the `Run` function with the following code.
+ ```c#
+ [Function("negotiate")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
+ [WebPubSubConnectionInput(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connectionInfo)
+ {
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.WriteAsJsonAsync(connectionInfo);
+ return response;
+ }
+ ```
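For the local-testing workaround described in the note above, the client-side call can be sketched as follows. This is a minimal sketch, assuming the Functions host runs on the default port 7071, that you've changed the binding's `userId` to `{query.user}`, and that the `ws` npm package is available; none of this is part of the deployed sample.

```js
// Hypothetical local test of the negotiate endpoint (Node.js 18+ for global fetch)
const WebSocket = require("ws");

async function main() {
  // Ask the locally running negotiate function for a client connection URL
  const res = await fetch("http://localhost:7071/api/negotiate?user=testuser");
  const connection = await res.json();

  // Connect to Azure Web PubSub with the returned URL
  const ws = new WebSocket(connection.url);
  ws.on("open", () => ws.send("hello from local test"));
  ws.on("message", (data) => console.log(data.toString()));
}

main().catch(console.error);
```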
5. Create a `message` function to broadcast client messages through the service.
+
   ```bash
   func new -n message -t HttpTrigger
   ```
In this tutorial, you learn how to:
> This function is actually using `WebPubSubTrigger`. However, the `WebPubSubTrigger` isn't integrated into the function templates, so we use `HttpTrigger` to initialize the function template and change the trigger type in code.

   # [JavaScript](#tab/javascript)
+
   - Update `message/function.json` and copy in the following JSON code.
- ```json
- {
- "bindings": [
- {
- "type": "webPubSubTrigger",
- "direction": "in",
- "name": "data",
- "hub": "simplechat",
- "eventName": "message",
- "eventType": "user"
- },
- {
- "type": "webPubSub",
- "name": "actions",
- "hub": "simplechat",
- "direction": "out"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "type": "webPubSubTrigger",
+ "direction": "in",
+ "name": "data",
+ "hub": "simplechat",
+ "eventName": "message",
+ "eventType": "user"
+ },
+ {
+ "type": "webPubSub",
+ "name": "actions",
+ "hub": "simplechat",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
- Update `message/index.js` and copy in the following code.
- ```js
- module.exports = async function (context, data) {
- context.bindings.actions = {
- "actionName": "sendToAll",
- "data": `[${context.bindingData.request.connectionContext.userId}] ${data}`,
- "dataType": context.bindingData.dataType
- };
- // UserEventResponse directly return to caller
- var response = {
- "data": '[SYSTEM] ack.',
- "dataType" : "text"
- };
- return response;
- };
- ```
-
- # [C# in-process](#tab/csharp-in-process)
- - Update `message.cs` and replace `Run` function with following codes.
- ```c#
- [FunctionName("message")]
- public static async Task<UserEventResponse> Run(
- [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request,
- BinaryData data,
- WebPubSubDataType dataType,
- [WebPubSub(Hub = "simplechat")] IAsyncCollector<WebPubSubAction> actions)
- {
- await actions.AddAsync(WebPubSubAction.CreateSendToAllAction(
- BinaryData.FromString($"[{request.ConnectionContext.UserId}] {data.ToString()}"),
- dataType));
- return new UserEventResponse
- {
- Data = BinaryData.FromString("[SYSTEM] ack"),
- DataType = WebPubSubDataType.Text
- };
- }
- ```
- - Add `using` statements in header to resolve required dependencies.
- ```c#
- using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
- using Microsoft.Azure.WebPubSub.Common;
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- - Update `message.cs` and replace `Run` function with following codes.
- ```c#
- [Function("message")]
- [WebPubSubOutput(Hub = "simplechat")]
- public SendToAllAction Run(
- [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request)
- {
- return new SendToAllAction
- {
- Data = BinaryData.FromString($"[{request.ConnectionContext.UserId}] {request.Data.ToString()}"),
- DataType = request.DataType
- };
- }
- ```
+ ```js
+ module.exports = async function (context, data) {
+ context.bindings.actions = {
+ actionName: "sendToAll",
+ data: `[${context.bindingData.request.connectionContext.userId}] ${data}`,
+ dataType: context.bindingData.dataType,
+ };
+ // UserEventResponse directly return to caller
+ var response = {
+ data: "[SYSTEM] ack.",
+ dataType: "text",
+ };
+ return response;
+ };
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+   - Update `message.cs` and replace the `Run` function with the following code.
+ ```c#
+ [FunctionName("message")]
+ public static async Task<UserEventResponse> Run(
+ [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request,
+ BinaryData data,
+ WebPubSubDataType dataType,
+ [WebPubSub(Hub = "simplechat")] IAsyncCollector<WebPubSubAction> actions)
+ {
+ await actions.AddAsync(WebPubSubAction.CreateSendToAllAction(
+ BinaryData.FromString($"[{request.ConnectionContext.UserId}] {data.ToString()}"),
+ dataType));
+ return new UserEventResponse
+ {
+ Data = BinaryData.FromString("[SYSTEM] ack"),
+ DataType = WebPubSubDataType.Text
+ };
+ }
+ ```
+   - Add `using` statements at the top of the file to resolve the required dependencies.
+ ```c#
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ using Microsoft.Azure.WebPubSub.Common;
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+   - Update `message.cs` and replace the `Run` function with the following code.
+ ```c#
+ [Function("message")]
+ [WebPubSubOutput(Hub = "simplechat")]
+ public SendToAllAction Run(
+ [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request)
+ {
+ return new SendToAllAction
+ {
+ Data = BinaryData.FromString($"[{request.ConnectionContext.UserId}] {request.Data.ToString()}"),
+ DataType = request.DataType
+ };
+ }
+ ```
6. Add the client single page `index.html` in the project root folder and copy in the following content.
- ```html
- <html>
- <body>
- <h1>Azure Web PubSub Serverless Chat App</h1>
- <div id="login"></div>
- <p></p>
- <input id="message" placeholder="Type to chat...">
- <div id="messages"></div>
- <script>
- (async function () {
- let authenticated = window.location.href.includes('?authenticated=true');
- if (!authenticated) {
- // auth
- let login = document.querySelector("#login");
- let link = document.createElement('a');
- link.href = `${window.location.origin}/.auth/login/aad?post_login_redirect_url=/api/index?authenticated=true`;
- link.text = "login";
- login.appendChild(link);
- }
- else {
- // negotiate
- let messages = document.querySelector('#messages');
- let res = await fetch(`${window.location.origin}/api/negotiate`, {
- credentials: "include"
- });
- let url = await res.json();
- // connect
- let ws = new WebSocket(url.url);
- ws.onopen = () => console.log('connected');
- ws.onmessage = event => {
- let m = document.createElement('p');
- m.innerText = event.data;
- messages.appendChild(m);
- };
- let message = document.querySelector('#message');
- message.addEventListener('keypress', e => {
- if (e.charCode !== 13) return;
- ws.send(message.value);
- message.value = '';
- });
- }
- })();
- </script>
- </body>
- </html>
- ```
-
- # [JavaScript](#tab/javascript)
-
- # [C# in-process](#tab/csharp-in-process)
- Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
- ```xml
- <ItemGroup>
-    <None Update="index.html">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- </ItemGroup>
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
- ```xml
- <ItemGroup>
-    <None Update="index.html">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- </ItemGroup>
- ```
+
+ ```html
+ <html>
+ <body>
+ <h1>Azure Web PubSub Serverless Chat App</h1>
+ <div id="login"></div>
+ <p></p>
+ <input id="message" placeholder="Type to chat..." />
+ <div id="messages"></div>
+ <script>
+ (async function () {
+ let authenticated = window.location.href.includes(
+ "?authenticated=true"
+ );
+ if (!authenticated) {
+ // auth
+ let login = document.querySelector("#login");
+ let link = document.createElement("a");
+ link.href = `${window.location.origin}/.auth/login/aad?post_login_redirect_url=/api/index?authenticated=true`;
+ link.text = "login";
+ login.appendChild(link);
+ } else {
+ // negotiate
+ let messages = document.querySelector("#messages");
+ let res = await fetch(`${window.location.origin}/api/negotiate`, {
+ credentials: "include",
+ });
+ let url = await res.json();
+ // connect
+ let ws = new WebSocket(url.url);
+ ws.onopen = () => console.log("connected");
+ ws.onmessage = (event) => {
+ let m = document.createElement("p");
+ m.innerText = event.data;
+ messages.appendChild(m);
+ };
+ let message = document.querySelector("#message");
+ message.addEventListener("keypress", (e) => {
+ if (e.charCode !== 13) return;
+ ws.send(message.value);
+ message.value = "";
+ });
+ }
+ })();
+ </script>
+ </body>
+ </html>
+ ```
+
+ # [JavaScript](#tab/javascript)
+
+ # [C# in-process](#tab/csharp-in-process)
+
+   Since the C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+
+ ```xml
+ <ItemGroup>
+      <None Update="index.html">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+   Since the C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+
+ ```xml
+ <ItemGroup>
+      <None Update="index.html">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+ ```
## Create and Deploy the Azure Function App

Before you can deploy your function code to Azure, you need to create three resources:
-* A resource group, which is a logical container for related resources.
-* A storage account, which is used to maintain state and other information about your functions.
-* A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment and sharing of resources.
-Use the following commands to create these items.
+- A resource group, which is a logical container for related resources.
+- A storage account, which is used to maintain state and other information about your functions.
+- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment and sharing of resources.
+
+Use the following commands to create these items.
1. If you haven't done so already, sign in to Azure:
- ```azurecli
- az login
- ```
+ ```azurecli
+ az login
+ ```
1. Create a resource group, or skip this step by reusing the resource group of your Azure Web PubSub service:
- ```azurecli
- az group create -n WebPubSubFunction -l <REGION>
- ```
+ ```azurecli
+ az group create -n WebPubSubFunction -l <REGION>
+ ```
1. Create a general-purpose storage account in your resource group and region:
- ```azurecli
- az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction
- ```
+ ```azurecli
+ az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction
+ ```
1. Create the function app in Azure:
- # [JavaScript](#tab/javascript)
+ # [JavaScript](#tab/javascript)
+
+ ```azurecli
+   az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
- ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
- ```
- > [!NOTE]
- > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
+ > [!NOTE]
+ > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
- # [C# in-process](#tab/csharp-in-process)
+ # [C# in-process](#tab/csharp-in-process)
- ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
- ```
+ ```azurecli
+   az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
- # [C# isolated process](#tab/csharp-isolated-process)
+ # [C# isolated process](#tab/csharp-isolated-process)
- ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
- ```
+ ```azurecli
+   az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
1. Deploy the function project to Azure:
- After you have successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command.
+ After you have successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command.
+
+ ```bash
+   func azure functionapp publish <FUNCTIONAPP_NAME>
+ ```
- ```bash
- func azure functionapp publish <FUNCIONAPP_NAME>
- ```
1. Configure the `WebPubSubConnectionString` for the function app: first, find your Web PubSub resource in the **Azure portal** and copy the connection string under **Keys**. Then, navigate to the function app settings in the **Azure portal** -> **Settings** -> **Configuration**, and add a new item under **Application settings** with the name `WebPubSubConnectionString` and your Web PubSub resource connection string as the value.
Go to **Azure portal** -> Find your Function App resource -> **App keys** -> **S
Set `Event Handler` in Azure Web PubSub service. Go to **Azure portal** -> Find your Web PubSub resource -> **Settings**. Add a new hub setting mapping to the function in use. Replace `<FUNCTIONAPP_NAME>` and `<APP_KEY>` with yours.
- - Hub Name: `simplechat`
- - URL Template: **https://<FUNCTIONAPP_NAME>.azurewebsites.net/runtime/webhooks/webpubsub?code=<APP_KEY>**
- - User Event Pattern: *
- - System Events: -(No need to configure in this sample)
+- Hub Name: `simplechat`
+- URL Template: **https://<FUNCTIONAPP_NAME>.azurewebsites.net/runtime/webhooks/webpubsub?code=<APP_KEY>**
+- User Event Pattern: \*
+- System Events: (no need to configure in this sample)
:::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler.":::
Go to **Azure portal** -> Find your Function App resource -> **Authentication**.
Here we choose `Microsoft` as the identity provider, which uses `x-ms-client-principal-name` as `userId` in the `negotiate` function. You can also configure other identity providers by following the links, and don't forget to update the `userId` value in the `negotiate` function accordingly.
-* [Microsoft(Azure AD)](../app-service/configure-authentication-provider-aad.md)
-* [Facebook](../app-service/configure-authentication-provider-facebook.md)
-* [Google](../app-service/configure-authentication-provider-google.md)
-* [Twitter](../app-service/configure-authentication-provider-twitter.md)
+- [Microsoft(Azure AD)](../app-service/configure-authentication-provider-aad.md)
+- [Facebook](../app-service/configure-authentication-provider-facebook.md)
+- [Google](../app-service/configure-authentication-provider-google.md)
+- [Twitter](../app-service/configure-authentication-provider-twitter.md)
## Try the application Now you're able to test your page from your function app: `https://<FUNCTIONAPP_NAME>.azurewebsites.net/api/index`. See the snapshot.
+
1. Click `login` to authenticate yourself.
2. Type a message in the input box to chat.
If you're not going to continue to use this app, delete all resources created by
## Next steps
-In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application.
+In this quickstart, you learned how to run a serverless chat application. Now you can start building your own application.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Azure Web PubSub bindings for Azure Functions](./reference-functions-bindings.md)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Quick start: Create a simple chatroom with Azure Web PubSub](./tutorial-build-chat.md)
-> [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
azure-web-pubsub Reference Rest Api Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-data-plane.md
As illustrated by the above workflow graph, and also detailed workflow described
In each HTTP request, an authorization header with a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is required to authenticate with Azure Web PubSub Service.

<a name="signing"></a>
+
#### Signing Algorithm and Signature

`HS256`, namely HMAC-SHA256, is used as the signing algorithm.
You should use the `AccessKey` in Azure Web PubSub Service instance's connection
The following claims are required to be included in the JWT token.
-Claim Type | Is Required | Description
-||
-`aud` | true | Should be the **SAME** as your HTTP request url. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01`.
-`exp` | true | Epoch time when this token will be expired.
+| Claim Type | Is Required | Description |
+| - | -- | - |
+| `aud` | true | Should be the **SAME** as your HTTP request url. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01`. |
+| `exp` | true | Epoch time when this token will be expired. |
Pseudocode in JavaScript:
+
```js
const bearerToken = jwt.sign({}, connectionString.accessKey, {
- audience: request.url,
- expiresIn: "1h",
- algorithm: "HS256",
- });
+ audience: request.url,
+ expiresIn: "1h",
+ algorithm: "HS256",
+});
```
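Putting the pieces together, the following is a minimal end-to-end sketch, assuming the `jsonwebtoken` npm package, Node.js 18+ for the global `fetch`, and a placeholder endpoint and hub; the key point is that the `aud` claim must be exactly the URL being called.

```js
const jwt = require("jsonwebtoken");

async function broadcast(accessKey) {
  // Placeholder endpoint/hub; replace with your own resource values
  const url =
    "https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01";

  // The audience must be the SAME as the request URL
  const bearerToken = jwt.sign({}, accessKey, {
    audience: url,
    expiresIn: "1h",
    algorithm: "HS256",
  });

  // Broadcast a plain-text message to the hub
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${bearerToken}`,
      "Content-Type": "text/plain",
    },
    body: "Hello world!",
  });
  console.log(res.status); // expect 202 on success
}
```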
-### Authenticate via Azure Active Directory Token (Azure AD Token)
+### Authenticate via Microsoft Entra token
-Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
+Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
-**The difference is**, in this scenario, JWT Token is generated by Azure Active Directory.
+**The difference is** that, in this scenario, the JWT token is generated by Microsoft Entra ID.
-[Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
+[Learn how to generate Microsoft Entra tokens](../active-directory/develop/reference-v2-libraries.md)
The credential scope used should be `https://webpubsub.azure.com/.default`.
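For instance, here's a minimal sketch of acquiring such a token with the `@azure/identity` package (one option among the libraries linked above, not the only one):

```js
const { DefaultAzureCredential } = require("@azure/identity");

async function getWebPubSubToken() {
  const credential = new DefaultAzureCredential();
  // Request a Microsoft Entra token for the Web PubSub data plane scope
  const accessToken = await credential.getToken(
    "https://webpubsub.azure.com/.default"
  );
  // Send it as: Authorization: Bearer <token>
  return accessToken.token;
}
```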
You could also use **Role Based Access Control (RBAC)** to authorize the request
[Learn how to configure Role Based Access Control roles for your resource](./howto-authorize-from-application.md#add-role-assignments-on-azure-portal)
-## APIs
+## APIs
-| Operation Group | Description |
-|--|-|
-|[Service Status](/rest/api/webpubsub/dataplane/health-api)| Provides operations to check the service status |
-|[Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub)| Provides operations to manage the connections and send messages to them. |
+| Operation Group | Description |
+| -- | |
+| [Service Status](/rest/api/webpubsub/dataplane/health-api) | Provides operations to check the service status |
+| [Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub) | Provides operations to manage the connections and send messages to them. |
azure-web-pubsub Reference Server Sdk Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-csharp.md
-+ Last updated 11/11/2021
You can use this library in your app server side to manage the WebSocket client
Use this library to:

-- Send messages to hubs and groups.
+- Send messages to hubs and groups.
- Send messages to particular users and connections.
- Organize users and connections into groups.
- Close connections
You can also [enable console logging](https://github.com/Azure/azure-sdk-for-net
[azure_sub]: https://azure.microsoft.com/free/dotnet/
[samples_ref]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/webpubsub/Azure.Messaging.WebPubSub/tests/Samples/
-[awps_sample]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp
+[awps_sample]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp
azure-web-pubsub Reference Server Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-java.md
-+ Last updated 01/31/2023 +

# Azure Web PubSub service client library for Java

[Azure Web PubSub service](./index.yml) is an Azure-managed service that helps developers easily build web applications with real-time features and a publish-subscribe pattern. Any scenario that requires real-time publish-subscribe messaging between server and clients or among clients can use Azure Web PubSub service. Traditional real-time features that often require polling from the server or submitting HTTP requests can also use Azure Web PubSub service.
Use this library to:
For more information, see:

-- [Azure Web PubSub client library Java SDK][source_code]
-- [Azure Web PubSub client library reference documentation][api]
+- [Azure Web PubSub client library Java SDK][source_code]
+- [Azure Web PubSub client library reference documentation][api]
- [Azure Web PubSub client library samples for Java][samples_readme] - [Azure Web PubSub service documentation][product_documentation]
For more information, see:
### Include the Package
-[//]: # ({x-version-update-start;com.azure:azure-messaging-webpubsub;current})
+[//]: # "{x-version-update-start;com.azure:azure-messaging-webpubsub;current}"
```xml <dependency>
For more information, see:
</dependency> ```
-[//]: # ({x-version-update-end})
+[//]: # "{x-version-update-end}"
### Create a `WebPubSubServiceClient` using connection string

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L21-L24 -->
+
```java
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilder()
    .connectionString("{connection-string}")
```
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilde
### Create a `WebPubSubServiceClient` using access key

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L31-L35 -->
+
```java
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilder()
    .credential(new AzureKeyCredential("{access-key}"))
```
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilde
### Broadcast message to entire hub

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L47-L47 -->
+
```java
webPubSubServiceClient.sendToAll("Hello world!", WebPubSubContentType.TEXT_PLAIN);
```
webPubSubServiceClient.sendToAll("Hello world!", WebPubSubContentType.TEXT_PLAIN
### Broadcast message to a group

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L59-L59 -->
+
```java
webPubSubServiceClient.sendToGroup("java", "Hello Java!", WebPubSubContentType.TEXT_PLAIN);
```
webPubSubServiceClient.sendToGroup("java", "Hello Java!", WebPubSubContentType.T
### Send message to a connection

<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L71-L71 -->
+
```java
webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!", WebPubSubContentType.TEXT_PLAIN);
```
webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!", W
<a name="send-to-user"></a>

### Send message to a user
+
<!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L83-L83 -->
+
```java
webPubSubServiceClient.sendToUser("Andy", "Hello Andy!", WebPubSubContentType.TEXT_PLAIN);
```
the client library to use the Netty HTTP client. Configuring or changing the HTT
By default, all client libraries use the Tomcat-native Boring SSL library to enable native-level performance for SSL operations. The Boring SSL library is an uber jar containing native libraries for Linux / macOS / Windows, and provides
-better performance compared to the default SSL implementation within the JDK. For more information, including how to reduce the dependency size, see [performance tuning][https://github.com/Azure/azure-sdk-for-java/wiki/Performance-Tuning].
+better performance compared to the default SSL implementation within the JDK. For more information, including how to reduce the dependency size, see [performance tuning](https://github.com/Azure/azure-sdk-for-java/wiki/Performance-Tuning).
[!INCLUDE [next step](includes/include-next-step.md)]
better performance compared to the default SSL implementation within the JDK. Fo
[coc]: https://opensource.microsoft.com/codeofconduct/
[coc_faq]: https://opensource.microsoft.com/codeofconduct/faq/
[coc_contact]: mailto:opencode@microsoft.com
-[api]: /java/api/overview/azure/messaging-webpubsub-readme
+[api]: /java/api/overview/azure/messaging-webpubsub-readme
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
-+ Last updated 11/11/2021
npm install @azure/web-pubsub
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
```

You can also authenticate the `WebPubSubServiceClient` using an endpoint and an `AzureKeyCredential`:

```js
-const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub");
+const {
+ WebPubSubServiceClient,
+ AzureKeyCredential,
+} = require("@azure/web-pubsub");
const key = new AzureKeyCredential("<Key>");
-const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<Endpoint>",
+ key,
+ "<hubName>"
+);
```
-Or authenticate the `WebPubSubServiceClient` using [Azure Active Directory][aad_doc]
+Or authenticate the `WebPubSubServiceClient` using [Microsoft Entra ID][microsoft_entra_id_doc]
1. Install the `@azure/identity` dependency
npm install @azure/identity
1. Update the source code to use `DefaultAzureCredential`:

```js
-const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub");
+const { WebPubSubServiceClient } = require("@azure/web-pubsub");
+const { DefaultAzureCredential } = require("@azure/identity");
const key = new DefaultAzureCredential();
-const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<Endpoint>",
+ key,
+ "<hubName>"
+);
```

### Examples
const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>")
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
// Get the access token for the WebSocket client connection to use
let token = await serviceClient.getClientAccessToken();
token = await serviceClient.getClientAccessToken({ userId: "user1" });
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
// Send a JSON message
await serviceClient.sendToAll({ message: "Hello world!" });
await serviceClient.sendToAll(payload.buffer);
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
const groupClient = serviceClient.group("<groupName>");
await groupClient.sendToAll(payload.buffer);
```js
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
// Send a JSON message
await serviceClient.sendToUser("user1", { message: "Hello world!" });
// Send a plain text message
-await serviceClient.sendToUser("user1", "Hi there!", { contentType: "text/plain" });
+await serviceClient.sendToUser("user1", "Hi there!", {
+ contentType: "text/plain",
+});
// Send a binary message
const payload = new Uint8Array(10);
await serviceClient.sendToUser("user1", payload.buffer);
const { WebPubSubServiceClient } = require("@azure/web-pubsub"); const WebSocket = require("ws");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
const groupClient = serviceClient.group("<groupName>");
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
function onResponse(rawResponse: FullOperationResponse): void {
  console.log(rawResponse);
}
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
await serviceClient.sendToAll({ message: "Hello world!" }, { onResponse });
```
const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
const handler = new WebPubSubEventHandler("chat", {
  handleConnect: (req, res) => {
    // auth the connection and set the userId of the connection
    res.success({
- userId: "<userId>"
+ userId: "<userId>",
    });
  },
- allowedEndpoints: ["https://<yourAllowedService>.webpubsub.azure.com"]
+ allowedEndpoints: ["https://<yourAllowedService>.webpubsub.azure.com"],
});

const app = express();
const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
const { WebPubSubEventHandler } = require("@azure/web-pubsub-express");
const handler = new WebPubSubEventHandler("chat", { allowedEndpoints: [ "https://<yourAllowedService1>.webpubsub.azure.com",
- "https://<yourAllowedService2>.webpubsub.azure.com"
- ]
+ "https://<yourAllowedService2>.webpubsub.azure.com",
+ ],
});

const app = express();
const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
const express = require("express");
const { WebPubSubEventHandler } = require("@azure/web-pubsub-express"); const handler = new WebPubSubEventHandler("chat", {
- path: "/customPath1"
+ path: "/customPath1",
}); const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () => // Azure WebPubSub Upstream ready at http://localhost:3000/customPath1
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
const handler = new WebPubSubEventHandler("chat", {
    // You can also set the state here
    res.setState("calledTime", calledTime);
    res.success();
- }
+ },
});

const app = express();
const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
);
```
For more detailed instructions on how to enable logs, see [@azure/logger package
Use **Live Trace** from the Web PubSub service portal to view the live traffic.
-[aad_doc]: howto-authorize-from-application.md
+[microsoft_entra_id_doc]: howto-authorize-from-application.md
[azure_sub]: https://azure.microsoft.com/free/
[samples_ref]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/

## Next steps
azure-web-pubsub Reference Server Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-python.md
description: Learn about the Python server SDK for the Azure Web PubSub service.
-+ Last updated 05/23/2022
Or use the service endpoint and the access key:
>>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=AzureKeyCredential("<access_key>"))
```
-Or use [Azure Active Directory][aad_doc] (Azure AD):
+Or use [Microsoft Entra ID][microsoft_entra_id_doc]:
1. [pip][pip] install [`azure-identity`][azure_identity_pip].
-2. [Enable Azure AD authentication on your Webpubsub resource][aad_doc].
+2. [Enable Microsoft Entra authorization on your Webpubsub resource][microsoft_entra_id_doc].
3. Update code to use [DefaultAzureCredential][default_azure_credential].
- ```python
- >>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
- >>> from azure.identity import DefaultAzureCredential
- >>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=DefaultAzureCredential())
- ```
+ ```python
+ >>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
+ >>> from azure.identity import DefaultAzureCredential
+ >>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=DefaultAzureCredential())
+ ```
## Examples
When you submit a pull request, a CLA-bot automatically determines whether you n
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the [Code of Conduct][code_of_conduct] FAQ or contact the [Open Source Conduct Team](mailto:opencode@microsoft.com) with questions or comments.

<!-- LINKS -->
+
[webpubsubservice_docs]: ./index.yml
[azure_cli]: /cli/azure
[azure_sub]: https://azure.microsoft.com/free/
This project has adopted the Microsoft Open Source Code of Conduct. For more inf
[connection_string]: howto-websocket-connect.md#authorization
[azure_portal]: howto-develop-create-instance.md
[azure-key-credential]: https://aka.ms/azsdk-python-core-azurekeycredential
-[aad_doc]: howto-authorize-from-application.md
+[microsoft_entra_id_doc]: howto-authorize-from-application.md
[samples]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/webpubsub/azure-messaging-webpubsubservice/samples
azure-web-pubsub Samples Authenticate And Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/samples-authenticate-and-connect.md
Title: Azure Web PubSub samples - authenticate and connect
-description: A list of code samples showing how to authenticate and connect to Web PubSub resource(s)
+description: A list of code samples showing how to authenticate and connect to Web PubSub resource(s)
Last updated 05/15/2023
zone_pivot_groups: azure-web-pubsub-samples-authenticate-and-connect
+
# Azure Web PubSub samples - Authenticate and connect

To make use of your Azure Web PubSub resource, you need to authenticate and connect to the service first. Azure Web PubSub service distinguishes two roles and they're given a different set of capabilities.
-
+ ## Client
-The client can be a browser, a mobile app, an IoT device or even an EV charging point as long as it supports WebSocket. A client is limited to publishing and subscribing to messages.
+
+The client can be a browser, a mobile app, an IoT device or even an EV charging point as long as it supports WebSocket. A client is limited to publishing and subscribing to messages.
## Application server
-While the client's role is often limited, the application server's role goes beyond simply receiving and publishing messages. Before a client tries to connect with your Web PubSub resource, it goes to the application server for a Client Access Token first. The token is used to establish a persistent WebSocket connection with your Web PubSub resource.
+
+While the client's role is often limited, the application server's role goes beyond simply receiving and publishing messages. Before a client tries to connect with your Web PubSub resource, it goes to the application server for a Client Access Token first. The token is used to establish a persistent WebSocket connection with your Web PubSub resource.
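To make the token handshake concrete, here's a minimal sketch of the application-server side using the `@azure/web-pubsub` server SDK. The hub name `chat` and the `WEBPUBSUB_CONNECTION_STRING` environment variable are illustrative assumptions, not values from this article:

```javascript
// Minimal sketch: the application server mints a Client Access Token
// that the client then uses for its persistent WebSocket connection.
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

// Assumptions: connection string in WEBPUBSUB_CONNECTION_STRING, hub "chat".
const service = new WebPubSubServiceClient(
  process.env.WEBPUBSUB_CONNECTION_STRING,
  "chat"
);

async function issueClientUrl(userId) {
  // getClientAccessToken returns a token plus a ready-to-use client URL.
  const { url } = await service.getClientAccessToken({ userId });
  return url;
}

issueClientUrl("user-1").then((url) =>
  console.log(`Hand this URL to the client: ${url}`)
);
```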
::: zone pivot="method-sdk-csharp"
-| Use case | Description |
+| Use case | Description |
| | -- |
-| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Applies to application server only.
-| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/wwwroot/https://docsupdatetracker.net/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.
-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp-aad/Startup.cs#L26) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization.
-| [Anonymous connection](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/clientWithCert/client/Program.cs#L15) | Anonymous connection allows clients to connect with Azure Web PubSub directly without going to an application server for a Client Access Token first. This is useful for clients that have limited networking capabilities, like an EV charging point.
+| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Applies to application server only.
+| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/wwwroot/https://docsupdatetracker.net/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.
+| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp-aad/Startup.cs#L26) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization.
+| [Anonymous connection](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/clientWithCert/client/Program.cs#L15) | Anonymous connection allows clients to connect with Azure Web PubSub directly without going to an application server for a Client Access Token first. This is useful for clients that have limited networking capabilities, like an EV charging point.
::: zone-end ::: zone pivot="method-sdk-javascript"
-| Use case | Description |
+| Use case | Description |
| | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/server.js#L9) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/src/index.js#L5) | Applies to client only. Client Access Token is generated on the application server.
-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp-aad/server.js#L24) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization.
+| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp-aad/server.js#L24) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization.
::: zone-end ::: zone pivot="method-sdk-java"
-| Use case | Description |
+| Use case | Description |
| | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/java/com/webpubsub/tutorial/App.java#L21) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/resources/public/https://docsupdatetracker.net/index.html#L12) | Applies to client only. Client Access Token is generated on the application server.
-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp-aad/src/main/java/com/webpubsub/tutorial/App.java#L22) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization.
+| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp-aad/src/main/java/com/webpubsub/tutorial/App.java#L22) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization.
::: zone-end ::: zone pivot="method-sdk-python"
-| Use case | Description |
+| Use case | Description |
| | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/server.py#L19) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/public/https://docsupdatetracker.net/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.
-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp-aad/server.py#L21) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization.
+| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp-aad/server.py#L21) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization.
::: zone-end
azure-web-pubsub Socketio Migrate From Self Hosted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-migrate-from-self-hosted.md
# How to migrate a self-hosted Socket.IO app to be fully managed on Azure
->[!NOTE]
-> Web PubSub for Socket.IO is in "Private Preview" and is available to selected customers only. To register your interest, please write to us awps@microsoft.com.
## Prerequisites > [!div class="checklist"]
Locate `index.js` in the server-side code.
```javascript
const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
```
-
-3. Add configuration so that the server can connect with your Web PubSub for Socket.IO resource.
+
+3. Locate in your server-side code where Socket.IO server is created and wrap it with `useAzureSocketIO()`:
```javascript
- const wpsOptions = {
+ const io = require("socket.io")();
+ useAzureSocketIO(io, {
hub: "eio_hub", // The hub name can be any valid string. connectionString: process.argv[2]
- };
- ```
-
-4. Locate in your server-side code where Socket.IO server is created and append `.useAzureSocketIO(wpsOptions)`:
- ```javascript
- const io = require("socket.io")();
- useAzureSocketIO(io, wpsOptions);
+ });
```
->[!IMPORTANT]
-> `useAzureSocketIO` is an asynchronous method. Here we `await`. So you need to wrap it and related code in an asynchronous function.
+ >[!IMPORTANT]
+ > `useAzureSocketIO` is an asynchronous method that performs initialization steps to connect to Web PubSub. You can `await useAzureSocketIO(...)` or use `useAzureSocketIO(...).then(...)` to make sure your app server starts serving requests only after the initialization succeeds, as shown in the sketch after this list.
-5. If you use the following server APIs, add `async` before using them as they're asynchronous with Web PubSub for Socket.IO.
+4. If you use the following server APIs, add `async` before using them as they're asynchronous with Web PubSub for Socket.IO.
- [server.socketsJoin](https://socket.io/docs/v4/server-api/#serversocketsjoinrooms) - [server.socketsLeave](https://socket.io/docs/v4/server-api/#serversocketsleaverooms) - [socket.join](https://socket.io/docs/v4/server-api/#socketjoinroom)
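Putting steps 2 through 4 together, here's a minimal sketch of the wrapped, awaited initialization. It reuses the hub name and connection-string argument shown above; the `connection` handler and room name are illustrative assumptions:

```javascript
// Minimal sketch of the migration steps above.
const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io");
const io = require("socket.io")();

async function main() {
  // useAzureSocketIO is asynchronous: await it so the server only starts
  // serving traffic after initialization against Web PubSub succeeds.
  await useAzureSocketIO(io, {
    hub: "eio_hub", // The hub name can be any valid string.
    connectionString: process.argv[2],
  });

  io.on("connection", async (socket) => {
    // Server APIs such as socket.join are asynchronous with
    // Web PubSub for Socket.IO, so await them.
    await socket.join("lobby");
    io.to("lobby").emit("joined", socket.id);
  });
}

main().catch(console.error);
```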
backup Azure Kubernetes Service Cluster Backup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md
Title: Azure Kubernetes Service (AKS) backup using Azure Backup prerequisites
description: This article explains the prerequisites for Azure Kubernetes Service (AKS) backup. Previously updated : 07/27/2023 Last updated : 08/17/2023
Azure Backup now allows you to back up AKS clusters (cluster resources and persi
- Extension agent and extension operator are the core platform components in AKS, which are installed when an extension of any type is installed for the first time in an AKS cluster. These provide capabilities to deploy *1P* and *3P* extensions. The backup extension also relies on these for installation and upgrades. -- Both of these core components are deployed with aggressive hard limits on CPU and memory, with CPU *less than 0.5% of a core* and memory limit ranging from *50-200 MB*. So, the *COGS impact* of these components is very low. Because they are core platform components, there is no workaround available to remove them once installed in the cluster.---
+ >[!Note]
+ >Both of these core components are deployed with aggressive hard limits on CPU and memory, with CPU *less than 0.5% of a core* and memory limit ranging from *50-200 MB*. So, the *COGS impact* of these components is very low. Because they are core platform components, there is no workaround available to remove them once installed in the cluster.
Learn [how to manage the operation to install Backup Extension using Azure CLI](azure-kubernetes-service-cluster-manage-backups.md#backup-extension-related-operations).
backup Azure Kubernetes Service Cluster Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md
Title: Azure Kubernetes Service (AKS) backup support matrix description: This article provides a summary of support settings and limitations of Azure Kubernetes Service (AKS) backup. Previously updated : 03/27/2023 Last updated : 08/17/2023
AKS backup is available in all the Azure public cloud regions: East US, North Eu
## Limitations -- AKS backup supports AKS clusters with Kubernetes version 1.21.1 or later. This version has Container Storage Interface (CSI) drivers installed.
+- AKS backup supports AKS clusters with Kubernetes version *1.22* or later. This version has Container Storage Interface (CSI) drivers installed.
-- A CSI driver supports performing backup and restore operations for persistent volumes.
+- Before you install the backup extension in an AKS cluster, ensure that the CSI drivers and snapshot controller are enabled for your cluster. If they're disabled, [enable these settings](../aks/csi-storage-drivers.md#enable-csi-storage-drivers-on-an-existing-cluster).
-- Currently, an AKS backup supports only the backup of Azure disk-based persistent volumes (enabled by the CSI driver). If you're using Azure Files shares and Azure Blob Storage persistent volumes in your AKS clusters, you can configure backups for them via the Azure Backup solutions. For more information, see [About Azure file share backup](azure-file-share-backup-overview.md) and [Overview of Azure Blob Storage backup](blob-backup-overview.md).
+- AKS backups don't support in-tree volumes. You can back up only CSI driver-based volumes. You can [migrate from in-tree volumes to CSI driver-based persistent volumes](../aks/csi-migrate-in-tree-volumes.md).
-- AKS backups don't support tree volumes. You can back up only CSI driver-based volumes. You can [migrate from tree volumes to CSI driver-based persistent volumes](../aks/csi-migrate-in-tree-volumes.md).
+- Currently, AKS backup supports only the backup of Azure disk-based persistent volumes (enabled by the CSI driver). These persistent volumes must also be dynamically provisioned, as static volumes aren't supported.
-- Before you install the backup extension in an AKS cluster, ensure that the CSI drivers and snapshot are enabled for your cluster. If they're disabled, [enable these settings](../aks/csi-storage-drivers.md#enable-csi-storage-drivers-on-an-existing-cluster).
+- AKS backup currently doesn't support Azure Files shares and Azure Blob Storage persistent volumes because they lack CSI driver-based snapshotting capability. If you're using these persistent volumes in your AKS clusters, you can configure backups for them via the Azure Backup solutions. For more information, see [Azure file share backup](azure-file-share-backup-overview.md) and [Azure Blob Storage backup](blob-backup-overview.md).
+
+- Any unsupported persistent volume type is skipped while a backup is being created for the AKS cluster.
-- The backup extension uses the AKS cluster's managed system identity to perform backup operations. So, an AKS backup doesn't support AKS clusters that use a service principal. You can [update your AKS cluster to use a managed system identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).
+- The backup extension uses the AKS cluster's system identity to perform backup operations. Currently, AKS clusters that use a user identity or a service principal aren't supported. If your AKS cluster uses a service principal, you can [update your AKS cluster to use a system identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).
- You must install the backup extension in the AKS cluster. If you're using Azure CLI to install the backup extension, ensure that the version is 2.41 or later. Use `az upgrade` command to upgrade the Azure CLI.
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 07/13/2023 Last updated : 08/21/2023
Azure Backup provides several ways to restore a VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk. The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> When you choose a Vault-Standard recovery point, a VHD file with the content of the chosen recovery point is also created in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription Restore (preview)** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [unmanaged VMs](#restoring-unmanaged-vms-and-disks-as-managed), [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
-**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> Allowed only if the [Cross Subscription Restore property](backup-azure-arm-restore-vms.md#cross-subscription-restore-preview) is enabled for your Recovery Services vault. <br><br> Works with [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore) and [Cross Zonal Restore](backup-azure-arm-restore-vms.md#create-a-vm). <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots tier](backup-azure-vms-introduction.md#snapshot-creation) recovery points. <br><br> It's unsupported for [unmanaged VMs](#restoring-unmanaged-vms-and-disks-as-managed) and [ADE encrypted VMs](backup-azure-vms-encryption.md#encryption-support-using-ade).
+**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. When you select a zone to restore to, select the [logical zone](../reliability/availability-zones-overview.md#availability-zones) (not the physical zone) that corresponds to the Azure subscription you'll use for the restore. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshot](backup-azure-vms-introduction.md#snapshot-creation) restore points. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
>[!Tip] >To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). This helps you to monitor such failures and take necessary actions to remediate the issues.
backup Backup Client Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-client-automation.md
Title: Use PowerShell to back up Windows Server to Azure
description: In this article, learn how to use PowerShell to set up Azure Backup on Windows Server or a Windows client, and manage backup and recovery. Last updated 08/24/2021 -+
Invoke-Command -Session $Session -Script { param($D, $A) Start-Process -FilePath
For more information about Azure Backup for Windows Server/Client: * [Introduction to Azure Backup](./backup-overview.md)
-* [Back up Windows Servers](backup-windows-with-mars-agent.md)
+* [Back up Windows Servers](backup-windows-with-mars-agent.md)
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 07/05/2023 Last updated : 08/21/2023
Back up Linux Azure VMs with the Linux Azure VM agent | Supported for file-consi
Back up Linux Azure VMs with the MARS agent | Not supported.<br/><br/> The MARS agent can be installed only on Windows machines. Back up Linux Azure VMs with DPM or MABS | Not supported. Back up Linux Azure VMs with Docker mount points | Currently, Azure Backup doesn't support exclusion of Docker mount points because these are mounted at different paths every time.
+Back up Linux Azure VMs with ZFS pool configuration | Not supported.
## Operating system support (Linux)
Recovery points on DPM or MABS disk | 64 for file servers, and 448 for app serve
**Restore disk** | This option restores a VM disk, which can you can then use to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings and create a VM.<br/><br/> The disks are copied to the resource group that you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM by using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured via the template or PowerShell. **Replace existing** | You can restore a disk and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it has been deleted, you can't use this option.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and it stores the snapshot in the staging location that you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>This option is supported for unencrypted managed VMs and for VMs [created from custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or fewer disks than the current VM, the number of disks in the restore point will only reflect the VM configuration.<br><br> This option is also supported for VMs with linked resources, like [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) and [Azure Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | You can use cross-region restore to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Backup admins and app admins have permissions to perform the restore operation on a secondary region.
-**Cross Subscription (preview)** | You can use cross-subscription restore to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (within the same tenant as the source subscription) from restore points. This is one of the Azure role-based access control (RBAC) capabilities. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-subscription restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed), [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups), and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup).
-**Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones.<br><br> You can restore Azure VMs or disks to different zones (one of the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) of restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Subscription** | Allowed only if the [Cross Subscription Restore property](backup-azure-arm-restore-vms.md#cross-subscription-restore-preview) is enabled for your Recovery Services vault. <br><br> You can use cross-subscription restore to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (within the same tenant as the source subscription) from restore points. This is one of the Azure role-based access control (Azure RBAC) capabilities. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Subscription Restore is unsupported for [snapshot](backup-azure-vms-introduction.md#snapshot-creation) tier recovery points. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed) and [VMs with disks encrypted using Azure Disk Encryption (ADE)](backup-azure-vms-encryption.md#encryption-support-using-ade).
+**Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones. You can restore Azure VMs or disks to different zones (one of the Azure RBAC capabilities) from restore points. When you select a zone to restore to, select the [logical zone](../reliability/availability-zones-overview.md#availability-zones) (not the physical zone) that corresponds to the Azure subscription you'll use for the restore. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshot](backup-azure-vms-introduction.md#snapshot-creation) restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
## Support for file-level restore
The following table summarizes support for backup during VM management tasks, su
**Restore** | **Supported** |
-<a name="backup-azure-cross-subscription-restore">Restore across a subscription</a> | [Cross-subscription restore (preview)](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
+<a name="backup-azure-cross-subscription-restore">Restore across a subscription</a> | [Cross-subscription restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
[Restore across a region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported. <a name="backup-azure-cross-zonal-restore">Restore across a zone</a> | [Cross-zonal restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs. Restore to an existing VM | Use the replace disk option.
backup Disk Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-overview.md
Title: Overview of Azure Disk Backup description: Learn about the Azure Disk backup solution. Previously updated : 07/21/2023 Last updated : 08/17/2023
The retention period for a backup also follows the maximum limit of 450 snapshot
For example, if the scheduling frequency for backups is set as Daily, then you can set the retention period for backups at a maximum value of 450 days. Similarly, if the scheduling frequency for backups is set as Hourly with a 1-hour frequency, then you can set the retention for backups at a maximum value of 18 days.
+## Why do I see more snapshots than my retention policy?
+
+If the retention policy is set to *1*, you can find two snapshots. This configuration ensures that at least one latest recovery point is always present in the vault even if all subsequent backups fail; that guarantee is why two snapshots are present.
+
+So, if the policy is set to *n* snapshots, you can at times find *n+1* snapshots. You can even find more than *n+1* snapshots if there's a delay in deleting recovery points whose retention period is over (garbage collection). This can happen in rare cases when:
+
+- You clean up snapshots that are past their retention period.
+- The garbage collector (GC) in the backend is under heavy load.
+ ## Pricing Azure Backup uses [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md) of the managed disk. Incremental snapshots are charged per GiB of the storage occupied by the delta changes since the last snapshot. For example, if you're using a managed disk with a provisioned size of 128 GiB, with 100 GiB used, the first incremental snapshot is billed only for the used size of 100 GiB. 20 GiB of data is added on the disk before you create the second snapshot. Now, the second incremental snapshot is billed for only 20 GiB.
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
The following table describes the network topologies supported by each network f
|Connectivity over Active/Passive VPN gateways| Yes | |Connectivity over Active/Active VPN gateways| No | |Connectivity over Active/Active Zone Redundant gateways| No |
-|Transit connectivity via vWAN for Spoke Delegated VNETS| No |
+|Transit connectivity via vWAN for Spoke Delegated VNETS| Yes |
|On-premises connectivity to Delegated subnet via vWAN attached SD-WAN| No| |On-premises connectivity via Secured HUB(Az Firewall NVA) | No| |Connectivity from UVMs on NC2 nodes to Azure resources|Yes|
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 08/08/2023 Last updated : 08/16/2023 # Azure Bastion FAQ
No. You don't need to install an agent or any software on your browser or your A
See [About VM connections and features](vm-about.md) for supported features.
+### <a name="shareable-links-passwords"></a>Is Reset Password available for local users connecting via shareable link?
+
+No. Some organizations have company policies that require a password reset when a user logs into a local account for the first time. When using shareable links, the user can't change the password, even though a "Reset Password" button may appear.
+ ### <a name="audio"></a>Is remote audio available for VMs? Yes. See [About VM connections and features](vm-about.md#audio).
This may be due to the Private DNS zone for privatelink.azure.com linked to the
## Next steps
-For more information, see [What is Azure Bastion](bastion-overview.md).
+For more information, see [What is Azure Bastion](bastion-overview.md).
cdn Cdn Create A Storage Account With Cdn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-a-storage-account-with-cdn.md
Last updated 04/29/2022-+ # Quickstart: Integrate an Azure Storage account with Azure CDN
To create a storage account, you must be either the service administrator or a c
## Enable Azure CDN for the storage account
-1. On the page for your storage account, select **Blob service** > **Azure CDN** from the left menu. The **Azure CDN** page appears.
+1. On the page for your storage account, select **Security + Networking** > **Front Door and CDN** from the left menu. The **Front Door and CDN** page appears.
- :::image type="content" source="./media/cdn-create-a-storage-account-with-cdn/cdn-storage-endpoint-configuration.png" alt-text="Screenshot of create a CDN endpoint.":::
+ :::image type="content" source="./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-endpoint-configuration.png" alt-text="Screenshot of create a CDN endpoint." lightbox="./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-endpoint-configuration.png":::
1. In the **New endpoint** section, enter the following information: | Setting | Value | | -- | -- |
- | **CDN profile** | Select **Create new** and enter your profile name, for example, *cdn-profile-123*. A profile is a collection of endpoints. |
- | **Pricing tier** | Select one of the **Standard** options, such as **Microsoft CDN (classic)**. |
- | **CDN endpoint name** | Enter your endpoint hostname, such as *cdn-endpoint-123*. This name must be globally unique across Azure because it's to access your cached resources at the URL _&lt;endpoint-name&gt;_.azureedge.net. |
- | **Origin hostname** | By default, a new CDN endpoint uses the hostname of your storage account as the origin server. |
-
+ | **Service type** | **Azure CDN** |
+ | **Create new/use existing profile** | **Create new** |
+ | **Profile name** | Enter your profile name, for example, *cdn-profile-123*. A profile is a collection of endpoints. |
+ | **CDN endpoint name** | Enter your endpoint hostname, such as *cdn-endpoint-123*. This name must be globally unique across Azure because it's used to access your cached resources at the URL _&lt;endpoint-name&gt;_.azureedge.net. |
+ | **Origin hostname** | By default, a new CDN endpoint uses the hostname of your storage account as the origin server. |
+ | **Pricing tier** | Select one of the options, such as **Microsoft CDN (classic)**. |
+
1. Select **Create**. After the endpoint is created, it appears in the endpoint list.
- ![Storage new CDN endpoint](./media/cdn-create-a-storage-account-with-cdn/cdn-storage-new-endpoint-list.png)
+ [ ![Screenshot of a storage new CDN endpoint.](./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-new-endpoint-list.png) ](./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-new-endpoint-list.png#lightbox)
> [!TIP] > If you want to specify advanced configuration settings for your CDN endpoint, such as [large file download optimization](cdn-optimization-overview.md#large-file-download), you can instead use the [Azure CDN extension](cdn-create-new-endpoint.md) to create a CDN profile and endpoint.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 8/9/2023 Last updated : 8/21/2023
The following tables show the Microsoft Security Response Center (MSRC) updates
## August 2023 Guest OS
->[!NOTE]
-
->The August Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the August Guest OS. This list is subject to change.
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 23-08 | [5029247] | Latest Cumulative Update(LCU) | 6.61 | Aug 8, 2023 |
-| Rel 23-08 | [5029250] | Latest Cumulative Update(LCU) | 7.29 | Aug 8, 2023 |
-| Rel 23-08 | [5029242] | Latest Cumulative Update(LCU) | 5.85 | Aug 8, 2023 |
-| Rel 23-08 | [5028969] | .NET Framework 3.5 Security and Quality Rollup | 2.141 | Aug 8, 2023 |
-| Rel 23-08 | [5028963] | .NET Framework 4.7.2 Security and Quality Rollup | 2.141 | Aug 8, 2023 |
-| Rel 23-08 | [5028970] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.121 | Aug 8, 2023 |
-| Rel 23-08 | [5028962] | .NET Framework 4.7.2 Cumulative Update LKG | 4.121 | Aug 8, 2023 |
-| Rel 23-08 | [5028967] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.129 | Aug 8, 2023 |
-| Rel 23-08 | [5028961] | .NET Framework 4.7.2 Cumulative Update LKG | 3.129 | Aug 8, 2023 |
-| Rel 23-08 | [5028960] | .NET Framework DotNet | 6.61 | Aug 8, 2023 |
-| Rel 23-08 | [5028956] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.29 | Aug 8, 2023 |
-| Rel 23-08 | [5029296] | Monthly Rollup | 2.141 | Aug 8, 2023 |
-| Rel 23-08 | [5029295] | Monthly Rollup | 3.129 | Aug 8, 2023 |
-| Rel 23-08 | [5029312] | Monthly Rollup | 4.121 | Aug 8, 2023 |
-| Rel 23-08 | [5029369] | Servicing Stack Update | 3.129 | Aug 8, 2023 |
-| Rel 23-08 | [5029368] | Servicing Stack Update LKG | 4.121 | Aug 8, 2023 |
-| Rel 23-08 | [4578013] | OOB Standalone Security Update | 4.121 | Aug 19, 2020 |
-| Rel 23-08 | [5023788] | Servicing Stack Update LKG | 5.85 | Mar 14, 2023 |
-| Rel 23-08 | [5028264] | Servicing Stack Update LKG | 2.141 | Jul 11, 2023 |
-| Rel 23-08 | [4494175] | Microcode | 5.85 | Sep 1, 2020 |
-| Rel 23-08 | [4494174] | Microcode | 6.61 | Sep 1, 2020 |
-| Rel 23-08 | 5029395 | Servicing Stack Update | 7.29 | |
-| Rel 23-08 | 5028316 | Servicing Stack Update | 6.61 | |
+| Rel 23-08 | [5029247] | Latest Cumulative Update(LCU) | [6.61] | Aug 8, 2023 |
+| Rel 23-08 | [5029250] | Latest Cumulative Update(LCU) | [7.30] | Aug 8, 2023 |
+| Rel 23-08 | [5029242] | Latest Cumulative Update(LCU) | [5.85] | Aug 8, 2023 |
+| Rel 23-08 | [5028969] | .NET Framework 3.5 Security and Quality Rollup | [2.141] | Aug 8, 2023 |
+| Rel 23-08 | [5028963] | .NET Framework 4.7.2 Security and Quality Rollup | [2.141] | Aug 8, 2023 |
+| Rel 23-08 | [5028970] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.121] | Aug 8, 2023 |
+| Rel 23-08 | [5028962] | .NET Framework 4.7.2 Cumulative Update LKG | [4.121] | Aug 8, 2023 |
+| Rel 23-08 | [5028967] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.129] | Aug 8, 2023 |
+| Rel 23-08 | [5028961] | .NET Framework 4.7.2 Cumulative Update LKG | [3.129] | Aug 8, 2023 |
+| Rel 23-08 | [5028960] | .NET Framework DotNet | [6.61] | Aug 8, 2023 |
+| Rel 23-08 | [5028956] | .NET Framework 4.8 Security and Quality Rollup LKG | [7.30] | Aug 8, 2023 |
+| Rel 23-08 | [5029296] | Monthly Rollup | [2.141] | Aug 8, 2023 |
+| Rel 23-08 | [5029295] | Monthly Rollup | [3.129] | Aug 8, 2023 |
+| Rel 23-08 | [5029312] | Monthly Rollup | [4.121] | Aug 8, 2023 |
+| Rel 23-08 | [5029369] | Servicing Stack Update | [3.129] | Aug 8, 2023 |
+| Rel 23-08 | [5029368] | Servicing Stack Update LKG | [4.121] | Aug 8, 2023 |
+| Rel 23-08 | [4578013] | OOB Standalone Security Update | [4.121] | Aug 19, 2020 |
+| Rel 23-08 | [5023788] | Servicing Stack Update LKG | [5.85] | Mar 14, 2023 |
+| Rel 23-08 | [5028264] | Servicing Stack Update LKG | [2.141] | Jul 11, 2023 |
+| Rel 23-08 | [4494175] | Microcode | [5.85] | Sep 1, 2020 |
+| Rel 23-08 | [4494174] | Microcode | [6.61] | Sep 1, 2020 |
+| Rel 23-08 | 5029395 | Servicing Stack Update | [7.30] | |
+| Rel 23-08 | 5028316 | Servicing Stack Update | [6.61] | |
[5029247]: https://support.microsoft.com/kb/5029247 [5029250]: https://support.microsoft.com/kb/5029250
The following tables show the Microsoft Security Response Center (MSRC) updates
[4494174]: https://support.microsoft.com/kb/4494174 [5029395]: https://support.microsoft.com/kb/5029395 [5028316]: https://support.microsoft.com/kb/5028316
+[2.141]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.129]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.121]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.85]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.61]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.30]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## July 2023 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 07/27/2023 Last updated : 08/21/2023
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **August 21, 2023**
+The August Guest OS has released.
+ ###### **July 27, 2023** The July Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
-| WA-GUEST-OS-7.28_202307-01 | July 27, 2023 | Post 7.30 |
-| WA-GUEST-OS-7.27_202306-02 | July 8, 2023 | Post 7.29 |
+| WA-GUEST-OS-7.30_202308-01 | August 21, 2023 | Post 7.32 |
+| WA-GUEST-OS-7.28_202307-01 | July 27, 2023 | Post 7.31 |
+|~~WA-GUEST-OS-7.27_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-7.25_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-7.24_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-7.23_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.61_202308-01 | August 21, 2023 | Post 6.63 |
| WA-GUEST-OS-6.60_202307-01 | July 27, 2023 | Post 6.62 |
-| WA-GUEST-OS-6.59_202306-02 | July 8, 2023 | Post 6.61 |
+|~~WA-GUEST-OS-6.59_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-6.57_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-6.56_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-6.55_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.85_202308-01 | August 21, 2023 | Post 5.87 |
| WA-GUEST-OS-5.84_202307-01 | July 27, 2023 | Post 5.86 |
-| WA-GUEST-OS-5.83_202306-02 | July 8, 2023 | Post 5.85 |
+|~~WA-GUEST-OS-5.83_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-5.81_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-5.80_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-5.79_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.121_202308-01 | August 21, 2023 | Post 4.123 |
| WA-GUEST-OS-4.120_202307-01 | July 27, 2023 | Post 4.122 |
-| WA-GUEST-OS-4.119_202306-02 | July 8, 2023 | Post 4.121 |
+|~~WA-GUEST-OS-4.119_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-4.117_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-4.116_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-4.115_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.129_202308-01 | August 21, 2023 | Post 3.131 |
| WA-GUEST-OS-3.128_202307-01 | July 27, 2023 | Post 3.130 |
-| WA-GUEST-OS-3.127_202306-02 | July 8, 2023 | Post 3.129 |
+|~~WA-GUEST-OS-3.127_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-3.125_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-3.124_202304-02~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-3.122_202303-01~~| March 28, 2023 | May 19, 2023 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.141_202308-01 | August 21, 2023 | Post 2.143 |
| WA-GUEST-OS-2.140_202307-01 | July 27, 2023 | Post 2.142 |
-| WA-GUEST-OS-2.139_202306-02 | July 8, 2023 | Post 2.141 |
+|~~WA-GUEST-OS-2.139_202306-02~~| July 8, 2023 | August 21, 2023 |
|~~WA-GUEST-OS-2.137_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-2.136_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-2.135_202303-01~~| March 28, 2023 | May 19, 2023 |
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
ms.contributor: jahelmic
Last updated 05/03/2023 tags: azure-resource-manager+ Title: Azure Cloud Shell troubleshooting # Troubleshooting & Limitations of Azure Cloud Shell
communication-services Incoming Call Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md
# Incoming call concepts
-Azure Communication Services Call Automation provides developers the ability to build applications, which can make and receive calls. Azure Communication Services relies on Event Grid subscriptions to deliver each `IncomingCall` event, so setting up your environment to receive these notifications is critical to your application being able to redirect or answer a call.
+Azure Communication Services Call Automation enables developers to create applications that can make and receive calls. It relies on Event Grid subscriptions to deliver each `IncomingCall` event, so configuring your environment to receive these notifications is critical for your application to redirect or answer a call.
## Calling scenarios
-First, we need to define which scenarios can trigger an `IncomingCall` event. The primary concept to remember is that a call to an Azure Communication Services identity or Public Switched Telephone Network (PSTN) number triggers an `IncomingCall` event. The following are examples of these resources:
+Before setting up your environment, it's important to understand the scenarios that can trigger an `IncomingCall` event. To trigger an `IncomingCall` event, a call must be made to either an Azure Communication Services identity or a Public Switched Telephone Network (PSTN) number associated with your Azure Communication Services resource. The following are examples of these resources:
1. An Azure Communication Services identity 2. A PSTN phone number owned by your Azure Communication Services resource
Given these examples, the following scenarios trigger an `IncomingCall` event se
| Public PSTN | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer > [!NOTE]
-> An important concept to remember is that an Azure Communication Services identity can be a user or application. Although there is no ability to explicitly assign an identity to a user or application in the platform, this can be done by your own application or supporting infrastructure. Please review the [identity concepts guide](../identity-model.md) for more information on this topic.
+> It's important to understand that an Azure Communication Services identity can represent either a user or an application. While the platform does not have a built-in feature to explicitly assign an identity to a user or application, your application or supporting infrastructure can accomplish this. To learn more about this topic, refer to the [identity concepts guide](../identity-model.md).
## Register an Event Grid resource provider
If you haven't previously used Event Grid in your Azure subscription, you might
## Receiving an incoming call notification from Event Grid
-Since Azure Communication Services relies on Event Grid to deliver the `IncomingCall` notification through a subscription, how you choose to handle the notification is up to you. Additionally, since the Call Automation API relies specifically on Webhook callbacks for events, a common Event Grid subscription used would be a 'Webhook'. However, you could choose any one of the available subscription types offered by the service.
+In Azure Communication Services, receiving an `IncomingCall` notification is made possible through an Event Grid subscription. As the receiver of the notification, you have the flexibility to choose how to handle it. Since the Call Automation API leverages Webhook callbacks for events, it's common to use a 'Webhook' Event Grid subscription. However, the service offers various subscription types, and you can choose the one that best suits your needs.
This architecture has the following benefits:
This architecture has the following benefits:
- PSTN number assignment and routing logic can exist in your application versus being statically configured online. - As identified in the [calling scenarios](#calling-scenarios) section, your application can be notified even when users make calls between each other. You can then combine this scenario together with the [Call Recording APIs](../voice-video-calling/call-recording.md) to meet compliance needs.
-To check out a sample payload for the event and to learn about other calling events published to Event Grid, check out this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall).
+For a sample payload of the event and more information on other calling events published to Event Grid, refer to this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall).
Here is an example of an Event Grid Webhook subscription where the event type filter is listening only to the `IncomingCall` event. ![Image showing IncomingCall subscription.](./media/subscribe-incoming-call-event-grid.png)
-## Call routing in Call Automation or Event Grid
+## Call routing options with Call Automation and Event Grid
-You can use [advanced filters](../../../event-grid/event-filtering.md) in your Event Grid subscription to subscribe to an `IncomingCall` notification for a specific source/destination phone number or Azure Communication Services identity and sent it to an endpoint such as a Webhook subscription. That endpoint application can then make a decision to **redirect** the call using the Call Automation SDK to another Azure Communication Services identity or to the PSTN.
+In Call Automation and Event Grid, call routing can be tailored to your specific needs. By using [advanced filters](../../../event-grid/event-filtering.md) within your Event Grid subscription, you can subscribe to an `IncomingCall` notification that pertains to a specific source/destination phone number or an Azure Communication Services identity. This notification can then be directed to an endpoint, such as a Webhook subscription. Using the Call Automation SDK, the endpoint application can then make a decision to **redirect** the call to another Azure Communication Services identity or to the PSTN.
> [!NOTE]
-> In many cases you will want to configure filtering in Event Grid due to the scenarios described above generating an `IncomingCall` event so that your application only receives events it should be responding to. For example, if you want to redirect an inbound PSTN call to an ACS endpoint and you don't use a filter, your Event Grid subscription will receive two `IncomingCall` events; one for the PSTN call and one for the ACS user even though you had not intended to receive the second notification. Failure to handle these scenarios using filters or some other mechanism in your application can cause infinite loops and/or other undesired behavior.
+> To ensure that your application receives only the necessary events, it is recommended to configure filtering in Event Grid. This is particularly crucial in scenarios that generate `IncomingCall` events, such as redirecting an inbound PSTN call to an Azure Communication Services endpoint. If a filter isn't used, your Event Grid subscription receives two `IncomingCall` events - one for the PSTN call and one for the Azure Communication Services user - even though you intended to receive only the first notification. Neglecting to handle such scenarios using filters or other mechanisms in your application can result in infinite loops and other undesirable behavior.
Here is an example of an advanced filter on an Event Grid subscription watching for the `data.to.PhoneNumber.Value` string starting with a PSTN phone number of `+18005551212`.
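For illustration, the filter portion of such an Event Grid subscription might look like the following sketch (it follows the Event Grid subscription `filter` schema; the key, operator, and value simply mirror the example above):

```json
"filter": {
  "includedEventTypes": [ "Microsoft.Communication.IncomingCall" ],
  "advancedFilters": [
    {
      "key": "data.to.PhoneNumber.Value",
      "operatorType": "StringBeginsWith",
      "values": [ "+18005551212" ]
    }
  ]
}
```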
## Number assignment
-Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you can maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime.
+When using the `IncomingCall` notification in Azure Communication Services, you have the freedom to associate any particular number with any endpoint. For example, if you obtained a PSTN phone number of `+14255551212` and wish to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you should maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent that matches the phone number in the **to** field, you can invoke the `Redirect` API and provide the user's identity. In other words, you can manage the number assignment within your application and route or answer calls at runtime.
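As a minimal sketch of that pattern, assuming the Python Call Automation SDK (`azure-communication-callautomation`); the connection string, mapping table, and payload field casing are illustrative and should be verified against the sample payload in the guide linked earlier:

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    CommunicationUserIdentifier,
)

# Illustrative mapping maintained by your application: dialed number -> ACS identity.
NUMBER_TO_IDENTITY = {
    "+14255551212": "375f0e2f-e8db-4449-9bf7-2054b02e42b4",
}

client = CallAutomationClient.from_connection_string("<acs-connection-string>")

def handle_incoming_call(event_data: dict) -> None:
    """Redirect an IncomingCall event to the identity assigned to the dialed number."""
    dialed_number = event_data["to"]["phoneNumber"]["value"]  # casing per the sample payload
    identity = NUMBER_TO_IDENTITY.get(dialed_number)
    if identity is not None:
        client.redirect_call(
            incoming_call_context=event_data["incomingCallContext"],
            target_participant=CommunicationUserIdentifier(identity),
        )
```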
## Best Practices
-1. Event Grid requires you to prove ownership of your Webhook endpoint before it starts delivering events to that endpoint. This requirement prevents a malicious user from flooding your endpoint with events. If you're facing issues with receiving events, ensure the webhook configured is verified by handling `SubscriptionValidationEvent`. For more information, see this [guide](../../../event-grid/webhook-event-delivery.md).
-2. Upon the receipt of an incoming call event, if your application doesn't respond back with 200Ok to Event Grid in time, Event Grid uses exponential backoff retry to send the again. However, an incoming call only rings for 30 seconds, and acting on a call after that won't work. To avoid retries for expired or stale calls, we recommend setting the retry policy as - Max Event Delivery Attempts to 2 and Event Time to Live to 1 minute. These settings can be found under Additional Features tab of the event subscription. Learn more about retries [here](../../../event-grid/delivery-and-retry.md).
-
-3. We recommend you to enable logging for your Event Grid resource to monitor events that failed to deliver. Navigate to the system topic under Events tab of your Communication resource and enable logging from the Diagnostic settings. Failure logs can be found in 'AegDeliveryFailureLogs' table.
+1. To ensure that Event Grid delivers events to your Webhook endpoint and to prevent malicious users from flooding your endpoint with events, you need to prove ownership of your endpoint. If you have trouble receiving events, confirm that the configured webhook is verified by handling `SubscriptionValidationEvent`, as shown in the sketch after this list. For more information, refer to this [guide](../../../event-grid/webhook-event-delivery.md).
+2. When an incoming call event is received, if your application fails to respond with a 200 OK status code to Event Grid within the required time frame, Event Grid uses exponential backoff to retry sending the event. However, an incoming call only rings for 30 seconds, and responding to a call after that time won't be effective. To prevent retries for expired or stale calls, we recommend setting Max Event Delivery Attempts to 2 and Event Time to Live to 1 minute in the retry policy. You can find these settings under the Additional Features tab of the event subscription. Learn more about retries [here](../../../event-grid/delivery-and-retry.md).
+3. We recommend that you enable logging for your Event Grid resource to monitor events that fail to deliver. To do this, navigate to the system topic under the Events tab of your Communication Services resource and enable logging from the Diagnostic settings. Failure logs can be found in the 'AegDeliveryFailureLogs' table.
```sql
AegDeliveryFailureLogs
```
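As a minimal sketch of the handshake described in item 1, assuming a Flask webhook (the route path is illustrative), the endpoint proves ownership by echoing the validation code back to Event Grid:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/incoming-call", methods=["POST"])
def on_event_grid_event():
    # Event Grid delivers events as a JSON array.
    for event in request.get_json():
        # Sent once at subscription creation to prove you own the endpoint.
        if event["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return jsonify({"validationResponse": event["data"]["validationCode"]})
        if event["eventType"] == "Microsoft.Communication.IncomingCall":
            pass  # Answer, redirect, or reject the call here.
    return ("", 200)
```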
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
description: Conceptual information about playing audio in call using Call Autom
Previously updated : 09/06/2022 Last updated : 08/11/2023 # Playing audio in call
-The play action provided through the call automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. The play action allows you to provide Azure Communication Services access to your pre-recorded audio files with support for authentication.
+The play action provided through the Azure Communication Services Call Automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. You can play audio to call participants through one of two methods:
+- Providing Azure Communication Services access to prerecorded audio files in WAV format, with support for authentication.
+- Providing regular text that is converted into speech output through the integration with Azure AI services.
+
+You can use the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human-like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages, and locales, see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). (Supported in public preview.)
> [!NOTE]
> Azure Communication Services currently only supports WAV files formatted as mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md).
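As a sketch of both play methods, assuming the Python Call Automation SDK with an established call connection (the connection string, call connection ID, file URL, and voice name are all illustrative):

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    FileSource,
    TextSource,
)

client = CallAutomationClient.from_connection_string("<acs-connection-string>")
call_connection = client.get_call_connection("<call-connection-id>")

# Method 1: play a prerecorded WAV file (mono, 16 KHz) from a URL the service can reach.
call_connection.play_media_to_all(FileSource(url="https://contoso.com/prompt.wav"))

# Method 2 (public preview): synthesize speech from text with a prebuilt neural voice.
call_connection.play_media_to_all(
    TextSource(text="Thank you for calling.", voice_name="en-US-JennyNeural")
)
```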
-The Play action allows you to provide access to a pre-recorded audio file of WAV format that Azure Communication Services can access with support for authentication.
+## Prebuilt Neural Text to Speech voices
+Microsoft uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis occur simultaneously, resulting in a more fluid and natural-sounding output. You can use these neural voices to make interactions with your chatbots and voice assistants more natural and engaging. There are over 100 prebuilt voices to choose from. Learn more about [Azure Text-to-Speech voices](../../../../articles/cognitive-services/Speech-Service/language-support.md).
## Common use cases
-The play action can be used in many ways, below are some examples of how developers may wish to use the play action in their applications.
+The play action can be used in many ways. Here are some examples of how developers may use it in their applications.
### Announcements
Your application might want to play an announcement when a participant joins or leaves the call, to notify other users.
In scenarios with IVRs and virtual assistants, you can use your application or b
The play action can also be used to play hold music for callers. This action can be set up in a loop so that the music keeps playing until an agent is available to assist the caller.
### Playing compliance messages
-As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call will be recorded for quality purposes".
+As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call is recorded for quality purposes."
+
+## Sample architecture for playing audio in call using Text-To-Speech (Public preview)
+
+![Diagram showing sample architecture for Play with AI.](./media/play-ai.png)
## Sample architecture for playing audio in a call
## Next Steps
- Check out our how-to guide to learn [how to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users.
- Learn about [usage and operational logs](../analytics/logs/call-automation-logs.md) published by call automation.
+- Learn about [gathering customer input](./recognize-action.md).
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md
description: Conceptual information about using Recognize action to gather user
Previously updated : 09/16/2022 Last updated : 08/09/2023 # Gathering user input
-With the Recognize action developers will be able to enhance their IVR or contact center applications to gather user input. One of the most common scenarios of recognition is to play a message and request user input. This input is received in the form of DTMF (input via the digits on their calling device) which then allows the application to navigate the user to the next action.
+With the release of the Azure Communication Services Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common recognition scenarios is playing a message that prompts the user to provide a response, which the application then recognizes and acts on with a corresponding action. Input from callers can be received in several ways, including DTMF (user input via the digits on their calling device), speech, or a combination of both.
+
+**Voice recognition with speech-to-text (Public Preview)**
+
+[Azure Communication Services integration with Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) allows you, through the Recognize action, to analyze audio in real time and transcribe spoken words into text. Out of the box, Microsoft uses a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This model is pretrained with dialects and phonetics representing various common domains. For more information about supported languages, see [Languages and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md).
**DTMF**

Dual-tone multifrequency (DTMF) recognition is the process of understanding the tones and sounds generated by a telephone when a number is pressed. Equipment at the receiving end listens for the specific tones and converts them into commands. These commands generally signal user intent when navigating a menu in an IVR scenario, or in some cases capture important information that the user must provide via their phone's keypad.

**DTMF events and their associated tones**
## Common use cases
-The recognize action can be used for many reasons, below are a few examples of how developers can use the recognize action in their application.
+The recognize action can be used for many reasons. Here are a few examples of how developers can use the recognize action in their application.
### Improve user journey with self-service prompts
- **Users can control the call** - By enabling input recognition, you allow the caller to navigate your IVR menu and provide information that can be used to resolve their query.
- **Gather user information** - By enabling input recognition, your application can gather input from callers, such as account numbers or credit card information.
+- **Transcribe caller response** - With voice recognition, you can collect user input, transcribe the audio to text, and analyze it to carry out a specific business action.
### Interrupt audio prompts
**Users can exit an IVR menu and speak to a human agent** - With DTMF interruption, your application can allow users to interrupt the flow of the IVR menu and speak to a human agent.
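As a minimal sketch of DTMF recognition, assuming the Python Call Automation SDK (the target number, prompt URL, and parameter values are illustrative; verify parameter names against the current SDK reference):

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    FileSource,
    PhoneNumberIdentifier,
    RecognizeInputType,
)

client = CallAutomationClient.from_connection_string("<acs-connection-string>")
call_connection = client.get_call_connection("<call-connection-id>")

# Play a menu prompt and collect a single DTMF tone from the caller.
call_connection.start_recognizing_media(
    input_type=RecognizeInputType.DTMF,
    target_participant=PhoneNumberIdentifier("+14255550123"),
    play_prompt=FileSource(url="https://contoso.com/menu-prompt.wav"),
    dtmf_max_tones_to_collect=1,
    interrupt_prompt=True,  # let the caller key a digit before the prompt finishes
)
```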
+## Sample architecture for gathering user input in a call with voice recognition
+
+[ ![Diagram showing sample architecture for Recognize AI Action.](./media/recognize-ai-flow.png) ](./media/recognize-ai-flow.png#lightbox)
## Sample architecture for gathering user input in a call
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
Last updated 12/01/2021
+ # Calling capabilities supported for Teams users in Calling SDK
communication-services Phone Number Management For Argentina https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-argentina.md
+
+ Title: Phone Number Management for Argentina
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Argentina.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Argentina
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Argentina.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Argentina phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Australia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-australia.md
+
+ Title: Phone Number Management for Australia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Australia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Australia
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Australia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free |- | - | Public Preview | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Australia phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Austria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-austria.md
+
+ Title: Phone Number Management for Austria
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Austria.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Austria
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Austria.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Austria phone numbers are available
+| Country/Region |
+| :- |
+|Austria|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Belgium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-belgium.md
+
+ Title: Phone Number Management for Belgium
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Belgium.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Belgium
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Belgium.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Belgium phone numbers are available
+| Country/Region |
+| :- |
+|Belgium|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Brazil https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-brazil.md
+
+ Title: Phone Number Management for Brazil
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Brazil.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Brazil
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Brazil.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Brazil phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Canada https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-canada.md
+
+ Title: Phone Number Management for Canada
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Canada.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Canada
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Canada.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free |General Availability | General Availability | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Canada phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Chile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-chile.md
+
+ Title: Phone Number Management for Chile
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Chile.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Chile
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Chile.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Chile phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-china.md
+
+ Title: Phone Number Management for China
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in China.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for China
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in China.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where China phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Colombia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-colombia.md
+
+ Title: Phone Number Management for Colombia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Colombia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Colombia
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Colombia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Colombia phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Denmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-denmark.md
+
+ Title: Phone Number Management for Denmark
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Denmark.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Denmark
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Denmark.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Denmark phone numbers are available
+| Country/region |
+| :- |
+|Denmark|
+|Ireland|
+|Italy|
+|Sweden|
+|United States|
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Estonia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-estonia.md
+
+ Title: Phone Number Management for Estonia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Estonia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Estonia
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Estonia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Alphanumeric Sender ID\* | Public Preview | - | - | - |
+
+\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Alphanumeric Sender ID is available
+| Country/Region |
+| :- |
+|Australia|
+|Austria|
+|Denmark|
+|Estonia|
+|France|
+|Germany|
+|Italy|
+|Latvia|
+|Lithuania|
+|Netherlands|
+|Poland|
+|Portugal|
+|Spain|
+|Sweden|
+|Switzerland|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Finland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-finland.md
+
+ Title: Phone Number Management for Finland
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Finland.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Finland
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Finland.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Finland phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For France https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-france.md
+
+ Title: Phone Number Management for France
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in France.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for France
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in France.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free |- | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+|Alphanumeric Sender ID\**|Public Preview|-|-|-|
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where France phone numbers are available
+| Country/Region |
+| :- |
+|France|
+|Italy|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Germany https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-germany.md
+
+ Title: Phone Number Management for Germany
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Germany.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Germany
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Germany.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free |- | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+|Alphanumeric Sender ID\**|Public Preview|-|-|-|
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Germany phone numbers are available
+| Country/Region |
+| :- |
+|Germany|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Hong Kong https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-hong-kong.md
+
+ Title: Phone Number Management for Hong Kong
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Hong Kong.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Hong Kong
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Hong Kong.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Hong Kong phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Indonesia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-indonesia.md
+
+ Title: Phone Number Management for Indonesia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Indonesia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Indonesia
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Indonesia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Indonesia phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Ireland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-ireland.md
+
+ Title: Phone Number Management for Ireland
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Ireland.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Ireland
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Ireland.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free |- | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+|Alphanumeric Sender ID\**|Public Preview|-|-|-|
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Ireland phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Israel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-israel.md
+
+ Title: Phone Number Management for Israel
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Israel.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Israel
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Israel.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Israel phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Italy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-italy.md
+
+ Title: Phone Number Management for Italy
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Italy.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Italy
+Use the following tables to find all the relevant information on number availability, eligibility, and restrictions for phone numbers in Italy.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free*** |- | - | General Availability | General Availability\* |
+| Local*** | - | - | General Availability | General Availability\* |
+|Alphanumeric Sender ID\**|Public Preview|-|-|-|
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+\*** Phone numbers from Italy can only be purchased for own use. Reselling or suballocating to another party is not allowed.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, due to regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\* In some countries/regions, number purchases are allowed for own use only. Reselling or suballocating to another party isn't allowed. Because of this, purchases for CSP and LSP customers aren't allowed.
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Italy phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Japan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-japan.md
+
+ Title: Phone Number Management for Japan
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Japan.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Japan
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Japan.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| National | - | - | Public Preview | Public Preview\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Japan phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
+
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Latvia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-latvia.md
+
+ Title: Phone Number Management for Latvia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Latvia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Latvia
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Latvia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Alphanumeric Sender ID\* | Public Preview | - | - | - |
++
+\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+
+\* Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Alphanumeric Sender ID is available
+| Country/Region |
+| :- |
+|Australia|
+|Austria|
+|Denmark|
+|Estonia|
+|France|
+|Germany|
+|Italy|
+|Latvia|
+|Lithuania|
+|Netherlands|
+|Poland|
+|Portugal|
+|Spain|
+|Sweden|
+|Switzerland|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Lithuania https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-lithuania.md
+
+ Title: Phone Number Management for Lithuania
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Lithuania.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Lithuania
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Lithuania.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Alphanumeric Sender ID\* | Public Preview | - | - | - |
++
+\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+
+\* Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Alphanumeric Sender ID is available
+| Country/Region |
+| :- |
+|Australia|
+|Austria|
+|Denmark|
+|Estonia|
+|France|
+|Germany|
+|Italy|
+|Latvia|
+|Lithuania|
+|Netherlands|
+|Poland|
+|Portugal|
+|Spain|
+|Sweden|
+|Switzerland|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Luxembourg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-luxembourg.md
+
+ Title: Phone Number Management for Luxembourg
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Luxembourg.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Luxembourg
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Luxembourg.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
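Since Luxembourg offers local (geographic) numbers in public preview, a search sketch for that number type might look like the following, again with the `azure-communication-phonenumbers` Python package; the connection string is a placeholder, and the optional `area_code` keyword shown here is an assumption about how you'd target a specific locality.

```python
# Minimal sketch, assuming the azure-communication-phonenumbers package; the
# connection string and area code are placeholders.
from azure.communication.phonenumbers import (
    PhoneNumbersClient,
    PhoneNumberAssignmentType,
    PhoneNumberCapabilities,
    PhoneNumberCapabilityType,
    PhoneNumberType,
)

client = PhoneNumbersClient.from_connection_string("<your-connection-string>")

poller = client.begin_search_available_phone_numbers(
    "LU",                                   # ISO 3166 country code for Luxembourg
    PhoneNumberType.GEOGRAPHIC,             # "Local" in the table above
    PhoneNumberAssignmentType.APPLICATION,
    PhoneNumberCapabilities(
        calling=PhoneNumberCapabilityType.INBOUND_OUTBOUND,
        sms=PhoneNumberCapabilityType.NONE,  # no SMS capability in Luxembourg
    ),
    quantity=1,
    area_code="<area-code>",                # assumed keyword for locality targeting
)
print(poller.result().phone_numbers)
```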
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
++
+## Azure subscription billing locations where Luxembourg phone numbers are available
+| Country/Region |
+| :- |
+|Luxembourg|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-malaysia.md
+
+ Title: Phone Number Management for Malaysia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Malaysia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Malaysia
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Malaysia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Malaysia phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Mexico https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-mexico.md
+
+ Title: Phone Number Management for Mexico
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Mexico.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Mexico
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Mexico.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Mexico phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Netherlands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-netherlands.md
+
+ Title: Phone Number Management for Netherlands
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Netherlands.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Netherlands
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in the Netherlands.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
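If you already own a Netherlands number, its calling capabilities can be adjusted within what the table above allows; a minimal sketch with the `azure-communication-phonenumbers` Python package follows, where the phone number is a placeholder.

```python
# Minimal sketch, assuming the azure-communication-phonenumbers package and an
# already-purchased NL number (placeholder).
from azure.communication.phonenumbers import PhoneNumbersClient, PhoneNumberCapabilityType

client = PhoneNumbersClient.from_connection_string("<your-connection-string>")

# Enable two-way calling; SMS stays off, matching the NL capability table.
poller = client.begin_update_phone_number_capabilities(
    "+31xxxxxxxxx",
    calling=PhoneNumberCapabilityType.INBOUND_OUTBOUND,
    sms=PhoneNumberCapabilityType.NONE,
)
updated = poller.result()
print(updated.capabilities.calling, updated.capabilities.sms)
```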
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Netherlands phone numbers are available
+| Country/Region |
+| :- |
+|Netherlands|
+|United States*|
+
+\*Alphanumeric Sender ID only
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For New Zealand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-new-zealand.md
+
+ Title: Phone Number Management for New Zealand
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in New Zealand.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for New Zealand
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in New Zealand.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where New Zealand phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Norway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-norway.md
+
+ Title: Phone Number Management for Norway
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Norway.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Norway
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Norway.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
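Because Norway toll-free and local numbers are generally available, the purchase step after a successful search is straightforward; a minimal sketch follows, assuming a search ID returned by an earlier `begin_search_available_phone_numbers` call.

```python
# Minimal sketch, assuming the azure-communication-phonenumbers package and a
# search ID from a prior search against "NO" (placeholders).
from azure.communication.phonenumbers import PhoneNumbersClient

client = PhoneNumbersClient.from_connection_string("<your-connection-string>")

# A search reserves the found numbers for a short hold period; purchasing
# consumes that reservation, so complete the purchase promptly.
purchase_poller = client.begin_purchase_phone_numbers("<search-id>")
purchase_poller.result()  # raises on failure
print("Purchase completed")
```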
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Norway phone numbers are available
+| Country/Region |
+| :- |
+|Norway|
+|France|
+|Sweden|
+++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Philippines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-philippines.md
+
+ Title: Phone Number Management for Philippines
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Philippines.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Philippines
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in the Philippines.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Philippines phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Poland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-poland.md
+
+ Title: Phone Number Management for Poland
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Poland.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Poland
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Poland.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Poland phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Portugal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-portugal.md
+
+ Title: Phone Number Management for Portugal
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Portugal.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Portugal
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Portugal.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Portugal phone numbers are available
+| Country/Region |
+| :- |
+|Portugal|
+|United States*|
+
+\*Alphanumeric Sender ID only
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Saudi Arabia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-saudi-arabia.md
+
+ Title: Phone Number Management for Saudi Arabia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Saudi Arabia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Saudi Arabia
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Saudi Arabia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Saudi Arabia phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Singapore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-singapore.md
+
+ Title: Phone Number Management for Singapore
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Singapore.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Singapore
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Singapore.
++
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Singapore phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Slovakia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-slovakia.md
+
+ Title: Phone Number Management for Slovakia
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Slovakia.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Slovakia
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Slovakia.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
++
+## Azure subscription billing locations where Slovakia phone numbers are available
+| Country/Region |
+| :- |
+|Slovakia|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For South Africa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-south-africa.md
+
+ Title: Phone Number Management for South Africa
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in South Africa.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for South Africa
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in South Africa.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where South Africa phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For South Korea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-south-korea.md
+
+ Title: Phone Number Management for South Korea
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in South Korea.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for South Korea
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in South Korea.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where South Korea phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Spain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-spain.md
+
+ Title: Phone Number Management for Spain
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Spain.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Spain
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Spain.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
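To confirm which of the capabilities above ended up on a number you own, you can read the number back; a minimal sketch with the `azure-communication-phonenumbers` Python package, where the Spanish number is a placeholder:

```python
# Minimal sketch, assuming the azure-communication-phonenumbers package and an
# owned Spanish number (placeholder).
from azure.communication.phonenumbers import PhoneNumbersClient

client = PhoneNumbersClient.from_connection_string("<your-connection-string>")

number = client.get_purchased_phone_number("+34xxxxxxxxx")
print(number.phone_number_type, number.capabilities.calling, number.capabilities.sms)
```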
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Spain phone numbers are available
+| Country/Region |
+| :- |
+|Spain|
+|United States*|
+
+\* Alphanumeric Sender ID only
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Sweden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-sweden.md
+
+ Title: Phone Number Management for Sweden
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Sweden.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Sweden
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Sweden.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Sweden phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
+++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Switzerland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-switzerland.md
+
+ Title: Phone Number Management for Switzerland
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Switzerland.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Switzerland
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Switzerland.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | Public Preview | Public Preview\* |
+| Local | - | - | Public Preview | Public Preview\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Switzerland phone numbers are available
+| Country/Region |
+| :- |
+|Switzerland|
+|United States*|
+
+\* Alphanumeric Sender ID only
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Taiwan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-taiwan.md
+
+ Title: Phone Number Management for Taiwan
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Taiwan.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Taiwan
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Taiwan.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Taiwan phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
+++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For Thailand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-thailand.md
+
+ Title: Phone Number Management for Thailand
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Thailand.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for Thailand
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in Thailand.
++
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where Thailand phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For United Arab Emirates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-arab-emirates.md
+
+ Title: Phone Number Management for United Arab Emirates
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in United Arab Emirates.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for United Arab Emirates
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in the United Arab Emirates.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | - | - | - | Public Preview\* |
++
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
++
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement\**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where United Arab Emirates phone numbers are available
+| Country/Region |
+| :- |
+|Australia|
+|Canada|
+|France|
+|Germany|
+|Italy|
+|Japan|
+|Spain|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For United Kingdom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-kingdom.md
+
+ Title: Phone Number Management for United Kingdom
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in United Kingdom.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for United Kingdom
+Use the following tables to find information on number availability, eligibility, and restrictions for phone numbers in the United Kingdom.
++
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | :- |
+| Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
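Since UK toll-free numbers support SMS in general availability, a minimal send sketch with the `azure-communication-sms` Python package follows; both numbers are placeholders.

```python
# Minimal sketch, assuming the azure-communication-sms package and an owned UK
# toll-free number (placeholders).
from azure.communication.sms import SmsClient

sms_client = SmsClient.from_connection_string("<your-connection-string>")

responses = sms_client.send(
    from_="+44xxxxxxxxxx",        # your UK toll-free number (SMS is GA)
    to=["+44xxxxxxxxxx"],         # destination in E.164 format
    message="Hello from Azure Communication Services!",
    enable_delivery_report=True,  # optional delivery reports via Event Grid
)
for result in responses:
    print(result.to, result.successful, result.http_status_code)
```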
+
+## Subscription eligibility
+
+To acquire a phone number, you need a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go |
+| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where United Kingdom phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
++
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Phone Number Management For United States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-states.md
+
+ Title: Phone Number Management for United States
+
+description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in United States.
+++++ Last updated : 03/30/2023+++++
+# Phone number management for United States
+Use the tables below to find relevant information on number availability, eligibility, and restrictions for phone numbers in the United States.
+
+## Number types and capabilities availability
+
+| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
+| :- | :- | :- | :- | : |
+| Toll-Free |General Availability | General Availability | General Availability | General Availability\* |
+| Local | - | - | General Availability | General Availability\* |
+
+\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
+
+## Subscription eligibility
+
+To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired with Azure free credits. Also, for regulatory reasons, phone number availability depends on your Azure subscription billing location.
+
+More details on eligible subscription types are as follows:
+
+| Number Type | Eligible Azure Agreement Type |
+| :- | :-- |
+| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
+| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go |
+
+\* In some countries/regions, number purchases are only allowed for own use. Reselling or suballocating to another party is not allowed.
+
+\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
++
+## Azure subscription billing locations where United States phone numbers are available
+| Country/Region |
+| :- |
+|Canada|
+|Denmark|
+|Ireland|
+|Italy|
+|Puerto Rico|
+|Sweden|
+|United Kingdom|
+|United States|
+
+## Find information about other countries/regions
++
+## Next steps
+
+For more information about Azure Communication Services' telephony options, see the following pages:
+
+- [Learn more about Telephony](../telephony/telephony-concept.md)
+- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
# Country/regional availability of telephone numbers and subscription eligibility
-Numbers can be purchased on eligible Azure subscriptions and in geographies where Communication Services is legally eligible to provide them.
+Numbers can be purchased on eligible Azure subscriptions and in geographies where Communication Services is legally eligible to provide them. The capabilities and numbers that are available to you depend on the country/region that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country/region due to regulatory requirements.
++
+**Use the drop-down to select the country/region where you're getting numbers. You'll find information about availability, restrictions, and other related details on the country-specific page.**
+> [!div class="op_single_selector"]
+>
+> - [Argentina](../numbers/phone-number-management-for-argentina.md)
+> - [Australia](../numbers/phone-number-management-for-australia.md)
+> - [Austria](../numbers/phone-number-management-for-austria.md)
+> - [Belgium](../numbers/phone-number-management-for-belgium.md)
+> - [Brazil](../numbers/phone-number-management-for-brazil.md)
+> - [Canada](../numbers/phone-number-management-for-canada.md)
+> - [Chile](../numbers/phone-number-management-for-chile.md)
+> - [China](../numbers/phone-number-management-for-china.md)
+> - [Colombia](../numbers/phone-number-management-for-colombia.md)
+> - [Denmark](../numbers/phone-number-management-for-denmark.md)
+> - [Estonia](../numbers/phone-number-management-for-estonia.md)
+> - [Finland](../numbers/phone-number-management-for-finland.md)
+> - [France](../numbers/phone-number-management-for-france.md)
+> - [Germany](../numbers/phone-number-management-for-germany.md)
+> - [Hong Kong](../numbers/phone-number-management-for-hong-kong.md)
+> - [Indonesia](../numbers/phone-number-management-for-indonesia.md)
+> - [Ireland](../numbers/phone-number-management-for-ireland.md)
+> - [Israel](../numbers/phone-number-management-for-israel.md)
+> - [Italy](../numbers/phone-number-management-for-italy.md)
+> - [Japan](../numbers/phone-number-management-for-japan.md)
+> - [Latvia](../numbers/phone-number-management-for-latvia.md)
+> - [Lithuania](../numbers/phone-number-management-for-lithuania.md)
+> - [Luxembourg](../numbers/phone-number-management-for-luxembourg.md)
+> - [Malaysia](../numbers/phone-number-management-for-malaysia.md)
+> - [Mexico](../numbers/phone-number-management-for-mexico.md)
+> - [Netherlands](../numbers/phone-number-management-for-netherlands.md)
+> - [New Zealand](../numbers/phone-number-management-for-new-zealand.md)
+> - [Norway](../numbers/phone-number-management-for-norway.md)
+> - [Philippines](../numbers/phone-number-management-for-philippines.md)
+> - [Poland](../numbers/phone-number-management-for-poland.md)
+> - [Portugal](../numbers/phone-number-management-for-portugal.md)
+> - [Saudi Arabia](../numbers/phone-number-management-for-saudi-arabia.md)
+> - [Singapore](../numbers/phone-number-management-for-singapore.md)
+> - [Slovakia](../numbers/phone-number-management-for-slovakia.md)
+> - [South Africa](../numbers/phone-number-management-for-south-africa.md)
+> - [South Korea](../numbers/phone-number-management-for-south-korea.md)
+> - [Spain](../numbers/phone-number-management-for-spain.md)
+> - [Sweden](../numbers/phone-number-management-for-sweden.md)
+> - [Switzerland](../numbers/phone-number-management-for-switzerland.md)
+> - [Taiwan](../numbers/phone-number-management-for-taiwan.md)
+> - [Thailand](../numbers/phone-number-management-for-thailand.md)
+> - [United Arab Emirates](../numbers/phone-number-management-for-united-arab-emirates.md)
+> - [United Kingdom](../numbers/phone-number-management-for-united-kingdom.md)
+> - [United States](../numbers/phone-number-management-for-united-states.md)
-## Subscription eligibility
-
-To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired on trial accounts or by Azure free credits.
-
-More details on eligible subscription types are as follows:
-
-| Number Type | Eligible Azure Agreement Type |
-| :- | :-- |
-| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go |
-| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go |
-| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go |
-
-\* In some countries/regions, number purchases are only allowed for own use. Reselling or suballocating to other parties is not allowed. Due to this, purchases for CSP and LSP customers aren't allowed.
-
-\** Applications from all other subscription types will be reviewed and approved on a case-by-case basis. Create a support ticket or reach out to acstns@microsoft.com for assistance with your application.
-
-## Number capabilities and availability
-
-The capabilities and numbers that are available to you depend on the country/region that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country/region due to regulatory requirements.
-
-The following tables summarize current availability:
-
-## Customers with Australia Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Australia, Germany, Netherlands, United Kingdom, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Austria Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Austria | Toll-Free** | - | - | Public Preview | Public Preview\* |
-| Austria | Local** | - | - | Public Preview | Public Preview\* |
-| Austria, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers in Austria can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Belgium Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Belgium | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Belgium | Local | - | - | Public Preview | Public Preview\* |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-## Customers with Canada Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Denmark Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
-| Italy | Local** | - | - | General Availability | General Availability\* |
-| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
-| Sweden | Local | - | - | General Availability | General Availability\* |
-| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
-| Ireland | Local | - | - | General Availability | General Availability\* |
-| Denmark, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-## Customers with Estonia Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Estonia, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with France Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| France | Local** | - | - | Public Preview | Public Preview\* |
-| France | Toll-Free** | - | - | Public Preview | Public Preview\* |
-| Norway | Local** | - | - | Public Preview | Public Preview\* |
-| Norway | Toll-Free | - | - | Public Preview | Public Preview\* |
-| France, Germany, Netherlands, United Kingdom, Australia, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers in France can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Germany Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Germany | Local | - | - | Public Preview | Public Preview\* |
-| Germany | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Alphanumeric sender ID in Netherlands can only be purchased for own use. Reselling or suballocating to another party is not allowed. Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Ireland Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
-| Ireland | Local | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
-| Italy | Local** | - | - | General Availability | General Availability\* |
-| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
-| Sweden | Local | - | - | General Availability | General Availability\* |
-| Ireland, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Italy Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| : | :-- | : | : | :- | : |
-| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
-| Italy | Local** | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
-| Sweden | Local | - | - | General Availability | General Availability\* |
-| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
-| Ireland | Local | - | - | General Availability | General Availability\* |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| France | Local** | - | - | Public Preview | Public Preview\* |
-| France | Toll-Free** | - | - | Public Preview | Public Preview\* |
-| Italy, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers from Italy, France can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Latvia Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Latvia, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Lithuania Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Lithuania, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Latvia, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Luxembourg Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Luxembourg | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Luxembourg | Local | - | - | Public Preview | Public Preview\* |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-## Customers with Netherlands Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Netherlands | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Netherlands | Local | - | - | Public Preview | Public Preview\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Netherlands, Germany, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Alphanumeric sender ID in Netherlands can only be purchased for own use. Reselling or suballocating to another party is not allowed. Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Norway Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Norway | Local** | - | - | Public Preview | Public Preview\* |
-| Norway | Toll-Free | - | - | Public Preview | Public Preview\* |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers in Norway can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-## Customers with Poland Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Poland, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - |
-
-\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Portugal Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Portugal | Toll-Free** | - | - | Public Preview | Public Preview\* |
-| Portugal | Local** | - | - | Public Preview | Public Preview\* |
-| Portugal, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Phone numbers in Portugal can only be purchased for own use. Reselling or suballocating to another party is not allowed.
-
-\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Slovakia Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Slovakia | Local | - | - | Public Preview | Public Preview\* |
-| Slovakia | Toll-Free | - | - | Public Preview | Public Preview\* |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-## Customers with Spain Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Spain | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Spain | Local | - | - | Public Preview | Public Preview\* |
-| Spain, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Sweden Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Sweden | Toll-Free | - | - | General Availability | General Availability\* |
-| Sweden | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Ireland | Toll-Free | - | - | General Availability | General Availability\* |
-| Ireland | Local | - | - | General Availability | General Availability\* |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| Italy | Toll-Free** | - | - | General Availability | General Availability\* |
-| Italy | Local** | - | - | General Availability | General Availability\* |
-| Norway | Local** | - | - | Public Preview | Public Preview\* |
-| Norway | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Sweden, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with Switzerland Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :-- | :- | :- | :- | : |
-| Switzerland | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Switzerland | Local | - | - | Public Preview | Public Preview\* |
-| Switzerland, Germany, Netherlands, United Kingdom, Australia, France, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with United Kingdom Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :-- | :- | :- | :- | : | : |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| United Kingdom, Germany, Netherlands, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - |
--
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
-
-## Customers with United States Azure billing addresses
-
-| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls |
-| :- | :- | :- | :- | :- | : |
-| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* |
-| USA | Short-Codes\** | General Availability | General Availability | - | - |
-| UK | Toll-Free | - | - | General Availability | General Availability\* |
-| UK | Local | - | - | General Availability | General Availability\* |
-| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* |
-| Canada | Local | - | - | General Availability | General Availability\* |
-| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* |
-| Denmark | Local | - | - | Public Preview | Public Preview\* |
-| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID\** | Public Preview | - | - | - |
-
-\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
-
-\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service.
## Next steps
-For more information about Azure Communication Services' telephony options please see the following pages:
+For more information about Azure Communication Services' telephony options, see the following pages:
- [Learn more about Telephony](../telephony/telephony-concept.md)
- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md)
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
+## Australia telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 16.00/mo |
+
+### Usage charges
+|Number type |To make calls* |To receive calls|
+|-||-|
+|Geographic |Starting at USD 0.0240/min |USD 0.0100/min |
+|Toll-free |Starting at USD 0.0240/min |USD 0.1750/min |
+
+\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv)
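+
+As a worked illustration of how leasing and usage charges combine (the minute volume is a hypothetical assumption, and taxes, fees, and destination-specific outbound rates are excluded):
+
+```bash
+# Estimated monthly cost for one Australian toll-free number with
+# 500 inbound minutes, using the rates from the tables above.
+lease=16.00          # USD per month, toll-free leasing charge
+inbound_rate=0.1750  # USD per minute, toll-free inbound usage
+minutes=500
+echo "$lease + $inbound_rate * $minutes" | bc -l   # prints 103.5000
+```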
+
+## China telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 54.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.3168/min |
+
+## Finland telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 40.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |Starting at USD 0.1888/min |
+
+## Hong Kong telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 25.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.0672/min |
+
+## Israel telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 15.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.1344/min |
+
+## New Zealand telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 40.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |Starting at USD 0.0666/min |
+
+## Poland telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 22.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |Starting at USD 0.1125/min |
+
+## Singapore telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 22.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.0650/min |
+
+## Taiwan telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 5.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.2718/min |
+
+## Thailand telephony offers
+
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 25.00/mo |
+
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.2377/min |
+++ *** Note: Pricing for all countries/regions is subject to change, as pricing is market-based and depends on third-party suppliers of telephony services. Additionally, pricing may include requisite taxes and fees.
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
zone_pivot_groups: acs-js-csharp-java-python-+ # Quickstart: Set up and manage access tokens for Teams users
communication-services Get Started Rooms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md
In this section you learned how to:
You may also want to:
 - Learn about [rooms concept](../../concepts/rooms/room-concept.md)
 - Learn about [voice and video calling concepts](../../concepts/voice-video-calling/about-call-types.md)
+ - Review Azure Communication Services [samples](../../samples/overview.md)
communication-services Join Rooms Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/join-rooms-call.md
Title: Quickstart - Join a room call
description: In this quickstart, you'll learn how to join a room call using web or native mobile calling SDKs --++ - Previously updated : 07/27/2022+ Last updated : 07/20/2023
-zone_pivot_groups: acs-web-ios-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
- # Quickstart: Join a room call + ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md). - Two or more Communication User Identities. [Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).-- A room resource. [Create and manage rooms](get-started-rooms.md)
+- A room that you've created, with a participant added to it. [Create and manage rooms](get-started-rooms.md)
+ ## Obtain user access token If you have already created users and have added them as participants in the room following the "Set up room participants" section in [this page](./get-started-rooms.md), then you can directly use those users to join the room.
az communication identity token issue --scope voip --connection-string "yourConn
For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli). ---
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).-- Two or more Communication User Identities. [Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).-- A room resource. [Create and manage rooms](get-started-rooms.md)
-## Obtain user access token
-You'll need to create a User Access Token for each call participant. [Learn how to create and manage user access tokens](../identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token.
-```azurecli-interactive
-az communication identity token issue --scope voip --connection-string "yourConnectionString"
-```
-For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli).
[!INCLUDE [Join a room call from iOS calling SDK](./includes/rooms-quickstart-call-ios.md)]+ ::: zone-end ::: zone pivot="platform-android" -
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).-- Two or more Communication User Identities. [Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).-- A room resource. [Create and manage rooms](get-started-rooms.md)-
-## Obtain user access token
-You'll need to create a User Access Token for each call participant. [Learn how to create and manage user access tokens](../identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token.
-```azurecli-interactive
-az communication identity token issue --scope voip --connection-string "yourConnectionString"
-```
-For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli).
::: zone-end ## Next steps
communication-services Get Started Teams Auto Attendant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md
# Quickstart: Join your calling app to a Teams Auto Attendant + In this quickstart, you learn how to start a call from an Azure Communication Services user to a Teams Auto Attendant. You achieve this with the following steps: 1. Enable federation of Azure Communication Services resource with Teams Tenant.
communication-services Get Started Teams Call Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md
# Quickstart: Join your calling app to a Teams call queue + In this quickstart, you learn how to start a call from an Azure Communication Services user to a Teams Call Queue. You achieve this with the following steps: 1. Enable federation of Azure Communication Services resource with Teams Tenant.
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
Title: Build a custom event management platform with Microsoft Teams, Graph and Azure Communication Services
+ Title: Build a custom event management platform with Microsoft Teams, Microsoft Graph and Azure Communication Services
description: Learn how to use Microsoft Teams, Graph and Azure Communication Services to build a custom event management platform.
-# Build a custom event management platform with Microsoft Teams, Graph and Azure Communication Services
+# Build a custom event management platform with Microsoft Teams, Microsoft Graph and Azure Communication Services
-The goal of this document is to reduce the time it takes for Event Management Platforms to apply the power of Microsoft Teams Webinars through integration with Graph APIs and ACS UI Library. The target audience is developers and decision makers. To achieve the goal, this document provides the following two functions: 1) an aid to help event management platforms quickly decide what level of integration would be right for them, and 2) a step-by-step end-to-end QuickStart to speed up implementation.
+The goal of this document is to reduce the time it takes for Event Management Platforms to apply the power of Microsoft Teams Webinars through integration with Microsoft Graph APIs and Azure Communication Services UI Library. The target audience is developers and decision makers. To achieve the goal, this document provides the following two functions: 1) an aid to help event management platforms quickly decide what level of integration would be right for them, and 2) a step-by-step end-to-end QuickStart to speed up implementation.
## What are virtual events and event management platforms?
-Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
+Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Microsoft Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
## What are the building blocks of an event management platform?
Throughout the rest of this tutorial, we will focus on how using Azure Communica
Microsoft Graph enables event management platforms to empower organizers to schedule and manage their events directly through the event management platform. For attendees, event management platforms can build custom registration flows right on their platform that registers the attendee for the event and generates unique credentials for them to join the Teams hosted event. >[!NOTE]
->For each required Graph API has different required scopes, ensure that your application has the correct scopes to access the data.
+>Each required Microsoft Graph API has different required scopes. Ensure that your application has the correct scopes to access the data.
### Scheduling registration-enabled events with Microsoft Graph
-1. Authorize application to use Graph APIs on behalf of service account. This authorization is required in order to have the application use credentials to interact with your tenant to schedule events and register attendees.
+1. Authorize application to use Microsoft Graph APIs on behalf of service account. This authorization is required in order to have the application use credentials to interact with your tenant to schedule events and register attendees.
 1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and receive notifications for them. We recommend not using a personal production account, given the overhead it might incur in the form of reminders. 2. As part of the application setup, the service account is used to sign in to the solution once. With this permission the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](../../active-directory/develop/access-tokens.md) and [refresh tokens](../../active-directory/develop/refresh-tokens.md).
- 3. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs.
+ 3. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Microsoft Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs.
 4. Refresh tokens can be revoked in the event of a breach or account termination.
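
As a hedged sketch of the kind of Graph call involved (the subject, times, and the $TOKEN variable are illustrative assumptions; a delegated token carrying the OnlineMeetings.ReadWrite scope is required):

```bash
# Create a Teams online meeting on behalf of the signed-in service
# account through Microsoft Graph v1.0.
curl -s -X POST "https://graph.microsoft.com/v1.0/me/onlineMeetings" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "startDateTime": "2023-09-01T14:00:00Z",
        "endDateTime": "2023-09-01T15:00:00Z",
        "subject": "Virtual event dry run"
      }'
```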
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
description: This article guides you through how to deploy an Azure Communicatio
+ Last updated 05/05/2023
You now need to wait for your resource to be provisioned and connected to the Mi
## Next steps -- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md)
+- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md)
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
description: Learn how to complete the prerequisite tasks required to deploy Azu
+ Last updated 05/05/2023
Wait for confirmation that Azure Communications Gateway is enabled before moving
## Next steps -- [Create an Azure Communications Gateway resource](deploy.md)
+- [Create an Azure Communications Gateway resource](deploy.md)
confidential-computing Harden A Linux Image To Remove Azure Guest Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/harden-a-linux-image-to-remove-azure-guest-agent.md
+
+ Title: Harden a Linux image to remove Azure guest agent
+description: Learn how to use the Azure CLI to harden a Linux image to remove the Azure guest agent.
++
++ Last updated : 8/03/2023++++
+# Harden a Linux image to remove Azure guest agent
+
+**Applies to:** :heavy_check_mark: Linux Images
+
+Azure supports two provisioning agents: [cloud-init](https://github.com/canonical/cloud-init) and the [Azure Linux Agent](https://github.com/Azure/WALinuxAgent) (WALA), which form the prerequisites for creating the [generalized images](/azure/virtual-machines/generalize#linux) (Azure Compute Gallery or Managed Image). The Azure Linux Agent contains Provisioning Agent code and Extension Handling code in one package.
+
+It's crucial to comprehend which functionality the VM loses before deciding to remove the Azure Linux Agent. Removal of the guest agent removes the functionality enumerated at [Azure Linux Agent](/azure/virtual-machines/extensions/agent-linux?branch=pr-en-us-247336).
+
+This "how to" shows you steps to remove guest agent from the Linux image.
+## Prerequisites
+
+- If you don't have an Azure subscription, [create a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Ubuntu image - you can choose one from the [Azure Marketplace](/azure/virtual-machines/linux/cli-ps-findimage).
+
+### Remove Azure Linux Agent and prepare a generalized Linux image
+
+Steps to create an image that removes the Azure Linux Agent are as follows:
+
+1. Download an Ubuntu image.
+
+ [How to download a Linux VHD from Azure](/azure/virtual-machines/linux/download-vhd?tabs=azure-portal)
+
+2. Mount the image.
+
+ Follow the instructions in step 2 of [remove sudo users from the Linux Image](/azure/confidential-computing/harden-the-linux-image-to-remove-sudo-users) to mount the image.
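+
+   If you prefer a self-contained sketch, the following illustrates one common approach (assumptions, not part of the linked article: a fixed/raw VHD named ubuntu.vhd with a single root partition; loop device names vary by system):
+
+   ```bash
+   # Attach the VHD to a loop device with partition scanning, then
+   # mount the root partition where the later steps expect it.
+   imagedevice=loop0p1
+   sudo losetup -fP --show ./ubuntu.vhd    # prints, e.g., /dev/loop0
+   sudo mkdir -p /mnt/dev/$imagedevice
+   sudo mount /dev/$imagedevice /mnt/dev/$imagedevice
+   # When finished (after the unmount step), detach with: sudo losetup -d /dev/loop0
+   ```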
+
+3. Remove the Azure Linux Agent.
+
+   Run the following as root to [remove the Azure Linux Agent](/azure/virtual-machines/linux/disable-provisioning):
+
+ For Ubuntu 18.04+
+ ```
+ sudo chroot /mnt/dev/$imagedevice/ apt -y remove walinuxagent
+ ```
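+
+ To double-check the removal (a quick optional verification, assuming the image is still mounted):
+ ```bash
+ # The walinuxagent package should no longer appear as installed;
+ # grep returning nothing means the removal succeeded.
+ sudo chroot /mnt/dev/$imagedevice/ dpkg -l | grep -i walinuxagent
+ ```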
++
+> [!NOTE]
+> If you know you won't reinstall the Azure Linux Agent again, you can also [remove the Azure Linux Agent artifacts](/azure/virtual-machines/linux/disable-provisioning#:~:text=Step%202%3A%20(Optional)%20Remove%20the%20Azure%20Linux%20Agent%20artifacts) by running the following step.
++
+4. (Optional) Remove the Azure Linux Agent artifacts.
+
+   If you know you won't reinstall the Linux Agent again, run the following; otherwise, skip this step:
+
+ For Ubuntu 18.04+
+ ```
+ sudo chroot /mnt/dev/$imagedevice/ rm -rf /var/lib/walinuxagent
+ sudo chroot /mnt/dev/$imagedevice/ rm -rf /etc/walinuxagent.conf
+ sudo chroot /mnt/dev/$imagedevice/ rm -rf /var/log/walinuxagent.log
+ ```
+
+5. Create a systemd service to provision the VM.
+
+   Since we're removing the Azure Linux Agent, we need to provide a mechanism to report ready. Copy the contents of the bash or Python script located [here](/azure/virtual-machines/linux/no-agent?branch=pr-en-us-247336#add-required-code-to-the-vm) to the mounted image and make the file executable (that is, grant execute permission on the file with chmod).
+ ```
+ sudo chmod +x /mnt/dev/$imagedevice/usr/local/azure-provisioning.sh
+ ```
+
+ To ensure the report-ready mechanism runs at first boot, create a [systemd service unit](/azure/virtual-machines/linux/no-agent#:~:text=Automating%20running%20the%20code%20at%20first%20boot)
+ and add it under /etc/systemd/system (this example names the unit file azure-provisioning.service).
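+
+ A minimal sketch of such a unit, written into the mounted image (the description text and ordering directive are assumptions; the script path matches the chmod step above):
+```bash
+sudo tee /mnt/dev/$imagedevice/etc/systemd/system/azure-provisioning.service >/dev/null <<'EOF'
+[Unit]
+Description=Azure Provisioning (report ready without the Linux Agent)
+After=network-online.target
+
+[Service]
+Type=oneshot
+ExecStart=/usr/local/azure-provisioning.sh
+RemainAfterExit=true
+
+[Install]
+WantedBy=multi-user.target
+EOF
+```
+ Then enable the service: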
+ ```
+ sudo chroot /mnt/dev/$imagedevice/ systemctl enable azure-provisioning.service
+ ```
+ Now the image is generalized and can be used to create a VM.
+
+6. Unmount the image.
+ ```
+ umount /mnt/dev/$imagedevice
+ ```
+
+   The prepared image no longer includes the Azure Linux Agent.
+
+7. Use the prepared image to deploy a confidential VM.
+
+   Follow the steps starting from step 4 in the [Create a custom image for Azure confidential VM](/azure/confidential-computing/how-to-create-custom-image-confidential-vm) document to deploy the agentless confidential VM.
+
+> [!NOTE]
+> If you're looking to deploy confidential VM scale sets using the custom image, note that some features related to autoscaling are restricted. While manual scaling rules continue to work as expected, autoscaling is limited due to the agentless custom image. More details on the restrictions can be found in the [provisioning agent](/azure/virtual-machines/linux/disable-provisioning) documentation. Alternatively, you can navigate to the metrics tab in the Azure portal to confirm this behavior.
+
+## Next steps
+
+[Create a custom image for Azure confidential VM](/azure/confidential-computing/how-to-create-custom-image-confidential-vm)
confidential-computing How To Leverage Virtual Tpms In Azure Confidential Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-leverage-virtual-tpms-in-azure-confidential-vms.md
These steps list out which artifacts you need and how to get them:
The AMD Versioned Chip Endorsement Key (VCEK) is used to sign the AMD SEV-SNP report. The VCEK certificate allows you to verify that the report was signed by a genuine AMD CPU key. There are two ways to retrieve the certificate:
- a. Obtain the VCEK certificate by running the following command ΓÇô it obtains the cert from a well-known IMDS endpoint:
+ a. Obtain the VCEK certificate by running the following command; it obtains the cert from a well-known [Azure Instance Metadata Service](/azure/virtual-machines/instance-metadata-service) (IMDS) endpoint:
   ```bash
   curl -H Metadata:true http://169.254.169.254/metadata/certification > vcek
   cat ./vcek | jq -r '.vcekCert , .certificateChain' > ./vcek.pem
   ```
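
   As a quick, optional sanity check (a hedged aside: openssl reads only the first PEM block in the file, which is the leaf VCEK certificate):

   ```bash
   # Print the subject and issuer of the leaf certificate in vcek.pem.
   openssl x509 -in ./vcek.pem -noout -subject -issuer
   ```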
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
Last updated 04/12/2023 -+ ms.devlang: azurecli
confidential-computing Quick Create Confidential Vm Portal Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal-amd.md
Last updated 3/27/2022 -+ # Quickstart: Create confidential VM on AMD in the Azure portal
connectors File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/file-system.md
+
+ Title: Connect to on-premises file systems
+description: Connect to file systems on premises from workflows in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 08/17/2023++
+# Connect to on-premises file systems from workflows in Azure Logic Apps
++
+This how-to guide shows how to access an on-premises file share from a workflow in Azure Logic Apps by using the File System connector. You can then create automated workflows that run when triggered by events in your file share or in other systems and run actions to manage your files. The connector provides the following capabilities:
+
+- Create, get, append, update, and delete files.
+- List files in folders or root folders.
+- Get file content and metadata.
+
+In this how-to guide, the example scenarios demonstrate the following tasks:
+
+- Trigger a workflow when a file is created or added to a file share, and then send an email.
+- Trigger a workflow when copying a file from a Dropbox account to a file share, and then send an email.
+
+## Limitations and known issues
+
+- The File System connector currently supports only Windows file systems on Windows operating systems.
+- Mapped network drives aren't supported.
+
+## Connector technical reference
+
+The File System connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
+
+| Logic app | Environment | Connector version |
+|--|-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector differs in the following ways: <br><br>- The built-in connector supports only Standard logic apps that run in an App Service Environment v3 with Windows plans only. <br><br>- The built-in version can connect directly to a file share and access Azure virtual networks by using a connection string without an on-premises data gateway. <br><br>For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [File System built-in connector reference](/azure/logic-apps/connectors/built-in/reference/filesystem/) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
+
+## Prerequisites
+
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* To connect to your file share, different requirements apply, based on your logic app and the hosting environment:
+
+ - Consumption logic app workflows
+
+ - In multi-tenant Azure Logic Apps, you need to meet the following requirements, if you haven't already:
+
+ 1. [Install the on-premises data gateway on a local computer](../logic-apps/logic-apps-gateway-install.md).
+
+      The File System managed connector requires that your gateway installation and file system server exist in the same Windows domain.
+
+ 1. [Create an on-premises data gateway resource in Azure](../logic-apps/logic-apps-gateway-connection.md).
+
+ 1. After you add a File System managed connector trigger or action to your workflow, select the data gateway resource that you previously created so you can connect to your file system.
+
+ - In an ISE, you don't need the on-premises data gateway. Instead, you can use the ISE-versioned File System connector.
+
+ - Standard logic app workflows
+
+ You can use the File System built-in connector or managed connector.
+
+ * To use the File System managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
+
+ * To use the File System built-in connector, your Standard logic app workflow must run in App Service Environment v3, but doesn't require the data gateway resource.
+
+* Access to the computer that has the file system you want to use. For example, if you install the data gateway on the same computer as your file system, you need the account credentials for that computer.
+
+* To follow the example scenario in this how-to guide, you need an email account from a provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review other supported email connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This example uses the Office 365 Outlook connector with a work or school account. If you use another email account, the overall steps are the same, but your UI might slightly differ.
+
+ > [!IMPORTANT]
+ >
+ > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic apps.
+ > If you have a Gmail consumer account, you can use this connector with only specific Google-approved services, or you can
+ > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application).
+ > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
+
+* For the example File System action scenario, you need a [Dropbox account](https://www.dropbox.com/), which you can sign up for free.
+
+* The logic app workflow where you want to access your file share. To start your workflow with a File System trigger, you have to start with a blank workflow. To add a File System action, start your workflow with any trigger.
+
+<a name="add-file-system-trigger"></a>
+
+## Add a File System trigger
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+
+ For more information, see [File System triggers](/connectors/filesystem/#triggers). This example continues with the trigger named **When a file is created**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+   | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+
+ The following example shows the connection information for the File System managed connector trigger:
+
+ ![Screenshot showing Consumption workflow designer and connection information for File System managed connector trigger.](media/connect-file-systems/file-system-connection-consumption.png)
+
+ The following example shows the connection information for the File System ISE-based trigger:
+
+ ![Screenshot showing Consumption workflow designer and connection information for File System ISE-based connector trigger.](media/connect-file-systems/file-system-connection-ise.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your trigger.
+
+ For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
+
+ ![Screenshot showing Consumption workflow designer and the trigger named When a file is created.](media/connect-file-systems/trigger-file-system-when-file-created-consumption.png)
+
+   1. To test your workflow, add an Outlook action that sends you an email when a file is created in the specified folder on the file system. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Consumption workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/connect-file-systems/trigger-file-system-send-email-consumption.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, select inside the trigger's edit boxes.
+ > When the dynamic content list appears, select from the available outputs.
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow sends an email about the new file.
+
+### [Standard](#tab/standard)
+
+#### Built-in connector trigger
+
+The following steps apply only to Standard logic app workflows in an App Service Environment v3 with Windows plans only.
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** built-in trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ For more information, see [File System triggers](/azure/logic-apps/connectors/built-in/reference/filesystem/#triggers). This example continues with the trigger named **When a file is added**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+
+ The following example shows the connection information for the File System built-in connector trigger:
+
+ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector trigger.](media/connect-file-systems/trigger-file-system-connection-built-in-standard.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your trigger.
+
+ For this example, select the folder path on your file system server to check for a newly added file. Specify how often you want to check.
+
+ ![Screenshot showing Standard workflow designer and information for the trigger named When a file is added.](media/connect-file-systems/trigger-when-file-added-built-in-standard.png)
+
+   1. To test your workflow, add an Outlook action that sends you an email when a file is added to the specified folder on the file system. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is added, and action named Send an email.](media/connect-file-systems/trigger-send-email-built-in-standard.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, select inside the trigger's edit boxes.
+ > After the dynamic content list and expression editor options appear, select the dynamic content
+ > list (lightning icon). When the dynamic content list appears, select from the available outputs.
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow sends an email about the new file.
+
+#### Managed connector trigger
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** managed trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+
+ For more information, see [File System triggers](/connectors/filesystem/#triggers). This example continues with the trigger named **When a file is created**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+   | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+
+ The following example shows the connection information for the File System managed connector trigger:
+
+ ![Screenshot showing Standard workflow designer and connection information for File System managed connector trigger.](media/connect-file-systems/trigger-file-system-connection-managed-standard.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your trigger.
+
+ For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
+
+ ![Screenshot showing Standard workflow designer and managed connector trigger named When a file is created.](media/connect-file-systems/trigger-when-file-created-managed-standard.png)
+
+   1. To test your workflow, add an Outlook action that sends you an email when a file is created in the specified folder on the file system. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/connect-file-systems/trigger-send-email-managed-standard.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, select inside the trigger's edit boxes.
+ > After the dynamic content list and expression editor options appear, select the dynamic content
+ > list (lightning icon). When the dynamic content list appears, select from the available outputs.
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow sends an email about the new file.
+++
+<a name="add-file-system-action"></a>
+
+## Add a File System action
+
+The example logic app workflow starts with the [Dropbox trigger](/connectors/dropbox/#triggers), but you can use any trigger that you want.
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** action that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+   For more information, see [File System actions](/connectors/filesystem/#actions). This example continues with the action named **Create file**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+   | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+
+ The following example shows the connection information for the File System managed connector action:
+
+ ![Screenshot showing connection information for File System managed connector action.](media/connect-file-systems/file-system-connection-consumption.png)
+
+ The following example shows the connection information for the File System ISE-based connector action:
+
+ ![Screenshot showing connection information for File System ISE-based connector action.](media/connect-file-systems/file-system-connection-ise.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your action.
+
+ For this example, select the folder path on your file system server to use, which is the root folder here. Enter the file name and content, based on the file uploaded to Dropbox.
+
+ ![Screenshot showing Consumption workflow designer and the File System managed connector action named Create file.](media/connect-file-systems/action-file-system-create-file-consumption.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, select inside the action's edit boxes.
+ > When the dynamic content list appears, select from the available outputs.
+
+ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Consumption workflow designer, managed connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-consumption.png)
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
+
+### [Standard](#tab/standard)
+
+#### Built-in connector action
+
+These steps apply only to Standard logic apps in an App Service Environment v3 with Windows plans only.
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** action that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+ For more information, see [File System actions](/azure/logic-apps/connectors/built-in/reference/filesystem/#actions). This example continues with the action named **Create file**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+
+ The following example shows the connection information for the File System built-in connector action:
+
+ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector action.](media/connect-file-systems/action-file-system-connection-built-in-standard.png)
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your action. For this example, follow these steps:
+
+      1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder.
+
+ 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**.
+
+ 1. After the **File content** parameter appears on the action information pane, select inside the parameter's edit box.
+
+ 1. After the dynamic content list and expression editor options appear, select the dynamic content list (lightning icon). From the list that appears, under the **When a file is created** trigger section, select **File Content**.
+
+ When you're done, the **File Content** trigger output appears in the **File content** parameter:
+
+ ![Screenshot showing Standard workflow designer and the File System built-in connector "Create file" action.](media/connect-file-systems/action-file-system-create-file-built-in-standard.png)
+
+ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, built-in connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-built-in-standard.png)
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
+
+#### Managed connector action
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **File System** action that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+ For more information, see [File System actions](/connectors/filesystem/#actions). This example continues with the action named **Create file**.
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+   | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+
+ The following example shows the connection information for the File System managed connector action:
+
+ ![Screenshot showing connection information for File System managed connector action.](media/connect-file-systems/action-file-system-connection-managed-standard.png)
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your action. For this example, follow these steps:
+
+      1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder.
+
+ 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**.
+
+ 1. After the **File content** parameter appears on the action information pane, select inside the parameter's edit box.
+
+ 1. After the dynamic content list and expression editor options appear, select the dynamic content list (lightning icon). From the list that appears, under the **When a file is created** trigger section, select **File Content**.
+
+ When you're done, the **File Content** trigger output appears in the **File content** parameter:
+
+ ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/connect-file-systems/action-file-system-create-file-managed-standard.png)
+
+ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, managed connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-managed-standard.png)
+
+1. When you're done, save your workflow.
+
+1. To test your workflow, upload a file, which triggers the workflow.
+
+If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
+++
+## Next steps
+
+* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--query customerId \ --output tsv) LOG_ANALYTICS_WORKSPACE_ID_ENC=$(printf %s $LOG_ANALYTICS_WORKSPACE_ID | base64 -w0) # Needed for the next step
- lOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \
+ LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \
--resource-group $GROUP_NAME \ --workspace-name $WORKSPACE_NAME \ --query primarySharedKey \ --output tsv)
- lOG_ANALYTICS_KEY_ENC=$(printf %s $lOG_ANALYTICS_KEY | base64 -w0) # Needed for the next step
+ LOG_ANALYTICS_KEY_ENC=$(printf %s $LOG_ANALYTICS_KEY | base64 -w0) # Needed for the next step
``` # [PowerShell](#tab/azure-powershell)
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--query customerId ` --output tsv) $LOG_ANALYTICS_WORKSPACE_ID_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($LOG_ANALYTICS_WORKSPACE_ID))# Needed for the next step
- $lOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys `
+ $LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys `
--resource-group $GROUP_NAME ` --workspace-name $WORKSPACE_NAME ` --query primarySharedKey ` --output tsv)
- $lOG_ANALYTICS_KEY_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($lOG_ANALYTICS_KEY))
+ $LOG_ANALYTICS_KEY_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($LOG_ANALYTICS_KEY))
```
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${AKS_CLUSTER_GROUP_NAME}" \ --configuration-settings "logProcessor.appLogs.destination=log-analytics" \ --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${LOG_ANALYTICS_WORKSPACE_ID_ENC}" \
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${lOG_ANALYTICS_KEY_ENC}"
+ --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${LOG_ANALYTICS_KEY_ENC}"
``` # [PowerShell](#tab/azure-powershell)
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${AKS_CLUSTER_GROUP_NAME}" ` --configuration-settings "logProcessor.appLogs.destination=log-analytics" ` --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${LOG_ANALYTICS_WORKSPACE_ID_ENC}" `
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${lOG_ANALYTICS_KEY_ENC}"
+ --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${LOG_ANALYTICS_KEY_ENC}"
```
container-apps Environment Custom Dns Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment-custom-dns-suffix.md
By default, an Azure Container Apps environment provides a DNS suffix in the format `<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`. Each container app in the environment generates a domain name based on this DNS suffix. You can configure a custom DNS suffix for your environment. > [!NOTE]
+>
> To configure a custom domain for individual container apps, see [Custom domain names and certificates in Azure Container Apps](custom-domains-certificates.md).
+>
+> If you configure a custom DNS suffix for your environment, traffic to FQDNs that use this suffix will resolve to the environment. FQDNs that use this suffix outside the environment will be unreachable from the environment.
## Add a custom DNS suffix and certificate
container-apps Ingress How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-how-to.md
Disable ingress for your container app by omitting the `ingress` configuration p
::: zone-end
+## <a name="use-additional-tcp-ports"></a>Use additional TCP ports (preview)
+
+You can expose additional TCP ports from your application. To learn more, see the [ingress concept article](ingress-overview.md#additional-tcp-ports).
+++
+# [Azure CLI](#tab/azure-cli)
+
+You can add additional TCP ports through the CLI by referencing a YAML file that contains your TCP port configurations.
+
+```azurecli
+az containerapp update \
+ --name <app-name> \
+ --resource-group <resource-group> \
+ --yaml <your-yaml-file>
+```
+
+The following is an example YAML file you can reference in the above CLI command. The configuration for the additional TCP ports is under `additionalPortMappings`.
+
+```yml
+location: northcentralus
+name: multiport-example
+properties:
+ configuration:
+ activeRevisionsMode: Single
+ ingress:
+ additionalPortMappings:
+ - exposedPort: 21025
+ external: true
+ targetPort: 1025
+ allowInsecure: false
+ external: true
+ targetPort: 1080
+ traffic:
+ - latestRevision: true
+ weight: 100
+ transport: http
+ managedEnvironmentId: <env id>
+ template:
+ containers:
+ - image: maildev/maildev
+ name: maildev
+ resources:
+ cpu: 0.25
+ memory: 0.5Gi
+ scale:
+ maxReplicas: 1
+ minReplicas: 1
+ workloadProfileName: Consumption
+type: Microsoft.App/containerApps
+```
+++
+# [Portal](#tab/portal)
+
+This feature is not supported in the Azure portal.
+++
+# [ARM template](#tab/arm-template)
+
+The following ARM template example shows how you can add additional ports to your container apps. Add each additional port under `additionalPortMappings` within the `ingress` section of `configuration` within `properties` for the container app:
+
+```json
+{
+  ...
+  "properties": {
+    ...
+    "configuration": {
+      "ingress": {
+        ...
+        "additionalPortMappings": [
+          {
+            "external": false,
+            "targetPort": 80,
+            "exposedPort": 12000
+          }
+        ]
+      }
+    }
+    ...
+  }
+}
+```
++++ ## Next steps > [!div class="nextstepaction"]
container-apps Ingress Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-overview.md
HTTP ingress adds headers to pass metadata about the client request to your cont
| `X-Forwarded-Proto` | Protocol used by the client to connect with the Container Apps service. | `http` or `https` | | `X-Forwarded-For` | The IP address of the client that sent the request. | | | `X-Forwarded-Host` | The host name the client used to connect with the Container Apps service. | |
-| `X-Forwarded-Client-Cert` | The client certificate if `clientCertificateMode` is set. | Semicolon seperated list of Hash, Cert, and Chain. For example: `Hash=....;Cert="...";Chain="...";` |
+| `X-Forwarded-Client-Cert` | The client certificate if `clientCertificateMode` is set. | Semicolon separated list of Hash, Cert, and Chain. For example: `Hash=....;Cert="...";Chain="...";` |
### <a name="tcp"></a>TCP
With TCP ingress enabled, your container app:
- Is accessible to other container apps in the same environment via its name (defined by the `name` property in the Container Apps resource) and exposed port number. - Is accessible externally via its fully qualified domain name (FQDN) and exposed port number if the ingress is set to "external".
+## <a name="additional-tcp-ports"></a>Additional TCP ports (preview)
+
+In addition to the main HTTP/TCP port for your container apps, you may expose additional TCP ports to enable applications that accept TCP connections on multiple ports. This feature is in preview.
+
+The following apply to additional TCP ports:
+- Additional TCP ports can only be external if the app itself is set as external and the container app is using a custom VNet.
+- Any externally exposed additional TCP ports must be unique across the entire Container Apps environment. This includes all external additional TCP ports, external main TCP ports, and 80/443 ports used by built-in HTTP ingress. If the additional ports are internal, the same port can be shared by multiple apps.
+- If an exposed port is not provided, the exposed port will default to match the target port.
+- Each target port must be unique, and the same target port cannot be exposed on different exposed ports.
+- There is a maximum of 5 additional ports per app. If additional ports are required, please open a support request.
+- Only the main ingress port supports built-in HTTP features such as CORS and session affinity. When running HTTP on top of the additional TCP ports, these built-in features are not supported.
+
+Visit the [how to article on ingress](ingress-how-to.md#use-additional-tcp-ports) for more information on how to enable additional ports for your container apps.
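+
+As a minimal sketch of the shape of this configuration (reusing the preview YAML schema from the full example in the how-to article, with placeholder port numbers), an ingress section with one additional internal TCP port might look like this:
+
+```yml
+ingress:
+  external: true
+  targetPort: 8080           # main ingress port
+  additionalPortMappings:
+    - external: false        # internal-only; internal ports can be shared by multiple apps
+      targetPort: 5701       # port the container listens on
+      exposedPort: 15701     # optional; defaults to the target port when omitted
+```
+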
+ ## Domain names You can access your app in the following ways: -- The default fully-qualified domain name (FQDN): Each app in a Container Apps environment is automatically assigned an FQDN based on the environment's DNS suffix. To customize an environment's DNS suffix, see [Custom environment DNS Suffix](environment-custom-dns-suffix.md).
+- The default fully qualified domain name (FQDN): Each app in a Container Apps environment is automatically assigned an FQDN based on the environment's DNS suffix. To customize an environment's DNS suffix, see [Custom environment DNS Suffix](environment-custom-dns-suffix.md).
- A custom domain name: You can configure a custom DNS domain for your Container Apps environment. For more information, see [Custom domain names and certificates](./custom-domains-certificates.md). - The app name: You can use the app name for communication between apps in the same environment.
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
The name of the resource group created in the Azure subscription where your envi
In addition to the [Azure Container Apps billing](./billing.md), you're billed for: -- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for egress. If you need more IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
+- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for egress if using an internal or external environment, plus one standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for ingress if using an external environment. If you need more public IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).
- Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.
container-instances Container Instances Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-application-gateway.md
az network application-gateway create \
--public-ip-address myAGPublicIPAddress \ --vnet-name myVNet \ --subnet myAGSubnet \
- --servers "$ACI_IP"
+ --servers "$ACI_IP" \
--priority 100 ```
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md
To run certain compute-intensive workloads on Azure Container Instances, deploy
This article shows how to add GPU resources when you deploy a container group by using a [YAML file](container-instances-multi-container-yaml.md) or [Resource Manager template](container-instances-multi-container-group.md). You can also specify GPU resources when you deploy a container instance using the Azure portal. > [!IMPORTANT]
-> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](https://learn.microsoft.com/azure/virtual-machines/nc-series-retirement) and [NCv2 Series](https://learn.microsoft.com/azure/virtual-machines/ncv2-series-retirement) Although V100 SKUs will be available, it is receommended to use Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](https://learn.microsoft.com/azure/aks/aks-migration).
+> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](../virtual-machines/nc-series-retirement.md) and [NCv2 Series](../virtual-machines/ncv2-series-retirement.md). Although V100 SKUs will be available, it is recommended to use Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](../aks/aks-migration.md).
> [!IMPORTANT] > This feature is currently in preview, and some [limitations apply](#preview-limitations). Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA).
container-instances Container Instances Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-reference-yaml.md
The following tables describe the values you need to set in the schema.
| name | string | No | Name of the header. | | value | string | No | Value of the header. |
+> [!IMPORTANT]
+> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](../virtual-machines/nc-series-retirement.md) and [NCv2 Series](../virtual-machines/ncv2-series-retirement.md). Although V100 SKUs will be available, it is recommended to use Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](../aks/aks-migration.md).
+ ### GpuResource object | Name | Type | Required | Value | | - | - | - | - | | count | integer | Yes | The count of the GPU resource. |
-| sku | enum | Yes | The SKU of the GPU resource. - K80, P100, V100 |
+| sku | enum | Yes | The SKU of the GPU resource. - V100 |
## Next steps
container-instances Container Instances Resource And Quota Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-resource-and-quota-limits.md
The following maximum resources are available to a container group deployed with
| V100 | 1 | 6 | 112 | 50 | | V100 | 2 | 12 | 224 | 50 | | V100 | 4 | 24 | 448 | 50 |
-<!
-| K80 | 1 | 6 | 56 | 50 |
-| K80 | 2 | 12 | 112 | 50 |
-| K80 | 4 | 24 | 224 | 50 |
-| P100, V100 | 1 | 6 | 112 | 50 |
-| P100, V100 | 2 | 12 | 224 | 50 |
-| P100, V100 | 4 | 24 | 448 | 50 |
->
## Next steps
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
Examples in this article are formatted for the Bash shell. If you prefer another
## Deploy to new virtual network > [!NOTE]
-> If you are using port 29 to have only 3 IP addresses, we recommend always to go one range above or below. For example, use port 28 so you can have at least 1 or more IP buffer per container group. By doing this, you can avoid containers in stuck, not able start or not able to stop states.
+> If you're using a subnet IP range of /29, which provides only 3 usable IP addresses, we recommend always going one range larger (never smaller). For example, use a /28 subnet IP range so you have at least one spare IP address per container group. By doing this, you can avoid containers getting stuck in states where they can't start, restart, or stop.
To deploy to a new virtual network and have Azure create the network resources for you automatically, specify the following when you execute [az container create][az-container-create]:
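
For example, a minimal sketch (resource names, image, and address prefixes are placeholders to adjust):

```azurecli
# Deploy a container instance into a new virtual network.
# Azure creates the virtual network and subnet when you pass new names.
az container create \
  --name appcontainer \
  --resource-group myResourceGroup \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --vnet aci-vnet \
  --vnet-address-prefix 10.0.0.0/16 \
  --subnet aci-subnet \
  --subnet-address-prefix 10.0.0.0/24
```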
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
notation verify $IMAGE ``` Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output message.+
+## Next steps
+
+See [Ratify on Azure: Allow only signed images to be deployed on AKS with Notation and Ratify](https://github.com/deislabs/ratify/blob/main/docs/quickstarts/ratify-on-azure.md).
cosmos-db Cmk Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cmk-troubleshooting-guide.md
A troubleshooting solution, for example, would be to create a new identity with
After updating the account's default identity, you need to wait up to one hour for the account to exit the revoke state. If the issue isn't resolved after more than two hours, contact customer service.
-## Customer Managed Key does not exist
+## Azure Key Vault Resource not found
### Reason for error?
-You see this error when the customer managed key isn't found on the specified Azure Key Vault.
+You see this error when the Azure Key Vault or the specified key isn't found.
### Troubleshooting
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md
In Azure Cosmos DB, you must explicitly configure the cross-region data replicat
**Periodic mode Backups**: By default, periodic mode account backups will be stored in geo-redundant storage. For periodic backup modes, you can configure data redundancy at the account level. There are three redundancy options for the backup storage. They are local redundancy, zone redundancy, or geo redundancy. For more information, see [periodic backup/restore](periodic-backup-restore-introduction.md).
+## Residency requirements for analytical store
+
+Analytical store is resident by default as it is stored in either locally redundant or zone redundant storage. To learn more, see the [analytical store](analytical-store-introduction.md) article.
++ ## Use Azure Policy to enforce the residency requirements If you have data residency requirements that require you to keep all your data in a single Azure region, you can enforce zone-redundant or locally redundant backups for your account by using an Azure Policy. You can also enforce a policy that the Azure Cosmos DB accounts are not geo-replicated to other regions.
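
For illustration, a custom policy rule (a sketch, not a built-in definition) that denies Azure Cosmos DB accounts with replicas outside a single allowed region might look like the following, assuming the `Microsoft.DocumentDB/databaseAccounts/locations[*].locationName` policy alias; region names may need normalization (casing and spaces) in practice:

```json
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.DocumentDB/databaseAccounts"
      },
      {
        "count": {
          "field": "Microsoft.DocumentDB/databaseAccounts/locations[*]",
          "where": {
            "field": "Microsoft.DocumentDB/databaseAccounts/locations[*].locationName",
            "notEquals": "[parameters('allowedLocation')]"
          }
        },
        "greater": 0
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
```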
cosmos-db How To Setup Customer Managed Keys Existing Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-existing-accounts.md
+
+ Title: Configure customer-managed keys on existing accounts
+
+description: Store customer-managed keys in Azure Key Vault to use for encryption in your existing Azure Cosmos DB account with access control.
+++ Last updated : 08/17/2023++
+ms.devlang: azurecli
++
+# Configure customer-managed keys for your existing Azure Cosmos DB account with Azure Key Vault (Preview)
++
+Enabling a second layer of encryption for data at rest using [Customer Managed Keys](./how-to-setup-customer-managed-keys.md) while creating a new Azure Cosmos DB account has been generally available for some time now. As a natural next step, we now have the capability to enable CMK on existing Azure Cosmos DB accounts.
+
+This feature eliminates the need for data migration to a new account to enable CMK. It helps to improve customers' security and compliance posture.
+
+> [!NOTE]
+> Currently, enabling customer-managed keys on existing Azure Cosmos DB accounts is in preview. This preview is provided without a service-level agreement. Certain features of this preview may not be supported or may have constrained capabilities. For more information, see [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Enabling CMK kicks off a background, asynchronous process to encrypt all the existing data in the account, while new incoming data is encrypted before persisting. There's no need to wait for the asynchronous operation to succeed. The enablement process consumes unused/spare RUs so that it doesn't affect your read/write workloads. For capacity planning after your account is encrypted, refer to [this section](./how-to-setup-customer-managed-keys.md?tabs=azure-powershell#how-do-customer-managed-keys-influence-capacity-planning).
+
+## Get started by enabling CMK on your existing accounts
+
+### Prerequisites
+
+All the prerequisite steps needed while configuring Customer Managed Keys for new accounts are applicable to enabling CMK on your existing account. Refer to the steps [here](./how-to-setup-customer-managed-keys.md?tabs=azure-portal#prerequisites).
+
+### Steps to enable CMK on your existing account
+
+To enable CMK on an existing account, update the account with an ARM template setting a Key Vault key identifier in the `keyVaultKeyUri` property, just like you would when enabling CMK on a new account. This step can be done by issuing a PATCH call with the following payload:
+
+```
+ {
+ "properties": {
+ "keyVaultKeyUri": "<key-vault-key-uri>"
+ }
+ }
+```
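+
+For example, you can issue the PATCH call with `az rest` (a sketch; the resource path and API version are placeholders to adjust for your subscription and account):
+
+```azurecli
+az rest --method patch \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>?api-version=2023-04-15" \
+  --body '{ "properties": { "keyVaultKeyUri": "<key-vault-key-uri>" } }'
+```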
+
+Alternatively, the following Azure CLI command enables CMK and waits for the completion of data encryption:
+
+```azurecli
+ az cosmosdb update --name "testaccount" --resource-group "testrg" --key-uri "https://keyvaultname.vault.azure.net/keys/key1"
+```
+
+### Steps to enable CMK on your existing Azure Cosmos DB account with PITR or analytical store enabled
+
+To enable CMK on an existing account that has continuous backup and point-in-time restore enabled, you need to follow some extra steps. Follow steps 1 through 5, and then follow the instructions to enable CMK on an existing account.
+
+> [!NOTE]
+> System-assigned identity and continuous backup mode are currently in public preview and may change in the future. Currently, only user-assigned managed identity is supported for enabling CMK on continuous backup accounts.
+++
+1. Configure a managed identity for your Azure Cosmos DB account. See [Configure managed identities with Azure AD for your Azure Cosmos DB account](./how-to-setup-managed-identity.md).
+
+1. Update the Azure Cosmos DB account to set the default identity to the managed identity added in the previous step.
+
+   **For system-assigned managed identity:**
+ ```
+   az cosmosdb update --resource-group $resourceGroupName --name $accountName --default-identity "SystemAssignedIdentity=subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/MyRG/providers/Microsoft.ManagedIdentity/systemAssignedIdentities/MyID"
+ ```
+
+   **For user-assigned managed identity:**
+
+ ```
+ az cosmosdb update -n $sourceAccountName -g $resourceGroupName --default-identity "UserAssignedIdentity=subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/MyRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyID"
+ ```
+
+1. Configure Azure Key Vault as described in the documentation [here](./how-to-setup-customer-managed-keys.md?tabs=azure-cli#configure-your-azure-key-vault-instance).
+
+1. Add an [access policy](./how-to-setup-customer-managed-keys.md?tabs=azure-cli#using-a-managed-identity-in-the-azure-key-vault-access-policy) in the key vault for the default identity that you set in the previous step.
+
+1. Update the Azure Cosmos DB account to set the Key Vault URI. This update triggers enabling CMK on the account.
+ ```
+   az cosmosdb update --name $accountName --resource-group $resourceGroupName --key-uri $keyVaultKeyURI
+ ```
+## Known limitations
+
+- Enabling CMK is available only at the Azure Cosmos DB account level, not at the collection level.
+- We don't support enabling CMK on existing Azure Cosmos DB for Apache Cassandra accounts.
+- We also don't support enabling CMK on existing accounts that are enabled for materialized views or full-fidelity change feed (FFCF).
+- Ensure that the account doesn't have documents with IDs larger than 990 bytes before enabling CMK. Otherwise, you get an error due to the maximum supported limit of 1024 bytes after encryption.
+- During encryption of existing data, [control plane](./audit-control-plane-logs.md) actions such as "add region" are blocked. These actions are unblocked and can be used right after the encryption is complete.
+
+## Monitor the progress of the resulting encryption
+
+Enabling CMK on an existing account is an asynchronous operation that kicks off a background task that encrypts all existing data. As such, the REST API request to enable CMK provides in its response an "Azure-AsyncOperation" URL. Polling this URL with GET requests returns the status of the overall operation, which eventually reports **Succeeded**. This mechanism is fully described in the [asynchronous operations](../azure-resource-manager/management/async-operations.md) article.
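+
+For example, a sketch of polling the operation status with the Azure CLI, where `<async-operation-url>` is the value of the `Azure-AsyncOperation` response header:
+
+```azurecli
+# Returns "InProgress" while encryption runs, then "Succeeded".
+az rest --method get --url "<async-operation-url>" --query status --output tsv
+```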
+
+The Azure Cosmos DB account can continue to be used, and data can continue to be written, without waiting for the asynchronous operation to succeed. The CLI command for enabling CMK waits for the completion of data encryption.
+
+If you have further questions, reach out to Microsoft Support.
+
+## FAQs
+
+**What are the factors on which the encryption time depends?**
+
+Enabling CMK is an asynchronous operation and depends on sufficient unused RUs being available. We suggest enabling CMK during off-peak hours and, if applicable, increasing RUs beforehand to speed up encryption. The encryption time is also a direct function of data size.
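+
+For example, a sketch of raising provisioned throughput on one container beforehand (placeholder names; use the command that matches your API and throughput model):
+
+```azurecli
+az cosmosdb sql container throughput update \
+  --account-name <account-name> \
+  --resource-group <resource-group> \
+  --database-name <database-name> \
+  --name <container-name> \
+  --throughput 10000
+```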
+
+**Do we need to brace ourselves for downtime?**
+
+Enabling CMK kicks off a background, asynchronous process to encrypt all the data. There's no need to wait for the asynchronous operation to succeed. The Azure Cosmos DB account is available for reads and writes and there's no need for a downtime.
+
+**Can you bump up the RUs once CMK has been triggered?**
+
+It's suggested to bump up the RUs before you trigger CMK. Once CMK is triggered, some control plane operations are blocked until the encryption is complete. This block may prevent you from increasing the RUs after CMK is triggered.
+
+**Is there a way to reverse the encryption or disable encryption after triggering CMK?**
+
+Once the data encryption process using CMK is triggered, it can't be reverted.
+
+**Will enabling encryption using CMK on an existing account have an impact on data size and read/writes?**
+
+As you would expect, by enabling CMK there's a slight increase in data size and RUs to accommodate extra encryption/decryption processing.
+
+**Should you back up the data before enabling CMK?**
+
+Enabling CMK doesn't pose any threat of data loss. In general, we suggest you back up the data regularly.
+
+**Are old backups taken as a part of periodic backup encrypted?**
+
+No. Old periodic backups aren't encrypted. New backups generated after CMK is enabled are encrypted.
+
+**What is the behavior on existing accounts that are enabled for continuous backup (PITR)?**
+
+When CMK is turned on, the encryption is turned on for continuous backups as well. All restores going forward are encrypted.
+
+**What is the behavior if CMK is enabled on a PITR-enabled account and we restore the account to a time when CMK was disabled?**
+
+In this case CMK is explicitly enabled on the restored target account for the following reasons:
+- Once CMK is enabled on the account, there's no option to disable CMK.
+- This behavior is in line with the current restore design for CMK-enabled accounts with periodic backup.
+
+**What happens when the user revokes the key while the CMK migration is in progress?**
+
+The state of the key is checked when CMK encryption is triggered. If the key in Azure Key Vault is in good standing, the encryption starts and the process completes without further checks. Even if the key is revoked, or Azure Key Vault is deleted or unavailable afterwards, the encryption process succeeds.
+
+**Can we enable CMK encryption on our existing production account?**
+
+Yes. Since the capability is currently in preview, we recommend testing all scenarios first on nonproduction accounts; once you're comfortable, you can consider production accounts.
+
+## Next steps
+
+* Learn more about [data encryption in Azure Cosmos DB](database-encryption-at-rest.md).
+* You can choose to add a second layer of encryption with your own keys. To learn more, see the [customer-managed keys](how-to-setup-cmk.md) article.
+* For an overview of Azure Cosmos DB security and the latest improvements, see [Azure Cosmos DB database security](database-security.md).
+* For more information about Microsoft certifications, see the [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
Data stored in your Azure Cosmos DB account is automatically and seamlessly encr
You must store customer-managed keys in [Azure Key Vault](../key-vault/general/overview.md) and provide a key for each Azure Cosmos DB account that is enabled with customer-managed keys. This key is used to encrypt all the data stored in that account. > [!NOTE]
-> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation.
+> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation. Enabling customer-managed keys on existing accounts is available in preview. For more details, see [how to enable customer-managed keys on existing accounts](how-to-setup-customer-managed-keys-existing-accounts.md).
> [!WARNING] > The following field names are reserved on Cassandra API tables in accounts using Customer-managed Keys:
Here, create a new key using Azure Key Vault and retrieve the unique identifier.
:::image type="content" source="media/how-to-setup-customer-managed-keys/new-customer-managed-key.png" lightbox="media/how-to-setup-customer-managed-keys/new-customer-managed-key.png" alt-text="Screenshot of the dialog to create a new key.":::
- > [!TIP]
- > Alternatively, you can use the Azure CLI to generate a key with:
- >
- > ```azurecli
- > az keyvault key create \
- > --vault-name <name-of-key-vault> \
- > --name <name-of-key>
- > ```
- >
- > For more information on managing a key vault with the Azure CLI, see [manage Azure Key Vault with the Azure CLI](../key-vault/general/manage-with-cli2.md).
+ > [!TIP]
+ > Alternatively, you can use the Azure CLI to generate a key with:
+ >
+ > ```azurecli
+ > az keyvault key create \
+ > --vault-name <name-of-key-vault> \
+ > --name <name-of-key>
+ > ```
+ >
+ > For more information on managing a key vault with the Azure CLI, see [manage Azure Key Vault with the Azure CLI](../key-vault/general/manage-with-cli2.md).
1. After the key is created, select the newly created key and then its current version.
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md
When removing indexed paths, you should group all your changes into one indexing
When you drop an indexed path, the query engine will immediately stop using it, and will do a full scan instead. > [!NOTE]
-> Where possible, you should always try to group multiple indexing changes into one single indexing policy modification
+> Where possible, you should always try to group multiple index removals into a single indexing policy modification.
+
+> [!IMPORTANT]
+> Removing an index takes effect immediately, whereas adding a new index takes some time because it requires an indexing transformation. When replacing one index with another (for example, replacing a single property index with a composite index), make sure to add the new index first and then wait for the index transformation to complete **before** you remove the previous index from the indexing policy. Otherwise, you'll negatively affect your ability to query against the previous index and may break any active workloads that reference it.
## Indexing policies and TTL
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
To get started with intra-account offline container copy for NoSQL and Cassandra
### API for MongoDB
-To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for the **Intra-account offline container copy (MongoDB)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription.
+To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for the **Intra-account offline collection copy (MongoDB)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription.
<a name="how-to-do-container-copy"></a>
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Today's applications are required to be highly responsive and always online. To
Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Azure Cosmos DB offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
+Use Retrieval Augmented Generation (RAG) to bring the most semantically relevant data to enrich your AI-powered applications built with Azure OpenAI models like GPT-3.5 and GPT-4. For more information, see [Retrieval Augmented Generation (RAG) with Azure Cosmos DB](rag-data-openai.md).
+ App development is faster and more productive thanks to: - Turnkey multi region data distribution anywhere in the world - Open source APIs - SDKs for popular languages.
+- Retrieval Augmented Generation that brings your data to Azure OpenAI to enrich your AI-powered applications
As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
End-to-end database management, with serverless and automatic scaling matching y
## Solutions that benefit from Azure Cosmos DB
-[Web, mobile, gaming, and IoT application](use-cases.md) that handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real response times for various data will benefit from Azure Cosmos DB. Azure Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications).
+[Web, mobile, gaming, and IoT applications](use-cases.md) that handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real response times benefit from Azure Cosmos DB. Azure Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications).
## Next steps
cosmos-db Migrate With Mongo Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migrate-with-mongo-tools.md
+
+ Title: Migrate MongoDB offline to Azure Cosmos DB for MongoDB vCore, using MongoDB native tools
+description: Learn how MongoDB native tools can be used to migrate small datasets from MongoDB instances to Azure Cosmos DB for MongoDB vCore
+++++++ Last updated : 08/26/2021++
+# Migrate MongoDB to Azure Cosmos DB for MongoDB vCore offline using MongoDB native tools
++
+You can use MongoDB native tools to perform an offline (one-time) migration of databases from an on-premises or cloud instance of MongoDB to Azure Cosmos DB for MongoDB vCore.
+
+In this guide, you migrate a dataset in MongoDB hosted in an Azure Virtual Machine to Azure Cosmos DB for MongoDB vCore by using MongoDB native tools. The MongoDB native tools are a set of binaries that facilitate data manipulation on an existing MongoDB instance. The focus of this doc is on migrating data out of a MongoDB instance using *mongoexport/mongoimport* or *mongodump/mongorestore*. Since the native tools connect to MongoDB using connection strings, you can run the tools anywhere; however, we recommend running them within the same network as the MongoDB instance to avoid firewall issues.
+
+The MongoDB native tools can move data only as fast as the host hardware allows; the native tools can be the simplest solution for small datasets where total migration time isn't a concern. [MongoDB Spark connector](https://docs.mongodb.com/spark-connector/current/), or [Azure Data Factory (ADF)](../../../data-factory/connector-azure-cosmos-db-mongodb-api.md) can be better alternatives if you need a scalable migration pipeline.
++
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+* [Complete the pre-migration](../pre-migration-steps.md) to create a list of incompatibilities and warnings, if any.
+* [Create an Azure Cosmos DB for MongoDB vCore account](./quickstart-portal.md#create-a-cluster).
+ * [Collect the Azure Cosmos DB for MongoDB vCore credentials](./quickstart-portal.md#get-cluster-credentials)
+ * [Configure Firewall Settings on Azure Cosmos DB for MongoDB vCore](./security.md#network-security-options)
+* Log in to your MongoDB instance
+ * **Ensure that your MongoDB native tools version matches your existing MongoDB instance.**
+ * If your MongoDB instance has a different version than Azure Cosmos DB for MongoDB vCore, then **install both MongoDB native tool versions and use the appropriate tool version for MongoDB and Azure Cosmos DB for MongoDB vCore, respectively.**
+ * Add a user with `readWrite` permissions, unless one already exists. Later in this tutorial, provide this username/password to the *mongoexport* and *mongodump* tools.
+* [Download and install the MongoDB native tools from this link](https://www.mongodb.com/try/download/database-tools).
+
+## Choose the proper MongoDB native tool
+
+![Table for selecting the best MongoDB native tool.](./media/tutorial-mongotools-cosmos-db/mongodb-native-tool-selection-table.png)
+
+* *mongoexport/mongoimport* is the best pair of migration tools for migrating a subset of your MongoDB database.
+ * *mongoexport* exports your existing data to a human-readable JSON or CSV file. *mongoexport* takes an argument specifying the subset of your existing data to export.
+ * *mongoimport* opens a JSON or CSV file and inserts the content into the target database instance (Azure Cosmos DB for MongoDB vCore in this case).
+ * JSON and CSV aren't compact formats; you may incur excess network charges as *mongoimport* sends data to Azure Cosmos DB for MongoDB vCore.
+* *mongodump/mongorestore* is the best pair of migration tools for migrating your entire MongoDB database. The compact BSON format makes more efficient use of network resources as the data is inserted into Azure Cosmos DB for MongoDB vCore.
+ * *mongodump* exports your existing data as a BSON file.
+ * *mongorestore* imports your BSON file dump into Azure Cosmos DB for MongoDB vCore.
+* As an aside, if you simply have a small JSON file that you want to import into Azure Cosmos DB for MongoDB vCore, the *mongoimport* tool is a quick solution for ingesting the data.
+++
+## Perform the migration
+
+Choose which database(s) and collection(s) you would like to migrate. In this example, we're migrating the *samples_friends* collection in the *Samples* database from MongoDB to Azure Cosmos DB for MongoDB vCore.
+
+The rest of this section guides you through using the pair of tools you selected in the previous section.
+
+### *mongoexport/mongoimport*
+
+1. To export the data from the source MongoDB instance, open a terminal on the MongoDB instance machine. If it's a Linux machine, type
+
+ ```bash
+ mongoexport --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db Samples --collection samples_friends --out Samples.json
+ ```
+
+ On Windows, the executable is `mongoexport.exe`. *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the properties of your existing MongoDB database instance.
+
+ You may also choose to export only a subset of the MongoDB dataset by adding an additional filter argument:
+
+ ```bash
+ mongoexport --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db Samples --collection samples_friends --out Samples.json --query '{"field1":"value1"}'
+ ```
+
+ Only documents that match the filter `{"field1":"value1"}` are exported.
+
+ Once you execute the call, you should see that a `Samples.json` file is produced:
++
+1. You can use the same terminal to import `Samples.json` into Azure Cosmos DB for MongoDB vCore. If you're running `mongoimport` on a Linux machine, type
+
+ ```bash
+ mongoimport --host HOST:PORT -u USERNAME -p PASSWORD --db Samples --collection importedQuery --ssl --type json --writeConcern="{w:0}" --file Samples.json
+ ```
+
+ On Windows, the executable is `mongoimport.exe`. *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the Azure Cosmos DB for MongoDB vCore credentials you collected earlier.
+1. **Monitor** the terminal output from *mongoimport*. You should see that it prints lines of text to the terminal containing updates on the import status:
++
+### *mongodump/mongorestore*
+
+1. To create a BSON data dump of your MongoDB instance, open a terminal on the MongoDB instance machine. If it's a Linux machine, type
+
+ ```bash
+ mongodump --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db Samples --collection samples_friends --out Samples-dump
+ ```
+
+ *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the properties of your existing MongoDB database instance. You should see that a `Samples-dump` directory is produced and that the directory structure of `Samples-dump` reproduces the resource hierarchy (database and collection structure) of your source MongoDB instance. Each collection is represented by a BSON file:
+
+1. You can use the same terminal to restore the contents of `Samples-dump` into Azure Cosmos DB for MongoDB vCore. If you're running `mongorestore` on a Linux machine, type
+
+ ```bash
+ mongorestore --host HOST:PORT --authenticationDatabase admin -u USERNAME -p PASSWORD --db Samples --collection importedQuery --writeConcern="{w:0}" --ssl Samples-dump/Samples/samples_friends.bson
+ ```
+
+ On Windows, the executable is `mongorestore.exe`. *HOST*, *PORT*, *USERNAME*, and *PASSWORD* should be filled in based on the Azure Cosmos DB for MongoDB vCore credentials you collected earlier.
+1. **Monitor** the terminal output from *mongorestore*. You should see that it prints lines to the terminal with updates on the migration status:
++
+## Next steps
+
+- Read more about [feature compatibility with MongoDB](compatibility.md).
+- Get started by [creating an account](quickstart-portal.md).
+
cosmos-db Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md
This document describes the various options to lift and shift your MongoDB workl
- *mongoexport* takes an argument specifying the subset of your existing data to export. - *mongoimport* opens a JSON or CSV file and inserts the content into the target database instance (Azure Cosmos DB in this case.). - Since JSON and CSV aren't compact formats, you may incur excess network charges as *mongoimport* sends data to Azure Cosmos DB.-- Here's how you can [migrate data to Azure Cosmos DB for MongoDB vCore using the native MongoDB tools](../tutorial-mongotools-cosmos-db.md#perform-the-migration).
+- Here's how you can [migrate data to Azure Cosmos DB for MongoDB vCore using the native MongoDB tools](./migrate-with-mongo-tools.md).
## Data migration using Azure Databricks (Offline/Online)
cosmos-db Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/troubleshoot-common-issues.md
+
+ Title: Troubleshoot common errors in Azure Cosmos DB for MongoDB vCore
+description: This doc discusses the ways to troubleshoot common issues encountered in Azure Cosmos DB for MongoDB vCore.
+++ Last updated : 08/11/2023++++
+# Troubleshoot common issues in Azure Cosmos DB for MongoDB vCore
+
+This guide is tailored to assist you in resolving issues you may encounter when using Azure Cosmos DB for MongoDB vCore. The guide provides solutions for connectivity problems, error scenarios, and optimization challenges, offering practical insights to improve your experience.
+
+>[!Note]
+> These solutions are general guidelines and may require specific configuration based on individual situations. Always refer to official documentation and support resources for the most accurate and up-to-date information.
+
+## Common errors and solutions
+
+### Unable to Connect to Azure Cosmos DB for MongoDB vCore - Timeout error
+This issue might occur when the cluster does not have the correct firewall rule(s) enabled. If you're trying to access the cluster from a non-Azure IP range, you need to add extra firewall rules. Refer to [Security options and features - Azure Cosmos DB for MongoDB vCore](./security.md#network-security-options) for detailed steps. Firewall rules can be configured in the portal's Networking setting for the cluster. Options include adding a known IP address/range or enabling public IP access.
+++
+### Unable to Connect with DNSClient.DnsResponseException Error
+#### Debugging Connectivity Issues:
+Windows users: <br>
+Psping doesn't work against the cluster; use nslookup instead. If nslookup confirms the cluster is reachable and discoverable, network issues are unlikely.
+
+Unix users: <br>
+For socket or network-related exceptions, network connectivity issues might be hindering the application from establishing a connection with the Azure Cosmos DB Mongo API endpoint.
+
+To check connectivity, follow these steps:
+```bash
+nc -v <accountName>.documents.azure.com 10250
+```
+If a TCP connection to port 10250/10255 fails, an environment firewall may be blocking the Azure Cosmos DB connection. In that case, submit a support ticket using the link at the bottom of this page.
+++
+#### Verify your connection string:
+Only use the connection string provided in the portal. Avoid using any variations. In particular, connection strings that use the mongodb+srv:// protocol or the c. prefix aren't recommended. If the issue persists, share application/client-side driver logs for debugging connectivity issues with the team by submitting a support ticket.
++
+## Next steps
+If you've completed all the troubleshooting steps and haven't been able to discover a solution for your issue, kindly consider submitting a [Support Ticket](https://azure.microsoft.com/support/create-ticket/).
+
cosmos-db Tutorial Vector Search In Ai Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/tutorial-vector-search-in-ai-apps.md
+
+ Title: Build AI Apps with Azure Cosmos DB for MongoDB vCore Vector Search
+
+description: Enhance AI-powered Applications with Retrieval Augmented Generation (RAG) using Azure Cosmos DB for MongoDB vCore Vector Search.
++++++ Last updated : 08/22/2023++
+# AI Apps with Azure Cosmos DB for MongoDB vCore Vector Search
++
+## Introduction
+
+Large Language Models (LLMs) available in Azure OpenAI are potent tools that can elevate the capabilities of your AI-driven applications. To fully unleash the potential of LLMs, giving them access to timely and relevant data from your application's data store is crucial. This process, known as Retrieval Augmented Generation (RAG), can be seamlessly accomplished using Azure Cosmos DB. In this tutorial, we delve into the core concepts of RAG and provide links to tutorials and sample code that exemplify powerful RAG strategies using Azure Cosmos DB for MongoDB vCore vector search.
+
+Retrieval Augmented Generation (RAG) elevates AI-powered applications by incorporating external knowledge and data into model inputs. With Azure Cosmos DB for MongoDB vCore's vector search, this process becomes seamless, ensuring that the most pertinent information is effortlessly integrated into your AI models. By applying the power of [embeddings](../../../ai-services/openai/tutorials/embeddings.md) and vector search, you can provide your AI applications with the context they need to excel. Through the provided tutorials and code samples, you can become proficient in harnessing RAG to create smarter and more context-aware AI solutions.
+
+## Understanding Retrieval Augmented Generation (RAG)
+
+Retrieval Augmented Generation harnesses external knowledge and models to efficiently manage custom data or domain-specific expertise. This involves extracting pertinent information from an external data source and seamlessly integrating it into the model's input through prompt engineering. A robust approach is essential to identify the most pertinent data from the external source within the [token limitations of a request](../../../ai-services/openai/quotas-limits.md). This limitation is elegantly addressed by using embeddings, which convert data into vectors, capturing the semantic essence of the text and enabling context comprehension beyond simple keywords.
+
+## What is vector search?
+
+[Vector search](./vector-search.md) is an approach that enables the discovery of analogous items based on shared data characteristics, deviating from the necessity for precise matches within a property field. This method proves invaluable in various applications like text similarity searches, image association, recommendation systems, and anomaly detection. Its functionality revolves around the utilization of vector representations (sequences of numerical values) generated from your data via machine learning models or embeddings APIs. Examples of such APIs encompass [Azure OpenAI Embeddings](/azure/ai-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). The technique gauges the disparity between your query vector and the data vectors. The data vectors that exhibit the closest proximity to your query vector are identified as semantically akin.
++
+## Utilizing Vector Search with Azure Cosmos DB for MongoDB vCore
+
+RAG's power is truly harnessed through the native vector search capability within Azure Cosmos DB for MongoDB vCore. This enables a seamless fusion of AI-focused applications with stored data in Azure Cosmos DB. Vector search optimally stores, indexes, and searches high-dimensional vector data directly within Azure Cosmos DB for MongoDB vCore alongside other application data. This eliminates the need to migrate data to costlier alternatives for vector search functionality.
+
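+As a minimal sketch of what this looks like in practice, the following assumes a vCore cluster connection string and a `products` collection holding 1536-dimensional embeddings in a `contentVector` field; all names and index options are illustrative placeholders:
+
+```python
+from pymongo import MongoClient
+
+client = MongoClient("<vcore-connection-string>")  # placeholder
+db = client["retaildb"]
+
+# One-time setup: create an IVF vector index over the embedding field.
+db.command({
+    "createIndexes": "products",
+    "indexes": [{
+        "name": "vectorSearchIndex",
+        "key": {"contentVector": "cosmosSearch"},
+        "cosmosSearchOptions": {
+            "kind": "vector-ivf",
+            "numLists": 1,        # tune for your data volume
+            "similarity": "COS",  # cosine similarity
+            "dimensions": 1536,
+        },
+    }],
+})
+
+# Query: return the five documents closest to a query embedding.
+query_vector = [0.01] * 1536  # placeholder; use your embeddings API output
+pipeline = [{"$search": {"cosmosSearch": {
+    "vector": query_vector, "path": "contentVector", "k": 5}}}]
+for doc in db["products"].aggregate(pipeline):
+    print(doc["_id"])
+```
+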
+## Code samples and tutorials
+
+- [**.NET Retail Chatbot Demo**](https://github.com/AzureCosmosDB/VectorSearchAiAssistant/tree/mongovcorev2): Learn how to build a chatbot using .NET that demonstrates RAG's potential in a retail context.
+- [**.NET Tutorial - Recipe Chatbot**](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore): Walk through creating a recipe chatbot using .NET, showcasing RAG's application in a culinary scenario.
+- [**Python Notebook Tutorial**](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore) - Azure Product Chatbot: Explore a Python notebook tutorial that guides you through constructing an Azure product chatbot, highlighting RAG's benefits.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Introduction to Azure Cosmos DB for MongoDB vCore](introduction.md)
+
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/certificate-based-authentication.md
Last updated 06/11/2019 -+ # Certificate-based authentication for an Azure AD identity to access keys from an Azure Cosmos DB account
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md
For certain scenarios, the effects of a delete by partition key operation isn't
- Aggregate queries that use the index - for example, COUNT queries - that are issued during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete. - Queries issued against the [analytical store](../analytical-store-introduction.md) during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.-- [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It isn't recommended to use this preview feature if you have a scenario that requires continuous backup.
+- [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It isn't recommended to use this preview feature if you have a scenario that requires continuous backup.
+
+### Limitations
+- Deletion using partial [hierarchical partition keys](../hierarchical-partition-keys.md) isn't supported. This feature permits the deletion of items only by the complete partition key, down to the last level. For example, consider a partition key consisting of three hierarchical levels: country, state, and city. Delete by partition key works when you specify the complete key, namely country/state/city; attempting to delete using intermediate partition keys, such as country/state or solely country, results in an error.
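+As a rough sketch, here's what that looks like with the Python SDK, assuming a preview SDK version that exposes `delete_all_items_by_partition_key`; the endpoint, key, and names are placeholders:
+
+```python
+from azure.cosmos import CosmosClient
+
+client = CosmosClient("<account-endpoint>", "<account-key>")
+container = client.get_database_client("<database>").get_container_client("<container>")
+
+# The complete hierarchical key, all three levels, must be supplied;
+# ["USA", "Washington"] or ["USA"] alone would fail with an error.
+container.delete_all_items_by_partition_key(["USA", "Washington", "Seattle"])
+```
+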
## How to give feedback or report an issue/bug * Email cosmosPkDeleteFeedbk@microsoft.com with questions or feedback.
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-indexing-policy.md
An [indexing policy update](../index-policy.md#modifying-the-indexing-policy) tr
> [!NOTE] > When you update indexing policy, writes to Azure Cosmos DB are uninterrupted. Learn more about [indexing transformations](../index-policy.md#modifying-the-indexing-policy)
+
+> [!IMPORTANT]
+> Removing an index takes effect immediately, whereas adding a new index takes some time because it requires an indexing transformation. When replacing one index with another (for example, replacing a single property index with a composite index), make sure to add the new index first and then wait for the index transformation to complete **before** you remove the previous index from the indexing policy. Otherwise, you'll negatively affect your ability to query against the previous index and may break any active workloads that reference it.
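+As a rough sketch of the safe ordering with the Python SDK (the endpoint, names, paths, and composite index shown are illustrative placeholders):
+
+```python
+from azure.cosmos import CosmosClient, PartitionKey
+
+client = CosmosClient("<account-endpoint>", "<account-key>")
+database = client.get_database_client("<database>")
+container = database.get_container_client("<container>")
+
+# Step 1: publish a policy that still contains the old index but also adds
+# the new composite index.
+policy = container.read()["indexingPolicy"]
+policy.setdefault("compositeIndexes", []).append([
+    {"path": "/category", "order": "ascending"},
+    {"path": "/price", "order": "ascending"},
+])
+database.replace_container(
+    container,
+    partition_key=PartitionKey(path="/category"),  # must match the existing key
+    indexing_policy=policy,
+)
+
+# Step 2: wait for the index transformation to complete (check progress in the
+# portal or via response headers), then remove the old index with a second
+# replace_container call.
+```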
+ ### Use the Azure portal
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/modeling-data.md
What to do?
## Takeaways
-The biggest takeaways from this article are to understand that data modeling in a schema-free world is as important as ever.
+The biggest takeaway from this article is to understand that data modeling in a world that's schema-free is as important as ever.
Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can use the flexibility of a schema-free database to embrace that change and evolve your data model easily.
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
To learn more about performance using the Java SDK:
* [Performance tips for Azure Cosmos DB Java V4 SDK](performance-tips-java-sdk-v4.md) ::: zone-end+
+## Reduce Query Plan calls
+
+To execute a query, a query plan needs to be built. This generally represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation. There's a way to remove this request and reduce the latency of single-partition query operations: specify the partition key value for the item and pass it as the [partition_key](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-query-items) argument:
+
+```python
+items = container.query_items(
+ query="SELECT * FROM r where r.city = 'Seattle'",
+ partition_key="Washington"
+ )
+```
+
+## Tune the page size
+
+When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. The [max_item_count](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-query-items) parameter allows you to set the maximum number of items to be returned in the enumeration operation.
+
+```python
+items = container.query_items(
+ query="SELECT * FROM r where r.city = 'Seattle'",
+ partition_key="Washington",
+ max_item_count=1000
+ )
+```
+
+## Next steps
+
+To learn more about using the Python SDK for API for NoSQL:
+
+* [Azure Cosmos DB Python SDK for API for NoSQL](sdk-python.md)
+* [Quickstart: Azure Cosmos DB for NoSQL client library for Python](quickstart-python.md)
++
+## Reduce Query Plan calls
+
+To execute a query, a query plan needs to be built. This generally represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation. There's a way to remove this request and reduce the latency of single-partition query operations. Scoping a query to a single partition can be accomplished in two ways.
+
+The first way is to use a parameterized query expression and specify the partition key in the query statement. The query is programmatically composed to `SELECT * FROM todo t WHERE t.partitionKey = 'Bikes, Touring Bikes'`:
+
+```javascript
+// find all items with same categoryId (partitionKey)
+const querySpec = {
+ query: "select * from products p where p.categoryId=@categoryId",
+ parameters: [
+ {
+ name: "@categoryId",
+ value: "Bikes, Touring Bikes"
+ }
+ ]
+};
+
+// Get items
+const { resources } = await container.items.query(querySpec).fetchAll();
+
+for (const item of resources) {
+ console.log(`${item.id}: ${item.name}, ${item.sku}`);
+}
+```
+
+The second way is to specify [partitionKey](/javascript/api/@azure/cosmos/feedoptions#@azure-cosmos-feedoptions-partitionkey) in `FeedOptions` and pass it as an argument:
+
+```javascript
+const querySpec = {
+ query: "select * from products p"
+};
+
+const { resources } = await container.items.query(querySpec, { partitionKey: "Bikes, Touring Bikes" }).fetchAll();
+
+for (const item of resources) {
+ console.log(`${item.id}: ${item.name}, ${item.sku}`);
+}
+```
+
+## Tune the page size
+
+When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. The [maxItemCount](/javascript/api/@azure/cosmos/feedoptions#@azure-cosmos-feedoptions-maxitemcount) parameter allows you to set the maximum number of items to be returned in the enumeration operation.
+
+```javascript
+const querySpec = {
+ query: "select * from products p where p.categoryId=@categoryId",
+ parameters: [
+ {
+ name: "@categoryId",
+ value: items[2].categoryId
+ }
+ ]
+};
+
+const { resources } = await container.items.query(querySpec, { maxItemCount: 1000 }).fetchAll();
+
+for (const item of resources) {
+ console.log(`${item.id}: ${item.name}, ${item.sku}`);
+}
+```
+
+## Next steps
+
+To learn more about using the Node.js SDK for API for NoSQL:
+
+* [Azure Cosmos DB Node.js SDK for API for NoSQL](sdk-nodejs.md)
+* [Quickstart - Azure Cosmos DB for NoSQL client library for Node.js](quickstart-nodejs.md)
+
cosmos-db Rag Data Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/rag-data-openai.md
+
+ Title: Use data with Azure OpenAI
+
+description: Use Retrieval Augmented Generation (RAG) and vector search to ground your Azure OpenAI models with data stored in Azure Cosmos DB.
++++ Last updated : 08/16/2023++
+# Use Azure Cosmos DB data with Azure OpenAI
++
+The Large Language Models (LLMs) in Azure OpenAI are incredibly powerful tools that can take your AI-powered applications to the next level. The utility of LLMs can increase significantly when the models have access to the right data, at the right time, from your application's data store. This process is known as Retrieval Augmented Generation (RAG), and there are many ways to do this today with Azure Cosmos DB.
+
+In this article, we review key concepts for RAG and then provide links to tutorials and sample code that demonstrate some of the most powerful RAG patterns using *vector search* to bring the most semantically relevant data to your LLMs. These tutorials can help you become comfortable with using your Azure Cosmos DB data in Azure OpenAI models.
+
+To jump right into tutorials and sample code for RAG patterns with Azure Cosmos DB, use the following links:
+
+| | Description |
+| | |
+| **[Azure Cosmos DB for NoSQL with Azure Cognitive Search](#azure-cosmos-db-for-nosql-and-azure-cognitive-search)**. | Augment your Azure Cosmos DB data with semantic and vector search capabilities of Azure Cognitive Search. |
+| **[Azure Cosmos DB for MongoDB vCore](#azure-cosmos-db-for-mongodb-vcore)**. | Featuring native support for vector search, store your application data and vector embeddings together in a single MongoDB-compatible service. |
+| **[Azure Cosmos DB for PostgreSQL](#azure-cosmos-db-for-postgresql)**. | Offering native support for vector search, you can store your data and vectors together in a scalable PostgreSQL offering. |
+
+## Key concepts
+
+This section includes key concepts that are critical to implementing RAG with Azure Cosmos DB and Azure OpenAI.
+
+### Retrieval Augmented Generation (RAG)
+
+RAG involves the process of retrieving supplementary data to provide the LLM with the ability to use this data when it generates responses. When presented with a user's question or prompt, RAG aims to select the most pertinent and current domain-specific knowledge from external sources, such as articles or documents. This retrieved information serves as a valuable reference for the model when generating its response. For example, a simple RAG pattern using Azure Cosmos DB for NoSQL could be:
+
+1. Insert data into an Azure Cosmos DB for NoSQL database and collection.
+2. Create embeddings from a data property using an Azure OpenAI Embeddings model.
+3. Link the Azure Cosmos DB for NoSQL account to Azure Cognitive Search (for vector indexing/search).
+4. Create a vector index over the embeddings properties.
+5. Create a function to perform vector similarity search based on a user prompt.
+6. Perform question answering over the data using an Azure OpenAI Completions model.
+
+The RAG pattern, with prompt engineering, serves the purpose of enhancing response quality by offering more contextual information to the model. RAG enables the model to apply a broader knowledge base by incorporating relevant external sources into the generation process, resulting in more comprehensive and informed responses. For more information on "grounding" LLMs, see [grounding LLMs - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857).
+
+### Prompts and prompt engineering
+
+A prompt refers to a specific text or information that can serve as an instruction to an LLM, or as contextual data that the LLM can build upon. A prompt can take various forms, such as a question, a statement, or even a code snippet. Prompts can serve as:
+
+- **Instructions**: provide directives to the LLM
+- **Primary content**: gives information to the LLM for processing
+- **Examples**: help condition the model to a particular task or process
+- **Cues**: direct the LLM's output in the right direction
+- **Supporting content**: represents supplemental information the LLM can use to generate output
+
+The process of creating good prompts for a scenario is called *prompt engineering*. For more information about prompts and best practices for prompt engineering, see [Azure OpenAI Service - Azure OpenAI | Microsoft Learn](../ai-services/openai/concepts/prompt-engineering.md).
+
+### Tokens
+
+Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can either be words or groups of characters, varying in length from a single character to an entire word. For instance, the word `hamburger` would be divided into tokens such as `ham`, `bur`, and `ger`, while a short and common word like `pear` would be considered a single token.
+
+In Azure OpenAI, input text provided to the API is turned into tokens (tokenized). The number of tokens processed in each API request depends on factors such as the length of the input, output, and request parameters. The quantity of tokens being processed also impacts the response time and throughput of the models. There are limits to the number of tokens each model can take in a single request/response from Azure OpenAI. [Learn more about Azure OpenAI Service quotas and limits here](../ai-services/openai/quotas-limits.md).
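+For a hands-on feel, you can inspect tokenization locally with the open-source `tiktoken` package (an assumption here, not part of the Azure OpenAI service itself); `cl100k_base` is the encoding used by recent GPT-3.5 and GPT-4 models:
+
+```python
+import tiktoken  # pip install tiktoken
+
+enc = tiktoken.get_encoding("cl100k_base")
+tokens = enc.encode("hamburger")
+print(tokens)                             # the token ids
+print([enc.decode([t]) for t in tokens])  # the text piece each token covers
+```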
+
+### Vectors
+
+Vectors are ordered arrays of numbers (typically floats) that can represent information about some data. For example, an image can be represented as a vector of pixel values, or a string of text can be represented as a vector of ASCII values. The process for turning data into a vector is called *vectorization*.
+
+### Embeddings
+
+Embeddings are vectors that represent important features of data. Embeddings are often learned by using a deep learning model, and machine learning and AI models utilize them as features. Embeddings can also capture semantic similarity between similar concepts. For example, in generating an embedding for the words `person` and `human`, we would expect their embeddings (vector representations) to be similar in value since the words are also semantically similar.
+
+Azure OpenAI features models for creating embeddings from text data. The service breaks text out into tokens and generates embeddings using models pretrained by OpenAI. [Learn more about creating embeddings with Azure OpenAI here](../ai-services/openai/concepts/understand-embeddings.md).
+
+### Vector search
+
+Vector search refers to the process of finding all vectors in a dataset that are semantically similar to a specific query vector. For example, given a query vector for the word `human`, searching the entire dictionary for semantically similar words should surface the word `person` as a close match. This closeness, or distance, is measured using a similarity metric such as cosine similarity. The more similar the vectors are, the smaller the distance between them.
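+To make the metric concrete, here's a toy cosine-similarity computation with NumPy; the vectors are made-up placeholders, not real embeddings:
+
+```python
+import numpy as np
+
+def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+# Made-up three-dimensional "embeddings" purely for illustration.
+human = np.array([0.90, 0.20, 0.40])
+person = np.array([0.85, 0.25, 0.38])
+banana = np.array([0.10, 0.90, 0.05])
+
+print(cosine_similarity(human, person))  # close to 1.0 -> semantically akin
+print(cosine_similarity(human, banana))  # noticeably lower
+```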
+
+Consider a scenario where you have a query over millions of documents and you want to find the most similar document in your data. You can create embeddings for your data and the query document using Azure OpenAI. Then, you can perform a vector search to find the most similar documents from your dataset. However, while performing a vector search across a few examples is trivial, performing this same search across thousands or millions of data points becomes challenging. There are also trade-offs between exhaustive search and approximate nearest neighbor (ANN) search methods, including latency, throughput, accuracy, and cost, all of which can depend on the requirements of your application.
+
+Adding Azure Cosmos DB vector search capabilities to Azure OpenAI Service enables you to store long term memory and chat history to improve your Large Language Model (LLM) solution. Vector search allows you to efficiently query back the most relevant context to personalize Azure OpenAI prompts in a token-efficient manner. Storing vector embeddings alongside the data in an integrated solution minimizes the need to manage data synchronization and accelerates your time-to-market for AI app development.
+
+## Using Azure Cosmos DB data with Azure OpenAI
+
+The RAG pattern harnesses external knowledge and models to effectively handle custom data or domain-specific knowledge. It involves extracting pertinent information from an external data source and integrating it into the model request through prompt engineering.
+
+A robust mechanism is necessary to identify the most relevant data from the external source that can be passed to the model, considering the limitation of a restricted number of tokens per request. This limitation is where embeddings play a crucial role. By converting the data in our database into embeddings and storing them as vectors for future use, we capture the semantic meaning of the text, going beyond mere keywords to comprehend the context.
+
+Prior to sending a request to Azure OpenAI, the user input/query/request is also transformed into an embedding, and vector search techniques are employed to locate the most similar embeddings within the database. This technique enables the identification of the most relevant data records in the database. These retrieved records are then supplied as input to the model request using prompt engineering.
+
+## Azure Cosmos DB for NoSQL and Azure Cognitive Search
+
+Implement RAG patterns with Azure Cosmos DB for NoSQL and Azure Cognitive Search. This approach enables powerful integration of your data residing in Azure Cosmos DB for NoSQL into your AI-oriented applications. Azure Cognitive Search empowers you to efficiently index and query high-dimensional vector data that is stored in Azure Cosmos DB for NoSQL.
+
+### Code samples
+
+- [.NET retail chatbot demo](https://github.com/AzureCosmosDB/VectorSearchAiAssistant/tree/cognitive-search-vector-v2)
+- [.NET samples - Hackathon project](https://github.com/AzureCosmosDB/OpenAIHackathon)
+- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch)
+- [.NET tutorial - recipe chatbot w/ Semantic Kernel](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch_SemanticKernel)
+- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-NoSQL_CognitiveSearch)
+
+## Azure Cosmos DB for MongoDB vCore
+
+RAG can be applied using the native vector search feature in Azure Cosmos DB for MongoDB vCore, facilitating a smooth merger of your AI-centric applications with your stored data in Azure Cosmos DB. The use of vector search offers an efficient way to store, index, and search high-dimensional vector data directly within Azure Cosmos DB for MongoDB vCore alongside other application data. This approach removes the necessity of migrating your data to costlier alternatives for vector search.
+
+### Code samples
+
+- [.NET retail chatbot demo](https://github.com/AzureCosmosDB/VectorSearchAiAssistant/tree/mongovcorev2)
+- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore)
+- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore)
+
+## Azure Cosmos DB for PostgreSQL
+
+You can employ RAG by utilizing native vector search within Azure Cosmos DB for PostgreSQL. This strategy provides a seamless integration of your AI-driven applications, including the ones developed using Azure OpenAI embeddings, with your data housed in Azure Cosmos DB. By taking advantage of vector search, you can effectively store, index, and execute queries on high-dimensional vector data directly within Azure Cosmos DB for PostgreSQL along with the rest of your data.
+
+### Code samples
+
+- Python: [Python notebook tutorial - food review chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-PostgreSQL_CognitiveSearch)
+
+## Next steps
+
+- [Vector search with Azure Cognitive Search](../search/vector-search-overview.md)
+- [Vector search with Azure Cosmos DB for MongoDB vCore](mongodb/vcore/vector-search.md)
+- [Vector search with Azure Cosmos DB PostgreSQL](postgresql/howto-use-pgvector.md)
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
The following tables show data that's included or isn't in Cost Management. All
| **Included** | **Not included** | | | |
-| Azure service usage⁵ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Marketplace offering usage⁶ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Marketplace purchases⁶ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Reservation purchases⁷ | |
+| Azure service usage (including deleted resources)⁵ | Unbilled services (e.g., free tier resources) |
+| Marketplace offering usage⁶ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Marketplace purchases⁶ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Reservation purchases⁷ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
| Amortization of reservation purchases⁷ | | | New Commerce non-Azure products (Microsoft 365 and Dynamics 365) ⁸ | |
_⁷ Reservation purchases are only available for Enterprise Agreement (EA) and
_⁸ Only available for specific offers._
+Cost Management data only includes the usage and purchases from services and resources that are actively running. Cost data is historical: it includes resources, resource groups, and subscriptions that have been stopped, deleted, or canceled, so it may not reflect the same resources, resource groups, and subscriptions you see in other tools, like Azure Resource Manager or Azure Resource Graph, which only show the resources currently deployed in your subscriptions. Not all resources emit usage and therefore may not be represented in the cost data. Similarly, some resources aren't tracked by Azure Resource Manager and may not be represented in subscription resources.
+ ## How tags are used in cost and usage data Cost Management receives tags as part of each usage record submitted by the individual services. The following constraints apply to these tags:
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
If your default payment method is wire transfer, check your invoice for payment
> - [Bulgaria](/legal/pay/bulgaria) > - [Cameroon](/legal/pay/cameroon) > - [Canada](/legal/pay/canada)
-> - [Cape Verde](/legal/pay/cape-verde)
+> - [Cabo Verde](/legal/pay/cape-verde)
> - [Cayman Islands](/legal/pay/cayman-islands) > - [Chile](/legal/pay/chile) > - [China (PRC)](/legal/pay/china-prc)
If your default payment method is wire transfer, check your invoice for payment
> - [Lithuania](/legal/pay/lithuania) > - [Luxembourg](/legal/pay/luxembourg) > - [Macao Special Administrative Region](/legal/pay/macao)
-> - [Macedonia, Former Yugoslav Republic of](/legal/pay/macedonia)
> - [Malaysia](/legal/pay/malaysia) > - [Malta](/legal/pay/malta) > - [Mauritius](/legal/pay/mauritius)
If your default payment method is wire transfer, check your invoice for payment
> - [New Zealand](/legal/pay/new-zealand) > - [Nicaragua](/legal/pay/nicaragua) > - [Nigeria](/legal/pay/nigeria)
+> - [North Macedonia, Republic of](/legal/pay/macedonia)
> - [Norway](/legal/pay/norway) > - [Oman](/legal/pay/oman) > - [Pakistan](/legal/pay/pakistan)
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
Previously updated : 08/08/2023 Last updated : 08/18/2023 # Change data capture resource overview
The new Change Data Capture resource in ADF allows for full fidelity change data
* JSON * ORC * Parquet
+* Azure Synapse Analytics
## Known limitations * Currently, when creating source/target mappings, each source and target is only allowed to be used once.
The new Change Data Capture resource in ADF allows for full fidelity change data
For more information on known limitations and troubleshooting assistance, please reference [this troubleshooting guide](change-data-capture-troubleshoot.md).
+## Azure Synapse Analytics as Target
+When using Azure Synapse Analytics as the target, the **Staging Settings** option is available on the main table canvas. Enabling staging is mandatory when selecting Azure Synapse Analytics as the target. Staging significantly enhances write performance by utilizing performant bulk loading capabilities such as the COPY INTO command. **Staging Settings** can be configured in two ways: utilizing **Factory settings** or opting for **Custom settings**. **Factory settings** apply at the factory level. If these settings aren't configured the first time, you'll be directed to the global staging setting section for configuration. Once set, all CDC top-level resources adopt this configuration. **Custom settings** is scoped only to the CDC resource for which it's configured and overrides the **Factory settings**.
+
+> [!NOTE]
+> As we utilize the COPY INTO command to transfer data from the staging location to Azure Synapse Analytics, it is advisable to ensure that all required permissions are pre-configured within Azure Synapse Analytics.
++ > [!NOTE] > We always use the last published configuration when starting a CDC. For running CDCs, while your data is being processed, you will be billed 4 v-cores of General Purpose Data Flows. ## Next steps - [Learn how to set up a change data capture resource](how-to-change-data-capture-resource.md).
+- [Learn how to set up a change data capture resource with schema evolution](how-to-change-data-capture-resource-with-schema-evolution.md).
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-set-variable-activity.md
To use a Set Variable activity in a pipeline, complete the following steps:
## Setting a pipeline return value with UI
-We have expanded Set Variable activity to include a special system variable, named _Pipeline Return Value_. This allows communication from the child pipeline to the calling pipeline, in the following scenario.
+We have expanded Set Variable activity to include a special system variable, named _Pipeline Return Value_, allowing communication from the child pipeline to the calling pipeline, in the following scenario.
You don't need to define the variable, before using it. For more information, see [Pipeline Return Value](tutorial-pipeline-return-value.md)
value | String literal or expression object value that the variable is assigned
## Incrementing a variable
-A common scenario involving variable is to use a variable as an iterator within an **Until** or **ForEach** activity. In a **Set variable** activity, you can't reference the variable being set in the `value` field. To work around this limitation, set a temporary variable and then create a second **Set variable** activity. The second **Set variable** activity sets the value of the iterator to the temporary variable.
+A common scenario involving variables is to use a variable as an iterator within an **Until** or **ForEach** activity. In a **Set variable** activity, you can't reference the variable being set in the `value` field, that is, no self-referencing. To work around this limitation, set a temporary variable and then create a second **Set variable** activity. The second **Set variable** activity sets the value of the iterator to the temporary variable. Here's an example of this pattern:
-Below is an example of this pattern:
+* First you define two variables: one for the iterator, and one for temporary storage.
++
+* Then you use two activities to increment the value.
:::image type="content" source="media/control-flow-set-variable-activity/increment-variable.png" alt-text="Screenshot shows increment variable."::: ``` json {
- "name": "pipeline3",
+ "name": "pipeline1",
"properties": { "activities": [ {
- "name": "Set I",
+ "name": "Increment J",
"type": "SetVariable",
- "dependsOn": [
- {
- "activity": "Increment J",
- "dependencyConditions": [
- "Succeeded"
- ]
- }
- ],
+ "dependsOn": [],
+ "policy": {
+ "secureOutput": false,
+ "secureInput": false
+ },
"userProperties": [], "typeProperties": {
- "variableName": "i",
+ "variableName": "temp_j",
"value": {
- "value": "@variables('j')",
+ "value": "@add(variables('counter_i'),1)",
"type": "Expression" } } }, {
- "name": "Increment J",
+ "name": "Set I",
"type": "SetVariable",
- "dependsOn": [],
+ "dependsOn": [
+ {
+ "activity": "Increment J",
+ "dependencyConditions": [
+ "Succeeded"
+ ]
+ }
+ ],
+ "policy": {
+ "secureOutput": false,
+ "secureInput": false
+ },
"userProperties": [], "typeProperties": {
- "variableName": "j",
+ "variableName": "counter_i",
"value": {
- "value": "@string(add(int(variables('i')), 1))",
+ "value": "@variables('temp_j')",
"type": "Expression" } } } ], "variables": {
- "i": {
- "type": "String",
- "defaultValue": "0"
+ "counter_i": {
+ "type": "Integer",
+ "defaultValue": 0
},
- "j": {
- "type": "String",
- "defaultValue": "0"
+ "temp_j": {
+ "type": "Integer",
+ "defaultValue": 0
} }, "annotations": []
Below is an example of this pattern:
} ```
-Variables are currently scoped at the pipeline level. This means that they're not thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a ForEach loop, especially when the value is also being modified within that foreach activity.
+Variables are scoped at the pipeline level. This means that they're not thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a ForEach loop, especially when the value is also being modified within that foreach activity.
## Next steps
data-factory Enable Aad Authentication Azure Ssis Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
ms.devlang: powershell
-+ Last updated 07/17/2023
data-lake-store Data Lake Store Secure Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-secure-data.md
description: Learn how to secure data in Azure Data Lake Storage Gen1 using grou
+ Last updated 03/26/2018 - # Securing data stored in Azure Data Lake Storage Gen1 Securing data in Azure Data Lake Storage Gen1 is a three-step approach. Both Azure role-based access control (Azure RBAC) and access control lists (ACLs) must be set to fully enable access to data for users and security groups.
databox-online Azure Stack Edge Gpu 2304 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2304-release-notes.md
+
+ Title: Azure Stack Edge 2304 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2304 release.
++
+
+++ Last updated : 08/21/2023+++
+# Azure Stack Edge 2304 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2304 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2304** release, which maps to software version **2.2.2257.1193**.
+
+## Supported update paths
+
+This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2026.5318).
+
+You can update to the latest version using the following update paths:
+
+| Current version | Update to | Then apply |
+| --| --| --|
+|2205 and earlier |2207 |2304 |
+|2207 and later |2304 |
+
+## What's new
+
+The 2304 release has the following new features and enhancements:
+
+- **Fix for the Arc connectivity issue** - In the 2303 release, there was an issue where the Arc agent couldn't connect to the Azure Stack Edge Kubernetes cluster. Because of this issue, you couldn't manage the Kubernetes cluster via Arc.
+
+ The 2304 release fixes the connectivity issue. To manage your Azure Stack Edge Kubernetes cluster via Arc, update to this release.
+- Starting March 2023, Azure Stack Edge devices are required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, we highly recommend that you update to the latest version as soon as possible.
+- You can deploy Azure Kubernetes Service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. For more information, see [Deploy AKS on Azure Stack Edge](azure-stack-edge-deploy-aks-on-azure-stack-edge.md).
+
+## Issues fixed in this release
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|Fix for the Arc connectivity issue |In the 2303 release, there was an issue where the Arc agent couldn't connect to the Azure Stack Edge Kubernetes cluster. Because of this issue, you couldn't manage the Kubernetes cluster via Arc. <BR> The 2304 release fixes the connectivity issue. To manage your Azure Stack Edge Kubernetes cluster via Arc, update to this release. |
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating a SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create the SQL database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply**.<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding ",1401" to the end of the address.<br> 4. The final command looks like this: `sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd"`. After this, steps 3-4 from the current documentation are identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** aren't supported. |For Blob endpoints, partial updates of blobs after a Refresh may result in the updates not getting uploaded to the cloud. For example, a sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. <br> These actions can result in the updated sections of the blob not getting updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error shows as follows:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>`mkdir: can't create directory 'test': Permission denied`|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
+|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines restart. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
+|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
+|**9.**|Kubernetes |Port 31000 is reserved for the Kubernetes Dashboard. Port 31001 is reserved for the Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10 are reserved for the Kubernetes service and the Core DNS service, respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to a `hostpath` directory, so file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for the Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET Core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see this [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201).|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
+|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs that use GPUs after setting up Kubernetes. |If your device has two GPUs, you can create one VM that uses a GPU and then configure Kubernetes. In this case, Kubernetes uses the remaining available GPU. |
+|**23.**|Custom script VM extension |There's a known issue with Windows VMs that were created in an earlier release when the device is updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (version 2.7.41491.901 only) gets stuck during the update, causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3. If `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | |
+|**26.**|Azure IoT Edge |The managed Azure IoT Edge solution on Azure Stack Edge runs on an older, obsolete IoT Edge runtime that is at end of life. For more information, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137). Although the solution doesn't stop working past end of life, there are no plans to update it. |To run the latest [LTS version](../iot-edge/version-history.md#version-history) of Azure IoT Edge with the latest updates and features on your Azure Stack Edge, we **recommend** that you deploy a [customer self-managed IoT Edge solution](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md) that runs on a Linux VM. For more information, see [Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM](azure-stack-edge-move-to-self-service-iot-edge.md). |
+|**27.**|AKS on Azure Stack Edge |When you update your AKS on Azure Stack Edge deployment from a previous preview version to 2303 release, there is an additional nodepool rollout. |The update may take longer. |
+|**28.**|Azure portal |When the Arc deployment fails in this release, you see a generic *NO PARAM* error code, because not all errors are propagated in the portal. |There's no workaround for this behavior in this release. |
+|**29.**|AKS on Azure Stack Edge |In this release, you can't modify the virtual networks once the AKS cluster is deployed on your Azure Stack Edge cluster.| To modify the virtual network, you need to delete the AKS cluster, modify the virtual networks, and then recreate the AKS cluster on your Azure Stack Edge. |
+|**30.**|AKS on Azure Stack Edge |In this release, attaching persistent volume claims (PVCs) takes a long time. As a result, some pods that use persistent volumes (PVs) come up slowly after the host reboots. |A workaround is to restart the nodepool VM by connecting via the Windows PowerShell interface of the device. |
+
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 03/30/2023 Last updated : 08/21/2023 # Update your Azure Stack Edge Pro GPU
The procedure described in this article was performed using a different version
## About latest updates
-The current update is Update 2303. This update installs two updates, the device update followed by Kubernetes updates.
+The current update is Update 2304. This update installs two updates: the device update, followed by the Kubernetes updates.
The associated versions for this update are: -- Device software version: Azure Stack Edge 2303 (2.2.2257.1113)-- Device Kubernetes version: Azure Stack Kubernetes Edge 2303 (2.2.2257.1113)
+- Device software version: Azure Stack Edge 2304 (2.2.2257.1193)
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2304 (2.2.2257.1193)
- Kubernetes server version: v1.24.6 - IoT Edge version: 0.1.0-beta15-- Azure Arc version: 1.8.14
+- Azure Arc version: 1.10.6
- GPU driver version: 515.65.01 - CUDA version: 11.7
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2303-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2304-release-notes.md).
-**To apply 2303 update, your device must be running version 2207 or later.**
+**To apply the 2304 update, your device must be running version 2207 or later.**
- If you are not running the minimum required version, you'll see this error: *Update package cannot be installed as its dependencies are not met.* -- You can update to 2207 from 2106 or later, and then install 2303.
+- You can update to 2207 from 2106 or later, and then install 2304.
### Update Azure Kubernetes service on Azure Stack Edge > [!IMPORTANT] > Use the following procedure only if you are an SAP or a PMEC customer.
-If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2303.
+If you have Azure Kubernetes Service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2304.
-Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2303:
+Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2304:
-1. Update your device version to 2303.
+1. Update your device version to 2304.
1. Update your Kubernetes version to 2210.
-1. Update your Kubernetes version to 2303.
+1. Update your Kubernetes version to 2304.
-If you are running 2210, you can update both your device version and Kubernetes version directly to 2303.
+If you are running 2210, you can update both your device version and Kubernetes version directly to 2304.
-In Azure portal, the process will require two clicks, the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2303.
+In the Azure portal, the process requires two updates: the first gets your device version to 2304 and your Kubernetes version to 2210, and the second upgrades your Kubernetes version to 2304.
-From the local UI, you will have to run each update separately: update the device version to 2303, then update Kubernetes version to 2210, and then update Kubernetes version to 2303.
+From the local UI, you will have to run each update separately: update the device version to 2304, then update Kubernetes version to 2210, and then update Kubernetes version to 2304.
### Updates for a single-node vs two-node
databox-online Azure Stack Edge Technical Specifications Power Cords Regional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-technical-specifications-power-cords-regional.md
Use the following table to find the correct cord specifications for your region:
|China|250|10|RVV300/500 3X0.75|GB 2099.1|C13|2000| |Colombia|125|10|SVE 18/3|NEMA 5-15P|C13|1830| |Costa Rica|125|10|SVE 18/3|NEMA 5-15P|C13|1830|
-|Côte D'Ivoire (Ivory Coast)|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
+|Côte D'Ivoire|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Croatia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Cyprus|250|5|H05VV-F 3x0.75|BS1363 SS145/A|C13|1800| |Czech Republic|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
Use the following table to find the correct cord specifications for your region:
|Lithuania|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Luxembourg|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Macao Special Administrative Region|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
-|Macedonia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Malaysia|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800| |Malta|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800| |Mauritius|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
Use the following table to find the correct cord specifications for your region:
|New Zealand|250|10|H05VV-F 3x1.00|AS/NZS 3112|C13|2438| |Nicaragua|125|10|SVE 18/3|NEMA 5-15P|C13|1830| |Nigeria|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
+|North Macedonia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830|
|Norway|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Oman|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800| |Pakistan|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|
defender-for-cloud Advanced Configurations For Malware Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/advanced-configurations-for-malware-scanning.md
Title: Microsoft Defender for Storage - advanced configurations for malware scanning description: Learn about the advanced configurations of Microsoft Defender for Storage malware scanning Previously updated : 08/08/2023 Last updated : 08/21/2023
For each storage account enabled with malware scanning, you can configure to sen
1. To configure the Event Grid custom topic destination, go to the relevant storage account, open the **Microsoft Defender for Cloud** tab, and select the settings to configure. > [!NOTE]
-> When you set an Event Grid custom topic, you should set **Override Defender for Storage subscription-level settingsΓÇ¥ to **On** to make sure it overrides the subscription-level settings.
+> When you set an Event Grid custom topic, you should set **Override Defender for Storage subscription-level settings** to **On** to make sure it overrides the subscription-level settings.
:::image type="content" source="media/azure-defender-storage-configure/event-grid-settings.png" alt-text="Screenshot that shows where to enable an Event Grid destination for scan logs." lightbox="media/azure-defender-storage-configure/event-grid-settings.png":::
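
These settings can also be applied per storage account through the Microsoft Defender for Storage settings REST API, sent with a `PUT` request to `{storageAccountResourceId}/providers/Microsoft.Security/defenderForStorageSettings/current`. The following request body is a minimal sketch, assuming the `2022-12-01-preview` API version; the resource IDs are placeholders, and you should verify the property names against the current API reference before relying on them:

```json
{
    "properties": {
        "isEnabled": true,
        "malwareScanning": {
            "onUpload": {
                "isEnabled": true,
                "capGBPerMonth": 5000
            },
            "scanResultsEventGridTopicResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>"
        },
        "overrideSubscriptionLevelSettings": true
    }
}
```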
defender-for-cloud Agentless Container Registry Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
| [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 | - **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).-- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get?tabs=HTTP).
+- **Query vulnerability information via subassessment API** - You can get scan results via [REST API](subassessment-rest-api.md).
- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md). - **Support for disabling vulnerabilities** - Learn how to [disable vulnerabilities on images](disable-vulnerability-findings-containers.md).
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Title: Reference list of attack paths and cloud security graph components
description: This article lists Microsoft Defender for Cloud's attack paths based on resource. Previously updated : 04/13/2023 Last updated : 08/15/2023 # Reference list of attack paths and cloud security graph components
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-m
| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server | | VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server | | VM has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to storage account | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an Azure storage account |
-| Internet expsed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has secret that can authenticate to an Azure storage account |
+| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has secret that can authenticate to an Azure storage account |
### AWS EC2 instances
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| EC2 instance has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to a RDS resource | An AWS EC2 instance has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an AWS RDS resource | | Internet exposed AWS EC2 instance has high severity vulnerabilities and has insecure secret that has permission to S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has insecure secret that has permissions to S3 bucket via an IAM policy, a bucket policy or both |
+### GCP VM instances
+
+| Attack path display name | Attack path description |
+|--|--|
+| Internet exposed VM instance has high severity vulnerabilities | GCP VM instance '[VMInstanceName]' is reachable from the internet and has high severity vulnerabilities [Remote Code Execution]. |
+| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. |
+| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities allowing remote code execution on the machine, and is assigned a service account with read permission to GCP Storage bucket '[BucketName]' containing sensitive data. |
+| Internet exposed VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. |
+| Internet exposed VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. |
+| Internet exposed VM instance has high severity vulnerabilities and a hosted database installed | GCP VM instance '[VMInstanceName]' with a hosted [DatabaseType] database is reachable from the internet and has high severity vulnerabilities. |
+| Internet exposed VM with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. |
+| VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. |
+| VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to GCP Storage bucket '[BucketName]' containing sensitive data. |
+| VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. |
+| VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. |
+| VM instance with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. |
+ ### Azure data | Attack path display name | Attack path description |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Private AWS S3 bucket with sensitive data replicates data to internet exposed and publicly accessible AWS S3 bucket | Private AWS S3 bucket with sensitive data is replicating data to internet exposed and publicly accessible AWS S3 bucket| | RDS snapshot is publicly available to all AWS accounts (Preview) | RDS snapshot is publicly available to all AWS accounts |
+### GCP data
+
+| Attack path display name | Attack path description |
+|--|--|
+| GCP Storage Bucket with sensitive data is publicly accessible | GCP Storage Bucket [BucketName] with sensitive data allows public read access without authorization required. |
+ ### Azure containers Prerequisite: [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer.
This section lists all of the cloud security graph components (connections and i
| Insight | Description | Supported entities | |--|--|--|
-| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance |
+| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance, GCP VM instance, GCP SQL admin instance |
| Allows basic authentication (Preview) | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance |
-| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts |
+| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts, GCP cloud storage bucket |
| Moves data to (Preview) | Indicates that a resource transfers its data to another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster | | Gets data from (Preview) | Indicates that a resource gets its data from another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
-| Has tags | Lists the resource tags of the cloud resource | All Azure and AWS resources |
+| Has tags | Lists the resource tags of the cloud resource | All Azure, AWS, and GCP resources |
| Installed software | Lists all software installed on the machine. This insight is applicable only for VMs that have threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 |
-| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, GitHub repository |
+| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, GitHub repository, GCP cloud storage bucket |
| Doesn't have MFA enabled | Indicates that the user account does not have a multi-factor authentication solution enabled | Azure AD User account, IAM user | | Is external user | Indicates that the user account is outside the organization's domain | Azure AD User account | | Is managed | Indicates that an identity is managed by the cloud provider | Azure Managed Identity |
This section lists all of the cloud security graph components (connections and i
| DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP | | Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container | | Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod |
-| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image |
-| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image |
+| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image, GCP VM instance |
+| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image, GCP VM instance |
| Public IP metadata | Lists the metadata of an Public IP | Public IP | | Identity metadata | Lists the metadata of an identity | Azure AD Identity |
This section lists all of the cloud security graph components (connections and i
|--|--|--|--| | Can authenticate as | Indicates that an Azure resource can authenticate to an identity and use its privileges | Azure VM, Azure VMSS, Azure Storage Account, Azure App Services, SQL Servers | Azure AD managed identity | | Has permission to | Indicates that an identity has permissions to a resource or a group of resources | Azure AD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources|
-| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server | All Azure & AWS resources, All Kubernetes entities, All DevOps entities, Azure SQL database |
-| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service |
+| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server, GCP project, GCP Folder, GCP Organization | All Azure, AWS, and GCP resources, All Kubernetes entities, All DevOps entities, Azure SQL database |
+| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service, GCP VM instance, GCP instance group |
| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Container image, Kubernetes pod | | Member of | Indicates that the source identity is a member of the target identities group | Azure AD group, Azure AD user | Azure AD group | | Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod |
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Previously updated : 06/29/2023 Last updated : 08/15/2023
Agentless scanning for VMs provides vulnerability assessment and software invent
|Release state:| GA | |Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)| | Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management)<br />:::image type="icon" source="./media/icons/yes-icon.png":::Secret scanning (Preview) |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects |
| Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux |
-| Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |
-| Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK |
+| Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs)<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Compute instances<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Instance groups (managed and unmanaged) |
+| Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Google-managed encryption key<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Customer-managed encryption key (CMEK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Customer-supplied encryption key (CSEK) |
## How agentless scanning for VMs works
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Overview of Cloud Security Posture Management (CSPM)
description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 06/20/2023 Last updated : 08/10/2023 # Cloud Security Posture Management (CSPM)
Microsoft Defender CSPM protects across all your multicloud workloads, but billi
> > - The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1, 2023. Billing will apply for Servers, Database, and Storage resources. Billable workloads will be VMs, Storage accounts, OSS DBs, SQL PaaS, & SQL servers on machines. >
-> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscription that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscriptions that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Plan availability
The following table summarizes each plan and their cloud availability.
| [Data exporting](export-to-siem.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Workflow automation](workflow-automation.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | Tools for remediation | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
-| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Governance](governance-rules.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Regulatory compliance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
-| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Agentless discovery for Kubernetes](concept-agentless-containers.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure | | [Container registries vulnerability assessment](concept-agentless-containers.md), including registry scanning | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
-| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
> [!NOTE] > If you have enabled Defender for DevOps, you will only gain cloud security graph and attack path analysis to the artifacts that arrive through those connectors.
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
The table summarizes support for data-aware posture management.
| | | |What Azure data resources can I discover? | [Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: [Public network access is disabled](../storage/common/storage-network-security.md#change-the-default-network-access-rule); Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md).| |What AWS data resources can I discover? | AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.|
-|What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> Microsoft.Authorization/roleAssignments/* (read, write, delete) **and** Microsoft.Security/pricings/* (read, write, delete) **and** Microsoft.Security/pricings/SecurityOperators (read, write)<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).|
+|What GCP data resources can I discover? | GCP storage buckets<br/> Standard Class<br/> Geo: region, dual region, multi region |
+|What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> Microsoft.Authorization/roleAssignments/* (read, write, delete) **and** Microsoft.Security/pricings/* (read, write, delete) **and** Microsoft.Security/pricings/SecurityOperators (read, write)<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).<br/><br/>GCP storage buckets: Google account permission to run script (to create a role).|
|What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .gz, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt., xml, .parquet, .avro, .orc.| |What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West: Jio India West: North Central US; North Europe; Norway East; South Africa North: South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West: West Central US; West Europe; West US, West US3.<br/><br/> Discovery is done locally in the region.| |What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California): US West (Oregon).<br/><br/> Discovery is done locally in the region.|
+|What GCP regions are supported? | europe-west1, us-east1, us-west1, us-central1, us-east4, asia-south1, northamerica-northeast1|
|Do I need to install an agent? | No, discovery is agentless.| |What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesn't include other costs except for the respective plan costs.| |What permissions do I need to view/edit data sensitivity settings? | You need one of these Azure Active Directory roles: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, Security Operator.|
+| What permissions do I need to perform onboarding? | You need one of these Azure Active Directory roles: Security Admin, Contributor, or Owner on the subscription level (where the GCP projects reside). For consuming the security findings: Security Reader, Security Admin, Reader, Contributor, or Owner on the subscription level (where the GCP projects reside). |
## Configuring data sensitivity settings
Defender for Cloud starts discovering data immediately after enabling a plan, or
- It takes up to 24 hours to see the results for a first-time discovery. - After files are updated in the discovered resources, data is refreshed within eight days. - A new Azure storage account that's added to an already discovered subscription is discovered within 24 hours or less.-- A new AWS S3 bucket that's added to an already discovered AWS account is discovered within 48 hours or less.
+- A new AWS S3 bucket or GCP storage bucket that's added to an already discovered AWS account or Google account is discovered within 48 hours or less.
### Discovering AWS S3 buckets
In order to protect AWS resources in Defender for Cloud, you set up an AWS conne
- To connect AWS accounts, you need Administrator permissions on the account. - The role allows these permissions: S3 read only; KMS decrypt.
+### Discovering GCP storage buckets
+
+In order to protect GCP resources in Defender for Cloud, you can set up a Google connector using a script template to onboard the GCP account.
+
+- To discover GCP storage buckets, Defender for Cloud updates the script template.
+- The script template creates a new role in the Google account to allow permission for the Defender for Cloud scanner to access data in the GCP storage buckets.
+- To connect Google accounts, you need Administrator permissions on the account.
+ ## Exposed to the internet/allows public access Defender CSPM attack paths and cloud security graph insights include information about storage resources that are exposed to the internet and allow public access. The following table provides more details.
-**State** | **Azure storage accounts** | **AWS S3 Buckets**
- | |
-**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses.
-**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**.
-
+**State** | **Azure storage accounts** | **AWS S3 Buckets** | **GCP Storage Buckets** |
+ | | |
+**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses. | All GCP storage buckets are exposed to the internet by default. |
+**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**. | A GCP storage bucket is considered to allow public access if it has an IAM (Identity and Access Management) role that meets these criteria:<br/><br/> The role is granted to the principal **allUsers** or **allAuthenticatedUsers**.<br/><br/>The role has at least one storage permission that *isn't* **storage.buckets.create** or **storage.buckets.list**. Public access in GCP is called "Public to internet". |
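
To make the GCP criteria concrete, here's a minimal Python sketch (using the `google-cloud-storage` client library) that lists a bucket's IAM bindings granted to **allUsers** or **allAuthenticatedUsers**; the bucket name is a placeholder, and checking whether each role carries a storage permission other than **storage.buckets.create** or **storage.buckets.list** is left as a manual step against the role definition.

```python
from google.cloud import storage

PUBLIC_PRINCIPALS = {"allUsers", "allAuthenticatedUsers"}

def public_bindings(bucket_name: str):
    """Return IAM bindings on the bucket that are granted to public principals."""
    client = storage.Client()  # uses Application Default Credentials
    bucket = client.bucket(bucket_name)
    # Version 3 policies are required to see conditional role bindings.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    return [b for b in policy.bindings if PUBLIC_PRINCIPALS & set(b["members"])]

for binding in public_bindings("my-example-bucket"):  # placeholder bucket name
    # Whether this role has a permission other than storage.buckets.create /
    # storage.buckets.list still needs to be checked against the role itself.
    print(f"Public principal holds role: {binding['role']}")
```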
## Next steps
defender-for-cloud Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-introduction.md
Microsoft Defender for DNS doesn't use any agents.
In this article, you learned about Microsoft Defender for DNS.
-To protect your DNS layer, enable Microsoft Defender for DNS for each of your subscriptions as described in [Enable enhanced protections](enable-enhanced-security.md).
-
-> [!div class="nextstepaction"]
-> [Enable enhanced protections](enable-enhanced-security.md)
-
For related material, see the following article:
-- Security alerts might be generated by Defender for Cloud or received from other security products. To export all of these alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).
+Security alerts might be generated by Defender for Cloud or received from other security products. To export all of these alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).
+
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Storage. Previously updated : 06/15/2023 Last updated : 08/21/2023
Defender for Storage includes:
## Getting started
-With a simple agentless setup at scale, you can [enable Defender for Storage](../storage/common/azure-defender-storage-configure.md) at the subscription or resource levels through the portal or programmatically. When enabled at the subscription level, all existing and newly created storage accounts under that subscription will be automatically protected. You can also exclude specific storage accounts from protected subscriptions.
+With a simple agentless setup at scale, you can [enable Defender for Storage](tutorial-enable-storage-plan.md) at the subscription or resource levels through the portal or programmatically. When enabled at the subscription level, all existing and newly created storage accounts under that subscription will be automatically protected. You can also exclude specific storage accounts from protected subscriptions.
> [!NOTE] > If you already have the Defender for Storage (classic) enabled and want to access the new security features and pricing, you'll need to [migrate to the new pricing plan](defender-for-storage-classic-migrate.md).
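
For illustration, here's a minimal Python sketch that enables the plan on a subscription through the Microsoft.Security pricings REST API; the subscription ID is a placeholder, and the `DefenderForStorageV2` subplan name and api-version are assumptions to verify against the enablement article linked above.

```python
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Assumed endpoint: Microsoft.Security/pricings with the DefenderForStorageV2 subplan.
url = (f"https://management.azure.com/subscriptions/{subscription_id}"
       "/providers/Microsoft.Security/pricings/StorageAccounts")
body = {"properties": {"pricingTier": "Standard", "subPlan": "DefenderForStorageV2"}}

resp = requests.put(url, params={"api-version": "2023-01-01"},
                    headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
print(resp.json()["properties"]["pricingTier"])
```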
In summary, Malware Scanning, which is only available on the new plan for Blob s
In this article, you learned about Microsoft Defender for Storage. -- [Enable Defender for Storage](enable-enhanced-security.md)
+- [Enable Defender for Storage](tutorial-enable-storage-plan.md)
- Check out [common questions](faq-defender-for-storage.yml) about Defender for Storage.
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Title: Malware scanning in Microsoft Defender for Storage description: Learn about the benefits and features of malware scanning in Microsoft Defender for Storage. Previously updated : 08/15/2023 Last updated : 08/21/2023
Some common use-cases and scenarios for malware scanning in Defender for Storage
To enable and configure Malware Scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md).
-You can [enable and configure Malware Scanning at scale](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription) for your subscriptions while maintaining granular control over configuring the feature for individual storage accounts. There are several ways to enable and configure Malware Scanning: [Azure built-in policy](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-at-scale-with-an-azure-built-in-policy) (recommended method), programmatically using Infrastructure as Code templates, including [Bicep](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#bicep-template) and [ARM template](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#arm-template), using the [Azure portal](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#azure-portal), or directly with [REST API](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-with-rest-api).
+You can [enable and configure Malware Scanning at scale](tutorial-enable-storage-plan.md) for your subscriptions while maintaining granular control over configuring the feature for individual storage accounts. There are several ways to enable and configure Malware Scanning: [Azure built-in policy](defender-for-storage-policy-enablement.md) (recommended method), programmatically using Infrastructure as Code templates, including [Bicep](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription&branch=pr-en-us-248836#bicep-template) and [ARM template](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#azure-resource-manager-template), using the [Azure portal](defender-for-storage-azure-portal-enablement.md?tabs=enable-subscription), or directly with [REST API](defender-for-storage-rest-api-enablement.md?tabs=enable-subscription).
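
As a rough illustration of the REST API route, the sketch below turns on malware scanning for a single storage account with Python; the resource ID is a placeholder, and the `defenderForStorageSettings` schema and preview api-version are assumptions to confirm against the linked REST API article.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder resource ID of the storage account to configure.
storage_account_id = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
                      "/providers/Microsoft.Storage/storageAccounts/<account>")
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (f"https://management.azure.com{storage_account_id}"
       "/providers/Microsoft.Security/defenderForStorageSettings/current")
body = {
    "properties": {
        "isEnabled": True,
        # Assumed settings shape: on-upload scanning with a monthly GB cap.
        "malwareScanning": {"onUpload": {"isEnabled": True, "capGBPerMonth": 5000}},
        "overrideSubscriptionLevelSettings": True,
    }
}
resp = requests.put(url, params={"api-version": "2022-12-01-preview"},
                    headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
```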
To enable and configure Malware Scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md).
defender-for-cloud Defender For Storage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-test.md
# Testing the Defender for Storage data security features
-After you [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md), you can test the service and run a proof of concept to familiarize yourself with its features and validate the advanced security capabilities effectively protect your storage accounts by generating real security alerts. This guide will walk you through testing various aspects of the security coverage offered by Defender for Storage.
+After you [enable Microsoft Defender for Storage](tutorial-enable-storage-plan.md), you can test the service and run a proof of concept to familiarize yourself with its features and validate the advanced security capabilities effectively protect your storage accounts by generating real security alerts. This guide will walk you through testing various aspects of the security coverage offered by Defender for Storage.
There are three main components to test:
defender-for-cloud Enable Agentless Scanning Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-agentless-scanning-vms.md
Previously updated : 06/29/2023 Last updated : 08/15/2023 # Enable agentless scanning for VMs
If you have Defender for Servers P2 already enabled and agentless scanning is tu
After you enable agentless scanning, software inventory and vulnerability information are updated automatically in Defender for Cloud.
+## Enable agentless scanning in GCP
+
+1. From Defender for Cloud's menu, select **Environment settings**.
+1. Select the relevant project or organization.
+1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**.
+
+ :::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-plan.png" alt-text="Screenshot that shows where to select the plan for GCP projects." lightbox="media/enable-agentless-scanning-vms/gcp-select-plan.png":::
+
+1. In the settings pane, turn on **Agentless scanning**.
+
+ :::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-agentless.png" alt-text="Screenshot that shows where to select agentless scanning." lightbox="media/enable-agentless-scanning-vms/gcp-select-agentless.png":::
+
+1. Select **Save and Next: Configure Access**.
+1. Copy the onboarding script.
+1. Run the onboarding script in the GCP organization/project scope (GCP portal or gcloud CLI).
+1. Select **Next: Review and generate**.
+1. Select **Update**.
+ ## Exclude machines from scanning

Agentless scanning applies to all of the eligible machines in the subscription. To prevent specific machines from being scanned, you can exclude machines from agentless scanning based on your pre-existing environment tags. When Defender for Cloud performs the continuous discovery for machines, excluded machines are skipped.
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
Title: Identify and remediate attack paths
description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment. Previously updated : 07/10/2023 Last updated : 08/10/2023 # Identify and remediate attack paths
You can check out the full list of [Attack path names and descriptions](attack-p
| Aspect | Details | |--|--|
-| Release state | GA (General Availability) |
+| Release state | GA (General Availability) for Azure, AWS <Br> Preview for GCP |
| Prerequisites | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md), or [Enable Defender for Server P1 (which includes MDVM)](defender-for-servers-introduction.md) or [Defender for Server P2 (which includes MDVM and Qualys)](defender-for-servers-introduction.md). <br> - [Enable Defender CSPM](enable-enhanced-security.md) <br> - Enable agentless container posture extension in Defender CSPM, or [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This also gives you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. | | Required plans | - Defender Cloud Security Posture Management (CSPM) enabled | | Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS, GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
## Features of the attack path overview page
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
Title: Build queries with cloud security explorer
description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment. Previously updated : 08/10/2023 Last updated : 08/16/2023 # Build queries with cloud security explorer
Defender for Cloud's contextual security capabilities assists security teams in
Use the cloud security explorer, to proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account.
-With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, and lateral movement between resources and across multiple clouds (Azure and AWS).
+With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, and lateral movement between resources and across multiple clouds (Azure, AWS, and GCP).
Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
Learn more about [the cloud security graph, attack path analysis, and the cloud
| Release state | GA (General Availability) | | Required plans | - Defender Cloud Security Posture Management (CSPM) enabled<br>- Defender for Servers P2 customers can use the explorer UI to query for keys and secrets, but must have Defender CSPM enabled to get the full value of the Explorer. | | Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds - GCP (Preview) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
## Prerequisites
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
Title: Security recommendations for multi-factor authentication description: Learn how to enforce multi-factor authentication for your Azure subscriptions using Microsoft Defender for Cloud Previously updated : 06/28/2023 Last updated : 08/14/2023
-# Manage multi-factor authentication (MFA) enforcement on your subscriptions
+# Manage multi-factor authentication (MFA) on your subscriptions
-If you're using passwords, only to authenticate your users, you're leaving an attack vector open. Users often use weak passwords or reuse them for multiple services. With [MFA](https://www.microsoft.com/security/business/identity/mfa) enabled, your accounts are more secure, and users can still authenticate to almost any application with single sign-on (SSO).
+If you're using passwords only to authenticate your users, you're leaving an attack vector open. Users often use weak passwords or reuse them for multiple services. With [MFA](https://www.microsoft.com/security/business/identity/mfa) enabled, your accounts are more secure, and users can still authenticate to almost any application with single sign-on (SSO).
-There are multiple ways to enable MFA for your Azure Active Directory (AD) users based on the licenses that your organization owns. This page provides the details for each in the context of Microsoft Defender for Cloud.
+There are multiple ways to enable MFA for your Azure Active Directory (Azure AD) users based on the licenses that your organization owns. This page provides the details for each in the context of Microsoft Defender for Cloud.
## MFA and Microsoft Defender for Cloud
The recommendations in the Enable MFA control ensure you're meeting the recommen
- Accounts with write permissions on Azure resources should be MFA enabled - Accounts with read permissions on Azure resources should be MFA enabled
-There are three ways to enable MFA and be compliant with the two recommendations in Defender for Cloud: security defaults, per-user assignment, conditional access (CA) policy.
+There are three ways to enable MFA and be compliant with the two recommendations in Defender for Cloud: security defaults, per-user assignment, and conditional access (CA) policy.
### Free option - security defaults
Customers with Microsoft 365 can use **Per-user assignment**. In this scenario,
### MFA for Azure AD Premium customers
-For an improved user experience, upgrade to Azure AD Premium P1 or P2 for **conditional access (CA) policy** options. To configure a CA policy, you'll need [Azure Active Directory (AD) tenant permissions](../active-directory/roles/permissions-reference.md).
+For an improved user experience, upgrade to Azure AD Premium P1 or P2 for **conditional access (CA) policy** options. To configure a CA policy, you need [Azure Active Directory (Azure AD) tenant permissions](../active-directory/roles/permissions-reference.md).
Your CA policy must:
Learn more in the [Azure Conditional Access documentation](../active-directory/c
## Identify accounts without multi-factor authentication (MFA) enabled
-You can view the list of user accounts without MFA enabled from either the Defender for Cloud recommendations details page, or using Azure Resource Graph.
+You can view the list of user accounts without MFA enabled from either the Defender for Cloud recommendations details page, or by using the Azure Resource Graph.
### View the accounts without MFA enabled in the Azure portal
To see which accounts don't have MFA enabled, use the following Azure Resource G
1. Open **Azure Resource Graph Explorer**.
- :::image type="content" source="./media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png" alt-text="Launching Azure Resource Graph Explorer** recommendation page" :::
+ :::image type="content" source="./media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png" alt-text="Screenshot showing how to launch the Azure Resource Graph Explorer recommendation page" lightbox="media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png":::
1. Enter the following query and select **Run query**.
- ```kusto
+ ```
securityresources
- | where type == "microsoft.security/assessments"
- | where properties.displayName contains "Accounts with owner permissions on Azure resources should be MFA enabled"
- | where properties.status.code == "Unhealthy"
+ | where type =~ "microsoft.security/assessments/subassessments"
+ | where id has "assessments/dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c" or id has "assessments/c0cb17b2-0607-48a7-b0e0-903ed22de39b" or id has "assessments/6240402e-f77c-46fa-9060-a7ce53997754"
+ | parse id with start "/assessments/" assessmentId "/subassessments/" userObjectId
+ | summarize make_list(userObjectId) by strcat(tostring(properties.displayName), " (", assessmentId, ")")
+ | project ["Recommendation Name"] = Column1 , ["Account ObjectIDs"] = list_userObjectId
```
1. The `additionalData` property reveals the list of account object IDs for accounts that don't have MFA enforced.

   > [!NOTE]
- > The accounts are shown as object IDs rather than account names to protect the privacy of the account holders.
+ > The 'Account ObjectIDs' column contains the list of account object IDs for accounts that don't have MFA enforced per recommendation.
-> [!TIP]
-> Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/defenderforcloud/assessments/get).
+ > [!TIP]
+ > Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/defenderforcloud/assessments/get).
## Next steps
defender-for-cloud Plan Multicloud Security Determine Multicloud Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-multicloud-dependencies.md
In Defender for Cloud, you enable specific plans to get Cloud Workload Platform
- [Defender for Containers](./defender-for-containers-introduction.md): Help secure your Kubernetes clusters with security recommendations and hardening, vulnerability assessments, and runtime protection. - [Defender for SQL](./defender-for-sql-usage.md): Protect SQL databases running in AWS and GCP.
-### What agent do I need?
+### What extension do I need?
-The following table summarizes agent requirements for CWPP.
+The following table summarizes extension requirements for CWPP.
-| Agent |Defender for Servers|Defender for Containers|Defender fo SQL on Machines|
+| Extension |Defender for Servers|Defender for Containers|Defender for SQL on Machines|
|::|::|::|::|
|Azure Arc Agent | ✔ | ✔ | ✔ |
-|Microsoft Defender for Endpoint extension |✔|
-|Vulnerability assessment| ✔| |
+|Microsoft Defender for Endpoint extension |✔|||
+|Vulnerability assessment| ✔| ||
+|Agentless Disk Scanning| ✔ | ✔ ||
|Log Analytics or Azure Monitor Agent (preview) extension|✔| |✔|
|Defender agent| | ✔| |
|Azure Policy for Kubernetes | | ✔| |
The following components and requirements are needed to receive full protection
- **Azure Arc agent**: AWS and GCP machines connect to Azure using Azure Arc. The Azure Arc agent connects them. - The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection.
-To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent.
+ To autoprovision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent.
- **Defender for Endpoint capabilities**: The [Microsoft Defender for Endpoint](./integration-defender-for-endpoint.md?tabs=linux) agent provides comprehensive endpoint detection and response (EDR) capabilities. - **Vulnerability assessment**: Using either the integrated [Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md), or the [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management) solution. - **Log Analytics agent/[Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines. #### Check networking requirements
-Machines must meet [network requirements](../azure-arc/servers/network-requirements.md?tabs=azure-cloud) before onboarding the agents. Auto-provisioning is enabled by default.
+Machines must meet [network requirements](../azure-arc/servers/network-requirements.md?tabs=azure-cloud) before onboarding the agents. Autoprovisioning is enabled by default.
### Defender for Containers
To receive the full benefits of Defender for SQL on your multicloud workload, yo
- **Azure Arc agent**: AWS and GCP machines connect to Azure using Azure Arc. The Azure Arc agent connects them. - The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection.
- - To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent.
+ - To autoprovision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent.
- **Log Analytics agent/[Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines - **Automatic SQL server discovery and registration**: Supports automatic discovery and registration of SQL servers
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
When you enable Defender for Cloud on an Azure subscription, the [Microsoft clou
The regulatory compliance dashboard shows the status of all the assessments within your environment for your chosen standards and regulations. As you act on the recommendations and reduce risk factors in your environment, your compliance posture improves.
+> [!TIP]
+> Compliance data from Defender for Cloud now seamlessly integrates with [Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager), allowing you to centrally assess and manage compliance across your organization's entire digital estate. When you add any standard to your compliance dashboard (including compliance standards monitoring other clouds like AWS and GCP), the resource-level compliance data is automatically surfaced in Compliance Manager for the same standard. Compliance Manager thus provides improvement actions and status across your cloud infrastructure and all other digital assets in this central tool. For more information, see [Multicloud support in Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager-multicloud).
+ In this tutorial you'll learn how to: > [!div class="checklist"]
-> * Evaluate your regulatory compliance using the regulatory compliance dashboard
-> * Check Microsoft's compliance offerings (currently in preview) for Azure, Dynamics 365 and Power Platform products
-> * Improve your compliance posture by taking action on recommendations
-> * Download PDF/CSV reports as well as certification reports of your compliance status
-> * Setup alerts on changes to your compliance status
-> * Export your compliance data as a continuous stream and as weekly snapshots
+>
+> - Evaluate your regulatory compliance using the regulatory compliance dashboard
+> - Check Microsoft's compliance offerings (currently in preview) for Azure, Dynamics 365 and Power Platform products
+> - Improve your compliance posture by taking action on recommendations
+> - Download PDF/CSV reports as well as certification reports of your compliance status
+> - Set up alerts on changes to your compliance status
+> - Export your compliance data as a continuous stream and as weekly snapshots
If you donΓÇÖt have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
Use the regulatory compliance dashboard to help focus your attention on the gaps
:::image type="content" source="./media/regulatory-compliance-dashboard/compliance-drilldown.png" alt-text="Screenshot that shows the exploration of the details of compliance with a specific standard." lightbox="media/regulatory-compliance-dashboard/compliance-drilldown.png":::
- The following list has a numbered item that matches each location in the image above, and describes what is in the image:
-- Select a compliance standard to see a list of all controls for that standard. (1)
+ The following list has a numbered item that matches each location in the image above, and describes what is in the image:
+
+- Select a compliance standard to see a list of all controls for that standard. (1)
- View the subscription(s) that the compliance standard is applied on. (2)
- Select a Control to see more details. Expand the control to view the assessments associated with the selected control. Select an assessment to view the list of resources associated and the actions to remediate compliance concerns. (3)
-- Select Control details to view Overview, Your Actions and Microsoft Actions tabs. (4)
+- Select Control details to view Overview, Your Actions and Microsoft Actions tabs. (4)
- In the Your Actions tab, you can see the automated and manual assessments associated to the control. (5)
- Automated assessments show the number of failed resources and resource types, and link you directly to the remediation experience to address those recommendations. (6)
- The manual assessments can be manually attested, and evidence can be linked to demonstrate compliance. (7)
The regulatory compliance has both automated and manual assessments that may nee
1. Select a compliance control to expand it.
-1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue.
+1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue.
1. Select a particular resource to view more details and resolve the recommendation for that resource. <br>For example, in the **Azure CIS 1.1.0** standard, select the recommendation **Disk encryption should be applied on virtual machines**.
The regulatory compliance has both automated and manual assessments that may nee
For more information about how to apply recommendations, see [Implementing security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md).
-1. After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves.
+1. After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves.
> [!NOTE] > Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment.
The regulatory compliance has automated and manual assessments that may need to
:::image type="content" source="media/release-notes/audit-reports-list-regulatory-compliance-dashboard-ga.png" alt-text="Filtering the list of available Azure Audit reports using tabs and filters.":::
- For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate.
+ For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate.
> [!NOTE] > When you download one of these certification reports, you'll be shown the following privacy notice:
- >
+ >
> _By downloading this file, you are giving consent to Microsoft to store the current user and the selected subscriptions at the time of download. This data is used in order to notify you in case of changes or updates to the downloaded audit report. This data is used by Microsoft and the audit firms that produce the certification/reports only when notification is required._ ### Check compliance offerings status
-Transparency provided by the compliance offerings (currently in preview) , allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform.
+Transparency provided by the compliance offerings (currently in preview) allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform.
**To check the compliance offerings status**:
Use continuous export data to an Azure Event Hubs or a Log Analytics workspace:
:::image type="content" source="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png" alt-text="Continuously export a weekly snapshot of regulatory compliance data." lightbox="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png"::: > [!TIP]
-> You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options. See [Assess your regulatory compliance](#assess-your-regulatory-compliance)
+> You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options. See [Assess your regulatory compliance](#assess-your-regulatory-compliance)
## Run workflow automations when there are changes to your compliance
For example, you might want Defender for Cloud to email a specific user when a c
In this tutorial, you learned about using Defender for Cloud's regulatory compliance dashboard to:

> [!div class="checklist"]
-> * View and monitor your compliance posture regarding the standards and regulations that are important to you.
-> * Improve your compliance status by resolving relevant recommendations and watching the compliance score improve.
+>
+> - View and monitor your compliance posture regarding the standards and regulations that are important to you.
+> - Improve your compliance status by resolving relevant recommendations and watching the compliance score improve.
The regulatory compliance dashboard can greatly simplify the compliance process, and significantly cut the time required for gathering compliance evidence for your Azure, hybrid, and multicloud environment.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 08/07/2023 Last updated : 08/17/2023 # What's new in Microsoft Defender for Cloud?
Updates in August include:
|Date |Update | |-|-|
+| August 17 | [Extended properties in Defender for Cloud security alerts are masked from activity logs](#extended-properties-in-defender-for-cloud-security-alerts-are-masked-from-activity-logs)
+| August 15 | [Preview release of GCP support in Defender CSPM](#preview-release-of-gcp-support-in-defender-cspm)|
| August 7 | [New security alerts in Defender for Servers Plan 2: Detecting potential attacks abusing Azure virtual machine extensions](#new-security-alerts-in-defender-for-servers-plan-2-detecting-potential-attacks-abusing-azure-virtual-machine-extensions)
+| August 1 | [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) |
+
+### Extended properties in Defender for Cloud security alerts are masked from activity logs
+
+August 17, 2023
+
+We recently changed the way security alerts and activity logs are integrated. To better protect sensitive customer information, we no longer include this information in activity logs. Instead, we mask it with asterisks. However, this information is still available through the alerts API, continuous export, and the Defender for Cloud portal.
+
+Customers who rely on activity logs to export alerts to their SIEM solutions should consider using a different solution, as it isn't the recommended method for exporting Defender for Cloud security alerts.
+
+For instructions on how to export Defender for Cloud security alerts to SIEM, SOAR and other third party applications, see [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
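+
+For teams that previously scraped activity logs, here's a minimal Python sketch that pulls alerts directly from the alerts REST API instead; the subscription ID is a placeholder and the api-version is an assumption, so confirm it against the alerts API reference.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+subscription_id = "<subscription-id>"  # placeholder
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+
+url = (f"https://management.azure.com/subscriptions/{subscription_id}"
+       "/providers/Microsoft.Security/alerts")
+resp = requests.get(url, params={"api-version": "2022-01-01"},
+                    headers={"Authorization": f"Bearer {token}"})
+resp.raise_for_status()
+for alert in resp.json().get("value", []):
+    props = alert["properties"]
+    # Extended properties stay unmasked here, unlike in the activity log.
+    print(props["alertDisplayName"], props.get("severity"))
+```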
+
+### Preview release of GCP support in Defender CSPM
+
+August 15, 2023
+
+We're announcing the preview release of the Defender CSPM contextual cloud security graph and attack path analysis with support for GCP resources. You can leverage the power of Defender CSPM for comprehensive visibility and intelligent cloud security across GCP resources.
+
+ Key features of our GCP support include:
+
+- **Attack path analysis** - Understand the potential routes attackers might take.
+- **Cloud security explorer** - Proactively identify security risks by running graph-based queries on the security graph.
+- **Agentless scanning** - Scan servers and identify secrets and vulnerabilities without installing an agent.
+- **Data-aware security posture** - Discover and remediate risks to sensitive data in Google Cloud Storage buckets.
+
+Learn more about [Defender CSPM plan options](concept-cloud-security-posture-management.md#defender-cspm-plan-options).
### New security alerts in Defender for Servers Plan 2: Detecting potential attacks abusing Azure virtual machine extensions
Azure virtual machine extensions are small applications that run post-deployment
- Resetting credentials and creating administrative users - Encrypting disks
-Here is a table of the new alerts.
+Here's a table of the new alerts.
|Alert (alert type)|Description|MITRE tactics|Severity| |-|-|-|-|
-| **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines are not equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium |
-| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July, 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low |
+| **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines aren't equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium |
+| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low |
| **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
-| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium |
+| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium |
| **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low | | **Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines (Preview)**<br>(VM_SuspiciousMultiExtensionUsage) | Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse such extensions for data collection, network traffic monitoring, and more, in your subscription. This usage is deemed suspicious as it hasn't been commonly seen before. | Reconnaissance | Medium | | **Suspicious installation of disk encryption extensions was detected on your virtual machines (Preview)**<br>(VM_DiskEncryptionSuspiciousUsage) | Suspicious installation of disk encryption extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse the disk encryption extension to deploy full disk encryptions on your virtual machines via the Azure Resource Manager in an attempt to perform ransomware activity. This activity is deemed suspicious as it hasn't been commonly seen before and due to the high number of extension installations. | Impact | Medium | | **Suspicious usage of VMAccess extension was detected on your virtual machines (Preview)**<br>(VM_VMAccessSuspiciousUsage) | Suspicious usage of VMAccess extension was detected on your virtual machines. Attackers may abuse the VMAccess extension to gain access and compromise your virtual machines with high privileges by resetting access or managing administrative users. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Persistence | Medium |
-| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
+| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
| **Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines (Preview)**<br>(VM_DSCExtensionSuspiciousUsage) | Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Impact | Low | | **Custom script extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_CustomScriptExtensionSuspiciousCmd)<br>*(This alert already exists and has been improved with more enhanced logic and detection methods.)* | Custom script extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Custom script extension to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |
- See the [extension-based alerts in Defender for Servers](alerts-reference.md#alerts-for-azure-vm-extensions).
+ See the [extension-based alerts in Defender for Servers](alerts-reference.md#alerts-for-azure-vm-extensions).
For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md).
+### Business model and pricing updates for Defender for Cloud plans
+
+August 1, 2023
+
+Microsoft Defender for Cloud has three plans that offer service layer protection:
+
+- Defender for Key Vault
+
+- Defender for Azure Resource Manager
+
+- Defender for DNS
+
+These plans are transitioning to a new business model with different pricing and packaging to address customer feedback regarding spending predictability and simplifying the overall cost structure.
+
+**Business model and pricing changes summary**:
+
+Existing customers of Defender for Key Vault, Defender for Azure Resource Manager, and Defender for DNS keep their current business model and pricing unless they actively choose to switch to the new business model and price.
+
+- **Defender for Azure Resource Manager**: This plan has a fixed price per subscription per month. Customers can switch to the new business model by selecting the Defender for Azure Resource Manager new per-subscription model.
+
+- **Defender for Key Vault**: This plan has a fixed price per vault per month with no overage charge. Customers can switch to the new business model by selecting the Defender for Key Vault new per-vault model.
+
+- **Defender for DNS**: Defender for Servers Plan 2 customers gain access to Defender for DNS value as part of Defender for Servers Plan 2 at no extra cost. Customers that have both Defender for Server Plan 2 and Defender for DNS are no longer charged for Defender for DNS. Defender for DNS is no longer available as a standalone plan.
+
+For more information on all of these plans, check out the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h).
+ ## July 2023 Updates in July include:
defender-for-cloud Subassessment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/subassessment-rest-api.md
++
+ Title: Container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
+description: Learn about container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
++ Last updated : 08/16/2023+++
+# Container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
+
+API Version: 2019-01-01-preview
+
+Get security subassessments on all your scanned resources inside a scope.
+
+## Overview
+
+You can access vulnerability assessment results programmatically for both registry and runtime recommendations using the subassessments REST API.
+
+For more information on how to get started with our REST API, see [Azure REST API reference](/rest/api/azure/). Use the following information specific to the container vulnerability assessment results powered by Microsoft Defender Vulnerability Management.
+
+## HTTP Requests
+
+### Get
+
+#### GET
+
+`https://management.azure.com/{scope}/providers/Microsoft.Security/assessments/{assessmentName}/subAssessments/{subAssessmentName}?api-version=2019-01-01-preview`
+
+#### URI Parameters
+
+| Name | In | Required | Type | Description |
+| -- | -- | -- | -- | -- |
+| assessmentName | path | True | string | The Assessment Key - Unique key for the assessment type |
+| scope | path | True | string | Scope of the query. Can be subscription (/subscriptions/0b06d9ea-afe6-4779-bd59-30e5c2d9d13f) or management group (/providers/Microsoft.Management/managementGroups/mgName). |
+| subAssessmentName | path | True | string | The Sub-Assessment Key - Unique key for the subassessment type |
+| api-version | query | True | string | API version for the operation |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | -- | -- |
+| 200 OK | [SecuritySubAssessment](/rest/api/defenderforcloud/sub-assessments/get#securitysubassessment) | OK |
+| Other Status Codes | [CloudError](/rest/api/defenderforcloud/sub-assessments/get#clouderror) | Error response describing why the operation failed. |
+
+### List
+
+#### GET
+
+`https://management.azure.com/{scope}/providers/Microsoft.Security/assessments/{assessmentName}/subAssessments?api-version=2019-01-01-preview`
+
+#### URI parameters
+
+| **Name** | **In** | **Required** | **Type** | **Description** |
+| -- | -- | -- | -- | -- |
+| **assessmentName** | path | True | string | The Assessment Key - Unique key for the assessment type |
+| **scope** | path | True | string | Scope of the query. The scope for AzureContainerVulnerability is the registry itself. |
+| **api-version** | query | True | string | API version for the operation |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | -- | -- |
+| 200 OK | [SecuritySubAssessmentList](/rest/api/defenderforcloud/sub-assessments/list#securitysubassessmentlist) | OK |
+| Other Status Codes | [CloudError](/rest/api/defenderforcloud/sub-assessments/list#clouderror) | Error response describing why the operation failed. |
+
+## Security
+
+### azure_auth
+
+Azure Active Directory OAuth2 Flow
+
+Type: oauth2
+Flow: implicit
+Authorization URL: `https://login.microsoftonline.com/common/oauth2/authorize`
+
+Scopes
+
+| Name | Description |
+| -- | -- |
+| user_impersonation | impersonate your user account |
+
+### Example
+
+### HTTP
+
+#### GET
+
+`https://management.azure.com/subscriptions/6ebb89c4-0e91-4f62-888f-c9518e662293/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myRegistry/providers/Microsoft.Security/assessments/cf02effd-8e33-4b84-a012-1e61cf1a5638/subAssessments?api-version=2019-01-01-preview`
+
+#### Sample Response
+
+```json
+{
+ "value": [
+ {
+ "type": "Microsoft.Security/assessments/subAssessments",
+ "id": "/subscriptions/3905431d-c062-4c17-8fd9-c51f89f334c4/resourceGroups/PytorchEnterprise/providers/Microsoft.ContainerRegistry/registries/ptebic/providers/Microsoft.Security/assessments/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5/subassessments/3f069764-2777-3731-9698-c87f23569a1d",
+ "name": "3f069764-2777-3731-9698-c87f23569a1d",
+ "properties": {
+ "id": "CVE-2021-39537",
+ "displayName": "CVE-2021-39537",
+ "status": {
+ "code": "NotApplicable",
+ "severity": "High",
+ "cause": "Exempt",
+ "description": "Disabled parent assessment"
+ },
+ "remediation": "Create new image with updated package libncursesw5 with version 6.2-0ubuntu2.1 or higher.",
+ "description": "This vulnerability affects the following vendors: Gnu, Apple, Red_Hat, Ubuntu, Debian, Suse, Amazon, Microsoft, Alpine. To view more details about this vulnerability please visit the vendor website.",
+ "timeGenerated": "2023-08-08T08:14:13.742742Z",
+ "resourceDetails": {
+ "source": "Azure",
+ "id": "/repositories/public/azureml/aifx/stable-ubuntu2004-cu116-py39-torch1121/images/sha256:7f107db187ff32acfbc47eaa262b44d13d725f14dd08669a726a81fba87a12d6"
+ },
+ "additionalData": {
+ "assessedResourceType": "AzureContainerRegistryVulnerability",
+ "artifactDetails": {
+ "repositoryName": "public/azureml/aifx/stable-ubuntu2004-cu116-py39-torch1121",
+ "registryHost": "ptebic.azurecr.io",
+ "digest": "sha256:7f107db187ff32acfbc47eaa262b44d13d725f14dd08669a726a81fba87a12d6",
+ "tags": [
+ "biweekly.202305.2"
+ ],
+ "artifactType": "ContainerImage",
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "lastPushedToRegistryUTC": "2023-05-15T16:00:40.2938142Z"
+ },
+ "softwareDetails": {
+ "osDetails": {
+ "osPlatform": "linux",
+ "osVersion": "ubuntu_linux_20.04"
+ },
+ "packageName": "libncursesw5",
+ "category": "OS",
+ "fixReference": {
+ "id": "USN-6099-1",
+ "url": "https://ubuntu.com/security/notices/USN-6099-1",
+ "description": "USN-6099-1: ncurses vulnerabilities 2023 May 23",
+ "releaseDate": "2023-05-23T00:00:00+00:00"
+ },
+ "vendor": "ubuntu",
+ "version": "6.2-0ubuntu2",
+ "evidence": [
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^libncursesw5:.* -e .*:libncursesw5: | cut -f 1 -d ':' | xargs dpkg-query -s",
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^libncursesw5:.* -e .*:libncursesw5: | cut -f 1 -d ':' | xargs dpkg-query -s"
+ ],
+ "language": "",
+ "fixedVersion": "6.2-0ubuntu2.1",
+ "fixStatus": "FixAvailable"
+ },
+ "vulnerabilityDetails": {
+ "cveId": "CVE-2021-39537",
+ "references": [
+ {
+ "title": "CVE-2021-39537",
+ "link": "https://nvd.nist.gov/vuln/detail/CVE-2021-39537"
+ }
+ ],
+ "cvss": {
+ "2.0": null,
+ "3.0": {
+ "base": 7.8,
+ "cvssVectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H/E:P/RL:U/RC:R"
+ }
+ },
+ "workarounds": [],
+ "publishedDate": "2020-08-04T00:00:00",
+ "lastModifiedDate": "2023-07-07T00:00:00",
+ "severity": "High",
+ "cpe": {
+ "uri": "cpe:2.3:a:ubuntu:libncursesw5:*:*:*:*:*:ubuntu_linux_20.04:*:*",
+ "part": "Applications",
+ "vendor": "ubuntu",
+ "product": "libncursesw5",
+ "version": "*",
+ "update": "*",
+ "edition": "*",
+ "language": "*",
+ "softwareEdition": "*",
+ "targetSoftware": "ubuntu_linux_20.04",
+ "targetHardware": "*",
+ "other": "*"
+ },
+ "weaknesses": {
+ "cwe": [
+ {
+ "id": "CWE-787"
+ }
+ ]
+ },
+ "exploitabilityAssessment": {
+ "exploitStepsVerified": false,
+ "exploitStepsPublished": false,
+ "isInExploitKit": false,
+ "types": [],
+ "exploitUris": []
+ }
+ },
+ "cvssV30Score": 7.8
+ }
+ }
+ }
+ ]
+}
+```
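+
+For triage, the interesting fields can be read straight out of this payload. A minimal sketch, assuming the response above has already been parsed into a dict named `response` (a hypothetical variable):
+
+```python
+# Summarize each subassessment: CVE ID, severity, package, and fix availability.
+for sub in response["value"]:
+    props = sub["properties"]
+    software = props["additionalData"]["softwareDetails"]
+    print(
+        props["id"],                       # e.g. CVE-2021-39537
+        props["status"]["severity"],       # e.g. High
+        software["packageName"],           # e.g. libncursesw5
+        software["fixStatus"],             # e.g. FixAvailable
+        software.get("fixedVersion", ""),  # e.g. 6.2-0ubuntu2.1
+    )
+```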
+
+## Definitions
+
+| Name | Description |
+| | |
+| AzureResourceDetails | Details of the Azure resource that was assessed |
+| CloudError | Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This definition also follows the OData error response format.) |
+| CloudErrorBody | The error detail |
+| AzureContainerVulnerability | More context fields for container registry vulnerability assessment |
+| CVE | CVE Details |
+| CVSS | CVSS Details |
+| ErrorAdditionalInfo | The resource management error additional info. |
+| SecuritySubAssessment | Security subassessment on a resource |
+| SecuritySubAssessmentList | List of security subassessments |
+| ArtifactDetails | Details for the affected container image |
+| SoftwareDetails | Details for the affected software package |
+| FixReference | Details on the fix, if available |
+| OS Details | Details on the OS |
+| VulnerabilityDetails | Details on the detected vulnerability |
+| CPE | Common Platform Enumeration |
+| Cwe | Common weakness enumeration |
+| VulnerabilityReference | Reference links to vulnerability |
+| ExploitabilityAssessment | Reference links to an example exploit |
+
+### AzureContainerRegistryVulnerability (MDVM)
+
+Additional context fields for Azure container registry vulnerability assessment
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| assessedResourceType | string: AzureContainerRegistryVulnerability | Subassessment resource type |
+| cvssV30Score | Numeric | CVSS V3 Score |
+| vulnerabilityDetails | VulnerabilityDetails | |
+| artifactDetails | ArtifactDetails | |
+| softwareDetails | SoftwareDetails | |
+
+### ArtifactDetails
+
+Context details for the affected container image
+
+| **Name** | **Type** | **Description** |
+| -- | -- | |
+| repositoryName | String | Repository name |
+| registryHost | String | Registry host |
+| lastPushedToRegistryUTC | Timestamp | UTC timestamp for the last push date |
+| artifactType | String: ContainerImage | |
+| mediaType | String | Layer media type |
+| digest | String | Digest of the vulnerable image |
+| tags | String[] | Tags of the vulnerable image |
+
+### SoftwareDetails
+
+Details for the affected software package
+
+| **Name** | **Type** | **Description** |
+| | | |
+| fixedVersion | String | Fixed Version |
+| category | String | Vulnerability category: OS or Language |
+| osDetails | OsDetails | |
+| language | String | Language of the affected package (for example, Python, .NET); can be empty |
+| version | String | |
+| vendor | String | |
+| packageName | String | |
+| fixStatus | String | Unknown, FixAvailable, NoFixAvailable, Scheduled, WontFix |
+| evidence | String[] | Evidence for the package |
+| fixReference | FixReference | |
+
+### FixReference
+
+Details on the fix, if available
+
+| **Name** | **Type** | **Description** |
+| -- | | |
+| ID | String | Fix ID |
+| Description | String | Fix Description |
+| releaseDate | Timestamp | Fix timestamp |
+| url | String | URL to fix notification |
+
+### OS Details
+
+Details on the OS
+
+| **Name** | **Type** | **Description** |
+| - | -- | -- |
+| osPlatform | String | For example: Linux, Windows |
+| osName | String | For example: Ubuntu |
+| osVersion | String | |
+
+### VulnerabilityDetails
+
+Details on the detected vulnerability
+
+| **Name** | **Type** | **Description** |
+| | -- | - |
+| severity | Severity | The subassessment severity level |
+| LastModifiedDate | Timestamp | |
+| publishedDate | Timestamp | Published date |
+| ExploitabilityAssessment | ExploitabilityAssessment | |
+| CVSS | Dictionary <string, CVSS> | Dictionary from cvss version to cvss details object |
+| Workarounds | Workaround[] | Published workarounds for vulnerability |
+| References | VulnerabilityReference | |
+| Weaknesses | Weakness[] | |
+| cveId | String | CVE ID |
+| Cpe | CPE | |
+
+### CPE (Common Platform Enumeration)
+
+| **Name** | **Type** | **Description** |
+| | -- | |
+| language | String | Language tag |
+| softwareEdition | String | |
+| Version | String | Package version |
+| targetSoftware | String | Target Software |
+| vendor | String | Vendor |
+| product | String | Product |
+| edition | String | |
+| update | String | |
+| other | String | |
+| part | String | Applications, Hardware, OperatingSystems |
+| uri | String | CPE 2.3 formatted uri |
+
+### Weakness
+
+| **Name** | **Type** | **Description** |
+| -- | -- | |
+| Cwe | Cwe[] | |
+
+### Cwe (Common weakness enumeration)
+
+CWE details
+
+| **Name** | **Type** | **Description** |
+| -- | -- | |
+| ID | String | CWE ID |
+
+### VulnerabilityReference
+
+Reference links to vulnerability
+
+| **Name** | **Type** | **Description** |
+| -- | -- | - |
+| link | String | Reference url |
+| title | String | Reference title |
+
+### ExploitabilityAssessment
+
+Reference links to an example exploit
+
+| **Name** | **Type** | **Description** |
+| | -- | |
+| exploitUris | String[] | |
+| exploitStepsPublished | Boolean | Whether the exploit steps have been published |
+| exploitStepsVerified | Boolean | Whether the exploit steps have been verified |
+| isInExploitKit | Boolean | Whether the exploit is part of an exploit kit |
+| types | String[] | Exploit types, for example: NotAvailable, Dos, Local, Remote, WebApps, PrivilegeEscalation |
+
+### AzureResourceDetails
+
+Details of the Azure resource that was assessed
+
+| **Name** | **Type** | **Description** |
+| -- | -- | |
+| ID | string | Azure resource ID of the assessed resource |
+| source | string: Azure | The platform where the assessed resource resides |
+
+### CloudError
+
+Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This response also follows the OData error response format.)
+
+| **Name** | **Type** | **Description** |
+| -- | | -- |
+| error.additionalInfo | [ErrorAdditionalInfo](/rest/api/defenderforcloud/sub-assessments/list#erroradditionalinfo)[] | The error additional info. |
+| error.code | string | The error code. |
+| error.details | [CloudErrorBody](/rest/api/defenderforcloud/sub-assessments/list?tabs=HTTP#clouderrorbody)[] | The error details. |
+| error.message | string | The error message. |
+| error.target | string | The error target. |
+
+### CloudErrorBody
+
+The error detail.
+
+| **Name** | **Type** | **Description** |
+| -- | | -- |
+| additionalInfo | [ErrorAdditionalInfo](/rest/api/defenderforcloud/sub-assessments/list#erroradditionalinfo)[] | The error additional info. |
+| code | string | The error code. |
+| details | [CloudErrorBody](/rest/api/defenderforcloud/sub-assessments/list#clouderrorbody)[] | The error details. |
+| message | string | The error message. |
+| target | string | The error target. |
+
+### ErrorAdditionalInfo
+
+The resource management error additional info.
+
+| **Name** | **Type** | **Description** |
+| -- | -- | - |
+| info | object | The additional info. |
+| type | string | The additional info type. |
+
+### SecuritySubAssessment
+
+Security subassessment on a resource
+
+| **Name** | **Type** | **Description** |
+| -- | | |
+| ID | string | Resource ID |
+| name | string | Resource name |
+| properties.additionalData | AdditionalData: AzureContainerRegistryVulnerability | Details of the subassessment |
+| properties.category | string | Category of the subassessment |
+| properties.description | string | Human readable description of the assessment status |
+| properties.displayName | string | User friendly display name of the subassessment |
+| properties.id | string | Vulnerability ID |
+| properties.impact | string | Description of the impact of this subassessment |
+| properties.remediation | string | Information on how to remediate this subassessment |
+| properties.resourceDetails | ResourceDetails: [AzureResourceDetails](/rest/api/defenderforcloud/sub-assessments/list#azureresourcedetails) | Details of the resource that was assessed |
+| properties.status | [SubAssessmentStatus](/rest/api/defenderforcloud/sub-assessments/list#subassessmentstatus) | Status of the subassessment |
+| properties.timeGenerated | string | The date and time the subassessment was generated |
+| type | string | Resource type |
+
+### SecuritySubAssessmentList
+
+List of security subassessments
+
+| **Name** | **Type** | **Description** |
+| -- | | - |
+| nextLink | string | The URI to fetch the next page. |
+| value | [SecuritySubAssessment](/rest/api/defenderforcloud/sub-assessments/list?tabs=HTTP#securitysubassessment)[] | Security subassessment on a resource |
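+
+Because list results are paged through `nextLink`, callers should follow it until it's absent. A minimal sketch, assuming the same `requests` setup and bearer-token headers as in the earlier example:
+
+```python
+import requests
+
+def list_all_subassessments(url, headers):
+    """Collect subassessments across pages by following nextLink."""
+    items = []
+    params = {"api-version": "2019-01-01-preview"}
+    while url:
+        page = requests.get(url, headers=headers, params=params).json()
+        items.extend(page.get("value", []))
+        url = page.get("nextLink")  # absolute URI of the next page, or None
+        params = None               # nextLink already includes the api-version
+    return items
+```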
defender-for-cloud Support Matrix Cloud Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-cloud-environment.md
In the support table, **NA** indicates that the feature isn't available.
[Defender CSPM](concept-cloud-security-posture-management.md)| GA | NA | NA [Defender for APIs](defender-for-apis-introduction.md). [Review support preview regions](defender-for-apis-prepare.md#cloud-and-region-support). | Preview | NA | NA [Defender for App Service](defender-for-app-service-introduction.md) | GA | NA | NA
-[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Preview | NA | NA
+[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | GA | NA | NA
[Defender for Azure SQL database servers](defender-for-sql-introduction.md) | GA | GA | GA<br/><br/>A subset of alerts/vulnerability assessments is available.<br/>Behavioral threat protection isn't available. [Defender for Containers](defender-for-containers-introduction.md)<br/>[Review detailed feature support](support-matrix-defender-for-containers.md) | GA | GA | GA [Defender for DevOps](defender-for-devops-introduction.md) |Preview | NA | NA
defender-for-cloud Support Matrix Defender For Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-storage.md
description: Learn about the permissions required to enable Defender for Storage
Previously updated : 08/14/2023 Last updated : 08/21/2023 # Required permissions for enabling Defender for Storage and its features
-This article lists the permissions required to [enable Defender for Storage](../storage/common/azure-defender-storage-configure.md) and its features.
+This article lists the permissions required to [enable Defender for Storage](tutorial-enable-storage-plan.md) and its features.
Microsoft Defender for Storage is an Azure-native layer of security intelligence that detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption.
defender-for-cloud Tutorial Enable Databases Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-databases-plan.md
Database protection includes:
- [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md) - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)
-Defender for Databases protects four database protection plans at their own cost. You can learn more about Defender for Clouds pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+Defender for Databases comprises four database protection plans, each priced separately. You can learn more about Defender for Cloud's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Prerequisites
Defender for Databases protects four database protection plans at their own cost
When you enable database protection, you enable all four of the Defender plans and protect all of the supported databases on your subscription.
-**To enable Defender for App Service on your subscription**:
+**To enable Defender for Databases on your subscription**:
1. Sign in to the [Azure portal](https://portal.azure.com).
defender-for-cloud Tutorial Enable Storage Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-storage-plan.md
Title: Protect your storage accounts with the Microsoft Defender for Storage plan description: Learn how to enable the Defender for Storage on your Azure subscription for Microsoft Defender for Cloud. Previously updated : 08/01/2023 Last updated : 08/21/2023 # Deploy Microsoft Defender for Storage
To enable and configure Microsoft Defender for Storage and ensure maximum protec
> [!TIP] > The Malware Scanning feature has advanced configurations to help security teams support different workflows and requirements. -- [Override subscription-level settings to configure specific storage accounts](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#override-defender-for-storage-subscription-level-settings) with custom configurations that differ from the settings configured at the subscription level.
+- [Override subscription-level settings to configure specific storage accounts](advanced-configurations-for-malware-scanning.md#override-defender-for-storage-subscription-level-settings) with custom configurations that differ from the settings configured at the subscription level.
-There are several ways to enable and configure Defender for Storage: [Azure built-in policy](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-at-scale-with-an-azure-built-in-policy) (recommended method), programmatically using Infrastructure as Code templates, including [Bicep](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#bicep-template) and [ARM template](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#arm-template), using the [Azure portal](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#azure-portal), or directly with [REST API](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-with-rest-api).
+There are several ways to enable and configure Defender for Storage: [Azure built-in policy](defender-for-storage-policy-enablement.md) (recommended method), programmatically using Infrastructure as Code templates, including [Bicep](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#bicep-template) and [ARM template](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#azure-resource-manager-template), using the [Azure portal](defender-for-storage-azure-portal-enablement.md?tabs=enable-subscription), or directly with [REST API](defender-for-storage-rest-api-enablement.md?tabs=enable-subscription).
Enabling Defender for Storage via a policy is recommended because it facilitates enablement at scale and ensures that a consistent security policy is applied across all existing and future storage accounts within the defined scope (such as entire management groups). This keeps the storage accounts protected with Defender for Storage according to the organization's defined configuration.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 08/14/2023 Last updated : 08/17/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023| | [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | August 2023 | | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | August 2023 |
-| [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | August 2023 |
| [Update naming format of Azure Center for Internet Security standards in regulatory compliance](#update-naming-format-of-azure-center-for-internet-security-standards-in-regulatory-compliance) | August 2023 | | [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | August 2023 | | [Deprecate and replace recommendations App Service Client Certificates](#deprecate-and-replace-recommendations-app-service-client-certificates) | August 2023 | | [Classic connectors for multicloud will be retired](#classic-connectors-for-multicloud-will-be-retired) | September 2023 |
+| [Replacing secret scanning recommendation results in Defender for DevOps from CredScan with GitHub Advanced Security for Azure DevOps powered secret scanning](#replacing-secret-scanning-recommendation-results-in-defender-for-devops-from-credscan-with-github-advanced-security-for-azure-devops-powered-secret-scanning) | September 2023 |
| [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | September 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | August 2024 |
+### Replacing secret scanning recommendation results in Defender for DevOps from CredScan with GitHub Advanced Security for Azure DevOps powered secret scanning
+
+**Estimated date for change: September 2023**
+
+Currently, the recommendations for secret scanning in Azure DevOps repositories by Defender for DevOps are based on the results of CredScan, which is manually run using the Microsoft Security DevOps Extension. However, this mechanism of running secret scanning is being deprecated in September 2023. Instead, you can see secret scanning results generated by GitHub Advanced Security for Azure DevOps (GHAzDO).
+
+As GHAzDO enters Public Preview, we're working towards unifying the secret scanning experience across both GitHub Advanced Security and GHAzDO. This unification enables you to receive detections across all branches and git history, plus secret leak protection via push protection for your repositories. This process can all be done with a single button press, without requiring any pipeline runs.
+
+For more information about GHAzDO Secret Scanning, see [Set up secret scanning](/azure/devops/repos/security/configure-github-advanced-security-features#set-up-secret-scanning).
+ ### Classic connectors for multicloud will be retired **Estimated date for change: September 15, 2023**
Customers that rely on the `resourceID` to query DevOps recommendation data will
Queries will need to be updated to include both the old and new `resourceID` to show both, for example, total over time.
-Additionally, customers that have created custom queries using the DevOps workbook will need to update the assessment keys for the impacted DevOps security recommendations.
+Additionally, customers that have created custom queries using the DevOps workbook will need to update the assessment keys for the impacted DevOps security recommendations. The template DevOps workbook is planned to be updated to reflect the new recommendations, although during the actual migration, customers may experience some errors with the workbook.
-The recommendations page's experience will have minimal impact and deprecated assessments may continue to show for a maximum of 14 days if new scan results aren't submitted.
+The experience on the recommendations page will be impacted and require customers to query under "All recommendations" to view the new DevOps recommendations. For Azure DevOps, deprecated assessments may continue to show for a maximum of 14 days if new pipelines are not run. Refer to [Defender for DevOps Common questions](/azure/defender-for-cloud/faq-defender-for-devops#why-don-t-i-see-recommendations-for-findings-) for details.
### DevOps Resource Deduplication for Defender for DevOps
If you don't have an instance of a DevOps organization onboarded more than once
Customers will have until July 31, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps. For example, if Organization Contoso exists in both connectorA and connectorB, and connectorB was created after connectorA, then connectorA will be removed from Defender for DevOps.
-### Business model and pricing updates for Defender for Cloud plans
-
-**Estimated date for change: August 2023**
-
-Microsoft Defender for Cloud has three plans that offer service layer protection:
--- Defender for Key Vault--- Defender for Azure Resource Manager--- Defender for DNS-
-These plans are transitioning to a new business model with different pricing and packaging to address customer feedback regarding spending predictability and simplifying the overall cost structure.
-
-**Business model and pricing changes summary**:
-
-Existing customers of Defender for Key-Vault, Defender for Azure Resource Manager, and Defender for DNS will keep their current business model and pricing unless they actively choose to switch to the new business model and price.
--- **Defender for Azure Resource Manager**: This plan will have a fixed price per subscription per month. Customers will have the option to switch to the new business model by selecting the Defender for Azure Resource Manager new per-subscription model.-
-Existing customers of Defender for Key-Vault, Defender for Azure Resource Manager, and Defender for DNS will keep their current business model and pricing unless they actively choose to switch to the new business model and price.
--- **Defender for Azure Resource Manager**: This plan will have a fixed price per subscription per month. Customers will have the option to switch to the new business model by selecting the Defender for Azure Resource Manager new per-subscription model.--- **Defender for Key Vault**: This plan will have a fixed price per vault at per month with no overage charge. Customers will have the option to switch to the new business model by selecting the Defender for Key Vault new per-vault model--- **Defender for DNS**: Defender for Servers Plan 2 customers will gain access to Defender for DNS value as part of Defender for Servers Plan 2 at no extra cost. Customers that have both Defender for Server Plan 2 and Defender for DNS will no longer be charged for Defender for DNS. Defender for DNS will no longer be available as a standalone plan.-
-For more information on all of these plans, check out the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h)
- ### Update naming format of Azure Center for Internet Security standards in regulatory compliance **Estimated date for change: August 2023**
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
Use the following commands to restore data on your OT network sensor using the m
|User |Command |Full command syntax | |||| |**support** | `system restore` | No attributes |
-|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | ` cyberx-xsense-system-restore` | No attributes |
+|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | `cyberx-xsense-system-restore` | `-f <filename>` |
For example, for the *support* user:
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: OT monitoring software versions - Microsoft Defender for IoT description: This article lists Microsoft Defender for IoT on-premises OT monitoring software versions, including release and support dates and highlights for new features. Previously updated : 07/03/2023 Last updated : 08/09/2023 # OT monitoring software versions
This version includes the following updates and enhancements:
- [UI enhancements for downloading PCAP files from the sensor](how-to-view-alerts.md#access-alert-pcap-data) - [*cyberx* and *cyberx_host* users aren't enabled by default](roles-on-premises.md#default-privileged-on-premises-users)
+> [!NOTE]
+> Due to internal improvements to the OT sensor's device inventory, column edits made to your device inventory aren't retained after updating to version 23.1.2. If you'd previously edited the columns shown in your device inventory, you'll need to make those same edits again after updating your sensor.
+>
+ ## Versions 22.3.x ### 22.3.10
deployment-environments Tutorial Deploy Environments In Cicd Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/tutorial-deploy-environments-in-cicd-github.md
You use a workflow that features three branches: main, dev, and test.
This workflow is a small example for the purposes of this tutorial. Real world workflows may be more complex.
+Before beginning this tutorial, you can familiarize yourself with Deployment Environments resources and concepts by reviewing [Key concepts for Azure Deployment Environments](/azure/deployment-environments/concept-environments-key-concepts).
+ In this tutorial, you learn how to: > [!div class="checklist"]
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
Last updated 02/20/2020 -
- - seo-lt-2019
- - ignite-2022
+ # Troubleshoot common Azure Database Migration Service issues and errors
event-grid Authenticate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-active-directory.md
Title: Authenticate Event Grid publishing clients using Azure Active Directory
description: This article describes how to authenticate Azure Event Grid publishing client using Azure Active Directory. Previously updated : 01/05/2022 Last updated : 08/17/2023 # Authentication and authorization with Azure Active Directory
With RBAC privileges taken care of, you can now [build your client application t
Use [Event Grid's data plane SDK](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) to publish events to Event Grid. Event Grid's SDK supports all authentication methods, including Azure AD authentication.
+Here's the sample code that publishes events to Event Grid using the .NET SDK. You can get the topic endpoint on the **Overview** page for your Event Grid topic in the Azure portal. It's in the format: `https://<TOPIC-NAME>.<REGION>-1.eventgrid.azure.net/api/events`.
+
+```csharp
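+// Create a credential backed by the managed identity of the Azure host resource.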
+ManagedIdentityCredential managedIdentityCredential = new ManagedIdentityCredential();
+EventGridPublisherClient client = new EventGridPublisherClient( new Uri("<TOPIC ENDPOINT>"), managedIdentityCredential);
++
+EventGridEvent egEvent = new EventGridEvent(
+ "ExampleEventSubject",
+ "Example.EventType",
+ "1.0",
+ "This is the event data");
+
+// Send the event
+await client.SendEventAsync(egEvent);
+```
+ ### Prerequisites Following are the prerequisites to authenticate to Event Grid.
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/consume-private-endpoints.md
Title: Deliver events using private link service description: This article describes how to work around the limitation of not able to deliver events using private link service. Previously updated : 03/01/2023 Last updated : 08/16/2023 # Deliver events using private link service
To deliver events to Storage queues using managed identity, follow these steps:
1. [Add the identity to the **Storage Queue Data Message Sender**](../storage/blobs/assign-azure-role-data-access.md) role on Azure Storage queue. 1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Storage queue as an endpoint to use the system-assigned or user-assigned managed identity.
-> [!NOTE]
-> - If there's no firewall or virtual network rules configured for the Azure Storage account, you can use both user-assigned and system-assigned identities to deliver events to the Azure Storage account.
-> - If a firewall or virtual network rule is configured for the Azure Storage account, you can use only the system-assigned managed identity if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the storage account. You can't use user-assigned managed identity whether this option is enabled or not.
+## Firewall and virtual network rules
+If no firewall or virtual network rules are configured for the destination Storage account, Event Hubs namespace, or Service Bus namespace, you can use both user-assigned and system-assigned identities to deliver events.
+
+If a firewall or virtual network rule is configured for the destination Storage account, Event Hubs namespace, or Service Bus namespace, you can use only the system-assigned managed identity if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the destinations. You can't use user-assigned managed identity whether this option is enabled or not.
## Next steps For more information about delivering events using a managed identity, see [Event delivery using a managed identity](managed-service-identity.md).
event-grid Mqtt Client Authorization Use Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-authorization-use-rbac.md
- Title: RBAC authorization for clients with Azure AD identity
-description: Describes RBAC roles to authorize clients with Azure AD identity to publish or subscribe MQTT messages
- Previously updated : 8/11/2023----
-# Authorizing access to publish or subscribe to MQTT messages in Event Grid namespace
-You can use Azure role-based access control (Azure RBAC) to enable MQTT clients, with Azure Active Directory identity, to publish or subscribe access to specific topic spaces.
-
-## Prerequisites
-- You need an Event Grid namespace with MQTT enabled. [Learn about creating Event Grid namespace](/azure/event-grid/create-view-manage-namespaces#create-a-namespace)-- Review the process to [create a custom role](/azure/role-based-access-control/custom-roles-portal)-
-## Operation types
-You can use following two data actions to provide publish or subscribe permissions to clients with Azure AD identities on specific topic spaces.
-
-**Topic spaces publish** data action
-Microsoft.EventGrid/topicSpaces/publish/action
-
-**Topic spaces subscribe** data action
-Microsoft.EventGrid/topicSpaces/subscribe/action
-
-> [!NOTE]
-> Currently, we recommend using custom roles with the actions provided.
-
-## Custom roles
-
-You can create custom roles using the publish and subscribe actions.
-
-The following are sample role definitions that allow you to publish and subscribe to MQTT messages. These custom roles give permissions at topic space scope. You can also create roles to provide permissions at subscription, resource group scope.
-
-**EventGridMQTTPublisherRole.json**: MQTT messages publish operation.
-
-```json
-{
- "roleName": "Event Grid namespace MQTT publisher",
- "description": "Event Grid namespace MQTT message publisher role",
- "assignableScopes": [
- "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
- ],
- "permissions": [
- {
- "actions": [],
- "notActions": [],
- "dataActions": [
- "Microsoft.EventGrid/topicSpaces/publish/action"
- ],
- "notDataActions": []
- }
- ]
-}
-```
-
-**EventGridMQTTSubscriberRole.json**: MQTT messages subscribe operation.
-
-```json
-{
- "roleName": "Event Grid namespace MQTT subscriber",
- "description": "Event Grid namespace MQTT message subscriber role",
- "assignableScopes": [
- "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
- ]
- "permissions": [
- {
- "actions": [],
- "notActions": [],
- "dataActions": [
- "Microsoft.EventGrid/topicSpaces/subscribe/action"
- ],
- "notDataActions": []
- }
- ]
-}
-```
-
-## Create custom roles in Event Grid namespace
-1. Navigate to topic spaces page in Event Grid namespace
-1. Select the topic space for which the custom RBAC role needs to be created
-1. Navigate to the Access control (IAM) page within the topic space
-1. In the Roles tab, right select any of the roles to clone a new custom role. Provide the custom role name.
-1. Switch the Baseline permissions to **Start from scratch**
-1. On the Permissions tab, select **Add permissions**
-1. In the selection page, find and select Microsoft Event Grid
- :::image type="content" source="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions.png" lightbox="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions.png" alt-text="Screenshot showing the Microsoft Event Grid option to find the permissions.":::
-1. Navigate to Data Actions
-1. Select **Topic spaces publish** data action and select **Add**
- :::image type="content" source="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions-data-actions.png" lightbox="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions-data-actions.png" alt-text="Screenshot showing the data action selection.":::
-1. Select Next to see the topic space in the Assignable scopes tab. You can add other assignable scopes if needed.
-1. Select **Create** in Review + create tab to create the custom role.
-1. Once the custom role is created, you can assign the role to an identity to provide the publish permission on the topic space. You can learn how to assign roles [here](/azure/role-based-access-control/role-assignments-portal).
-
-> [!NOTE]
-> You can follow similar steps to create and assign a custom Event Grid MQTT subscriber permission to a topic space.
-
-## Next steps
-See [Publish and subscribe to MQTT message using Event Grid](mqtt-publish-and-subscribe-portal.md)
event-grid Mqtt Client Azure Ad Token And Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-azure-ad-token-and-rbac.md
+
+ Title: JWT authentication and RBAC authorization for clients with Azure AD identity
+description: Describes JWT authentication and RBAC roles to authorize clients with Azure AD identity to publish or subscribe MQTT messages
+ Last updated : 8/11/2023++++
+# Authenticating and authorizing access to publish or subscribe to MQTT messages
+You can authenticate MQTT clients with Azure AD JWT tokens to connect to an Event Grid namespace. You can use Azure role-based access control (Azure RBAC) to grant MQTT clients with Azure Active Directory identities publish or subscribe access to specific topic spaces.
+
+> [!IMPORTANT]
+> This feature is supported only when using MQTT v5.
+
+## Prerequisites
+- You need an Event Grid namespace with MQTT enabled. Learn about [creating Event Grid namespace](/azure/event-grid/create-view-manage-namespaces#create-a-namespace)
+- Review the process to [create a custom role](/azure/role-based-access-control/custom-roles-portal)
++
+## Authentication using Azure AD JWT
+You can use the MQTT v5 CONNECT packet to provide the Azure AD JWT token to authenticate your client, and you can use the MQTT v5 AUTH packet to refresh the token.
+
+In the CONNECT packet, you can provide the required values in the following fields:
+
+|Field | Value |
+|||
+|Authentication Method | OAUTH2-JWT |
+|Authentication Data | JWT token |
+
+In the AUTH packet, you can provide the required values in the following fields:
+
+|Field | Value |
+|||
+| Authentication Method | OAUTH2-JWT |
+| Authentication Data | JWT token |
+| Authentication Reason Code | 25 |
+
+An Authentication Reason Code of 25 signifies reauthentication.
+
+> [!NOTE]
+> Audience: The "aud" claim must be set to "https://eventgrid.azure.net/".
+
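+As an illustration, a minimal sketch of supplying these CONNECT fields with the paho-mqtt Python client (assuming paho-mqtt 1.x and `azure-identity`; the client name and namespace MQTT hostname are placeholders, not values from this article). Depending on the namespace configuration, additional CONNECT fields such as the username may also be required.
+
+```python
+import paho.mqtt.client as mqtt
+from paho.mqtt.packettypes import PacketTypes
+from paho.mqtt.properties import Properties
+from azure.identity import DefaultAzureCredential
+
+# Acquire an Azure AD JWT whose "aud" claim is https://eventgrid.azure.net/.
+token = DefaultAzureCredential().get_token("https://eventgrid.azure.net/.default").token
+
+# MQTT v5 CONNECT properties carrying the Authentication Method and Data fields.
+connect_props = Properties(PacketTypes.CONNECT)
+connect_props.AuthenticationMethod = "OAUTH2-JWT"
+connect_props.AuthenticationData = token.encode()
+
+client = mqtt.Client(client_id="<CLIENT-NAME>", protocol=mqtt.MQTTv5)
+client.tls_set()  # Event Grid's MQTT broker requires TLS (port 8883)
+client.connect("<NAMESPACE-MQTT-HOSTNAME>", 8883, properties=connect_props)
+```
+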
+## Authorization to grant access permissions
+A client using Azure AD based JWT authentication needs to be authorized to communicate with the Event Grid namespace. You can create custom roles to enable the client to communicate with Event Grid instances in your resource group, and then assign the roles to the client. You can use the following two data actions to provide publish or subscribe permissions on specific topic spaces to clients with Azure AD identities.
+
+**Topic spaces publish** data action
+Microsoft.EventGrid/topicSpaces/publish/action
+
+**Topic spaces subscribe** data action
+Microsoft.EventGrid/topicSpaces/subscribe/action
+
+> [!NOTE]
+> Currently, we recommend using custom roles with the actions provided.
+
+### Custom roles
+
+You can create custom roles using the publish and subscribe actions.
+
+The following are sample role definitions that allow you to publish and subscribe to MQTT messages. These custom roles give permissions at topic space scope. You can also create roles to provide permissions at subscription or resource group scope.
+
+**EventGridMQTTPublisherRole.json**: MQTT messages publish operation.
+
+```json
+{
+ "roleName": "Event Grid namespace MQTT publisher",
+ "description": "Event Grid namespace MQTT message publisher role",
+ "assignableScopes": [
+ "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
+ ],
+ "permissions": [
+ {
+ "actions": [],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.EventGrid/topicSpaces/publish/action"
+ ],
+ "notDataActions": []
+ }
+ ]
+}
+```
+
+**EventGridMQTTSubscriberRole.json**: MQTT messages subscribe operation.
+
+```json
+{
+ "roleName": "Event Grid namespace MQTT subscriber",
+ "description": "Event Grid namespace MQTT message subscriber role",
+ "assignableScopes": [
+ "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>"
+ ],
+ "permissions": [
+ {
+ "actions": [],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.EventGrid/topicSpaces/subscribe/action"
+ ],
+ "notDataActions": []
+ }
+ ]
+}
+```
+
+## Create custom roles
+1. Navigate to the Topic spaces page in your Event Grid namespace
+1. Select the topic space for which the custom RBAC role needs to be created
+1. Navigate to the Access control (IAM) page within the topic space
+1. On the Roles tab, right-click any of the roles to clone a new custom role. Provide the custom role name.
+1. Switch the Baseline permissions to **Start from scratch**
+1. On the Permissions tab, select **Add permissions**
+1. In the selection page, find and select Microsoft Event Grid
+ :::image type="content" source="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions.png" lightbox="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions.png" alt-text="Screenshot showing the Microsoft Event Grid option to find the permissions.":::
+1. Navigate to Data Actions
+1. Select **Topic spaces publish** data action and select **Add**
+ :::image type="content" source="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions-data-actions.png" lightbox="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions-data-actions.png" alt-text="Screenshot showing the data action selection.":::
+1. Select Next to see the topic space in the Assignable scopes tab. You can add other assignable scopes if needed.
+1. Select **Create** in Review + create tab to create the custom role.
+1. Once the custom role is created, you can assign the role to an identity to provide the publish permission on the topic space. You can learn how to assign roles [here](/azure/role-based-access-control/role-assignments-portal).
+
+## Assign the custom role to your Azure AD identity
+1. In the Azure portal, navigate to your Event Grid namespace
+1. Navigate to the topic space to which you want to authorize access.
+1. Go to the Access control (IAM) page of the topic space
+1. Select the **Role assignments** tab to view the role assignments at this scope.
+1. Select **+ Add** and Add role assignment.
+1. On the Role tab, select the role that you created in the previous step.
+1. On the Members tab, select User, group, or service principal to assign the selected role to one or more service principals (applications).
+ - User and group assignments work only when the user or group belongs to fewer than 200 groups.
+1. Select **Select members**.
+1. Find and select the users, groups, or service principals.
+1. Select **Review + assign** on the Review + assign tab.
+
+> [!NOTE]
+> You can follow similar steps to create and assign a custom Event Grid MQTT subscriber permission to a topic space.
+
+## Next steps
+- See [Publish and subscribe to MQTT message using Event Grid](mqtt-publish-and-subscribe-portal.md)
+- To learn more about how Managed Identities work, you can refer to [How managed identities for Azure resources work with Azure virtual machines - Microsoft Entra](/azure/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm)
+- To learn more about how to obtain tokens from Azure AD, you can refer to [obtaining Azure AD tokens](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#get-a-token)
+- To learn more about Azure Identity client library, you can refer to [using Azure Identity client library](/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-the-azure-identity-client-library)
+- To learn more about implementing an interface for credentials that can provide a token, you can refer to [TokenCredential Interface](/java/api/com.azure.core.credential.tokencredential)
+- To learn more about how to authenticate using Azure Identity, you can refer to [examples](https://github.com/Azure/azure-sdk-for-java/wiki/Azure-Identity-Examples)
event-grid Mqtt Publish And Subscribe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-portal.md
step certificate fingerprint client1-authnID.pem
1. On the Review + create tab of the Create namespace page, select **Create**. > [!NOTE]
- > To keep the QuickStart simple, you'll be using only the Basics page to create a namespace. For detailed steps about configuring network, security, and other settings on other pages of the wizard, see Create a Namespace.
+ > To keep the QuickStart simple, you'll be using only the Basics page to create a namespace. For detailed steps about configuring network, security, and other settings on other pages of the wizard, see Create a Namespace.
+ 1. After the deployment succeeds, select **Go to resource** to navigate to the Event Grid Namespace Overview page for your namespace. 1. In the Overview page, you see that the MQTT is in Disabled state. To enable MQTT, select the **Disabled** link, it will redirect you to Configuration page. 1. On Configuration page, select the Enable MQTT option, and Apply the settings.
step certificate fingerprint client1-authnID.pem
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqtt-client1-metadata.png" alt-text="Screenshot of client 1 configuration."::: 6. Select **Create** to create the client.
-7. Repeat the above steps to create another client called ΓÇ£client2ΓÇ¥.
+7. Repeat the above steps to create another client called "client2".
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqtt-client2-metadata.png" alt-text="Screenshot of client 2 configuration.":::
step certificate fingerprint client1-authnID.pem
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/create-permission-binding-1.png" alt-text="Screenshot showing creation of first permission binding."::: 4. Select **Create** to create the permission binding. 5. Create one more permission binding by selecting **+ Permission binding** on the toolbar.
-6. Provide a name and give $all client group Subscriber access to the Topicspace1 as shown.
+6. Provide a name and give the $all client group Subscriber access to "Topicspace1" as shown.
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/create-permission-binding-2.png" alt-text="Screenshot showing creation of second permission binding."::: 7. Select **Create** to create the permission binding.
step certificate fingerprint client1-authnID.pem
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client1-configuration-2.png" alt-text="Screenshot showing client 1 configuration part 2 on MQTTX app."::: 1. Select Connect to connect the client to the Event Grid MQTT service.
-1. Repeat the above steps to connect the second client ΓÇ£client2ΓÇ¥, with corresponding authentication information as shown.
+1. Repeat the above steps to connect the second client "client2", with corresponding authentication information as shown.
:::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client2-configuration-1.png" alt-text="Screenshot showing client 2 configuration part 1 on MQTTX app.":::
event-grid Powershell Webhook Secure Delivery Azure Ad App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-azure-ad-app.md
Title: Azure PowerShell - Secure WebHook delivery with Azure AD Application in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure AD Application using Azure Event Grid ms.devlang: powershell-+ Last updated 10/14/2021
event-grid Powershell Webhook Secure Delivery Azure Ad User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-azure-ad-user.md
Title: Azure PowerShell - Secure WebHook delivery with Azure AD User in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure AD User using Azure Event Grid ms.devlang: powershell-+ Last updated 09/29/2021
event-grid Secure Webhook Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/secure-webhook-delivery.md
Title: Secure WebHook delivery with Azure AD in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure Active Directory using Azure Event Grid + Last updated 10/12/2022
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-overview.md
Approximately, one capacity unit (CU) in a legacy cluster provides *ingress capa
With Legacy cluster, you can purchase up to 20 CUs. > [!Note]
-> Event Hubs Dedicated clusters require at least 8 Capacity Units(CUs) to enable availability zones. Clusters with self-serve scaling does not support availability zones yet. Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
+> Legacy Event Hubs Dedicated clusters require at least 8 Capacity Units (CUs) to enable availability zones. Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
> [!IMPORTANT] > Migrating an existing Legacy cluster to a Self-Serve Cluster isn't currently supported. For more information, see [migrating a Legacy cluster to Self-Serve Scalable cluster.](#can-i-migrate-my-standard-or-premium-namespaces-to-a-dedicated-tier-cluster).
event-hubs Event Hubs Kafka Connect Debezium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-debezium.md
Last updated 10/18/2021
This tutorial walks you through how to set up a change data capture based system on Azure using [Event Hubs](./event-hubs-about.md?WT.mc_id=devto-blog-abhishgu) (for Kafka), [Azure DB for PostgreSQL](../postgresql/overview.md) and Debezium. It will use the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html) to stream database modifications from PostgreSQL to Kafka topics in Event Hubs > [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
In this tutorial, you take the following steps:
event-hubs Event Hubs Kafka Mirror Maker Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-mirror-maker-tutorial.md
This tutorial shows how to mirror a Kafka broker into an Azure Event Hubs using
> This sample is available on [GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/mirror-maker) > [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
In this tutorial, you learn how to: > [!div class="checklist"]
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
FastPath will honor UDRs configured on the GatewaySubnet and send traffic direct
> * FastPath UDR connectivity is not supported for Azure Dedicated Host workloads. > * FastPath UDR connectivity is not supported for IPv6 workloads.
+To enroll in the Public preview, send an email to **exrpm@microsoft.com** with the following information:
+- Azure subscription ID
+- Virtual Network(s) Azure Resource ID(s)
+- ExpressRoute Circuit(s) Azure Resource ID(s)
+- ExpressRoute Connection(s) Azure Resource ID(s)
+- Number of Virtual Network peering connections
+- Number of UDRs configured in the hub Virtual Network
+ ### Private Link Connectivity for 10Gbps ExpressRoute Direct Private Link traffic sent over ExpressRoute FastPath will bypass the ExpressRoute virtual network gateway in the data path.
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
ExpressRoute Local is a SKU of ExpressRoute circuit, in addition to the Standard
ExpressRoute Local may not be available for an ExpressRoute Location. For peering location and supported Azure local region, see [locations and connectivity providers](expressroute-locations-providers.md#partners).
- > [!NOTE]
- > The restriction of Azure regions in the same metro doesn't apply for ExpressRoute Local in Virtual WAN.
- ### What are the benefits of ExpressRoute Local? While you need to pay egress data transfer for your Standard or Premium ExpressRoute circuit, you don't pay egress data transfer separately for your ExpressRoute Local circuit. In other words, the price of ExpressRoute Local includes data transfer fees. ExpressRoute Local is an economical solution if you have massive amount of data to transfer and want to have your data over a private connection to an ExpressRoute peering location near your desired Azure regions.
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
To enroll in the preview, send an email to **exrpm@microsoft.com**, providing th
* Azure Subscription ID * Virtual Network (VNet) Resource ID * ExpressRoute Circuit Resource ID
+* ExpressRoute Connection(s) Resource ID(s)
+* Number of Private Endpoints deployed to the local/Hub VNet.
+* Resource ID of any user-defined routes (UDRs) configured in the local/Hub VNet.
**FastPath support for virtual network peering and UDRs is only available for ExpressRoute Direct connections**.
external-attack-surface-management Understanding Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md
The following fields are included in the table in the **Values** section on the
Many organizations opt to obfuscate their registry information. Sometimes contact email addresses end in *@anonymised.email*. This placeholder is used instead of a real contact address. Many fields are optional during registration configuration, so any field with an empty value wasn't included by the registrant. ++
+### Change history
+
+The "Change history" tab displays a list of modifications that have been applied to an asset over time. This information helps you track these changes over time and better understand the lifecycle of the asset. This tab displays a variety of changes, including but not limited to asset states, labels and external IDs. For each change, we list the user who implemented the change and a timestamp.
+
+[ ![Screenshot that shows the Change history tab.](media/change-history-1.png) ](media/change-history-1.png#lightbox)
+++ ## Next steps - [Understand dashboards](understanding-dashboards.md)
external-attack-surface-management Understanding Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-inventory-assets.md
All assets are labeled as one of the following states:
These asset states are uniquely processed and monitored to ensure that customers have clear visibility into the most critical assets by default. For instance, "Approved Inventory" assets are always represented in dashboard charts and are scanned daily to ensure data recency. All other kinds of assets are not included in dashboard charts by default; however, users can adjust their inventory filters to view assets in different states as needed. Similarly, "Candidate" assets are only scanned during the discovery process; it's important to review these assets and change their state to "Approved Inventory" if they are owned by your organization.
+## Tracking inventory changes
+
+Your attack surface is constantly changing, which is why Defender EASM continuously analyzes and updates your inventory to ensure accuracy. Assets are frequently added and removed from inventory, so it's important to track these changes to understand your attack surface and identify key trends. The inventory changes dashboard provides an overview of these changes, displaying the "added" and "removed" counts for each asset type. You can filter the dashboard by two date ranges: either the last 7 or 30 days. For a more granular view of these inventory changes, refer to the "Changes by date" section.
+[ ![Screenshot of Inventory Changes screen.](media/inventory-changes-1.png)](media/inventory-changes-1.png#lightbox)
## Next steps

-- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)
+- [Modifying inventory assets](labeling-inventory-assets.md)
- [Understanding asset details](understanding-asset-details.md)
- [Using and managing discovery](using-and-managing-discovery.md)
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
az network vnet subnet update --name AzureFirewallSubnet --vnet-name nat-vnet --
## Next steps
+- For more information, see [Scale Azure Firewall SNAT ports with NAT Gateway for large workloads](https://azure.microsoft.com/blog/scale-azure-firewall-snat-ports-with-nat-gateway-for-large-workloads/).
- [Design virtual networks with NAT gateway](../virtual-network/nat-gateway/nat-gateway-resource.md)
- [Integrate NAT gateway with Azure Firewall in a hub and spoke network](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md)
firewall Policy Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-analytics.md
Policy analytics starts monitoring the flows in the DNAT, Network, and Applicati
## Next steps
+- To learn more about Policy Analytics, see [Optimize performance and strengthen security with Policy Analytics for Azure Firewall](https://azure.microsoft.com/blog/optimize-performance-and-strengthen-security-with-policy-analytics-for-azure-firewall/).
- To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md).
- To learn more about Azure Firewall structured logs, see [Azure Firewall structured logs](firewall-structured-logs.md).
firewall Premium Deploy Certificates Enterprise Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-deploy-certificates-enterprise-ca.md
To use an Enterprise CA to generate a certificate to use with Azure Firewall Pre
## Next steps
-[Azure Firewall Premium in the Azure portal](premium-portal.md)
+- [Azure Firewall Premium in the Azure portal](premium-portal.md)
+- [Building a POC for TLS inspection in Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/building-a-poc-for-tls-inspection-in-azure-firewall/ba-p/3676723)
+
firewall Premium Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-deploy.md
Let's create an application rule to allow access to sports websites.
## Next steps
+- [Building a POC for TLS inspection in Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/building-a-poc-for-tls-inspection-in-azure-firewall/ba-p/3676723)
- [Azure Firewall Premium in the Azure portal](premium-portal.md)
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
As a result, there's no need to create an explicit deny rule from VNet-B to VNet
## Next steps

-- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
+- [Learn more about Azure Firewall NAT behaviors](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-firewall-nat-behaviors/ba-p/3825834)
+- [Learn how to deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md)
- [Learn more about Azure network security](../networking/security/index.yml)
firewall Snat Private Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/snat-private-range.md
You can use the Azure portal to specify private IP address ranges for the firewa
## Auto-learn SNAT routes (preview)
-You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. These learned address ranges are considered to be internal to the network and hence traffic to destinations in the learned ranges aren't SNATed. Configure auto-learn SNAT ranges requires Azure Route Server to be deployed in the same VNet as the Azure Firewall. The Firewall must be associated with the Azure Route Server and configured to auto-learn SNAT ranges in the Azure Firewall Policy. You can currently use JSON, Azure PowerShell, or the Azure portal to configure auto-learn SNAT routes.
+You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. These learned address ranges are considered to be internal to the network, so traffic to destinations in the learned ranges isn't SNATed. Configuring auto-learn SNAT ranges requires Azure Route Server to be deployed in the same VNet as the Azure Firewall. The firewall must be associated with the Azure Route Server and configured to auto-learn SNAT ranges in the Azure Firewall Policy. You can currently use an ARM template, Azure PowerShell, or the Azure portal to configure auto-learn SNAT routes.
-### Configure using JSON
+### Configure using an ARM template
You can use the following JSON to configure auto-learn. Azure Firewall must be associated with an Azure Route Server.
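The template body itself is elided from this digest. As a minimal sketch, the `snat` block on the firewall policy resource might look like the following (the API version and parameter names are assumptions, not the article's own template):

```json
{
  "type": "Microsoft.Network/firewallPolicies",
  "apiVersion": "2022-11-01",
  "name": "[parameters('firewallPolicyName')]",
  "location": "[parameters('location')]",
  "properties": {
    "snat": {
      "autoLearnPrivateRanges": "Enabled",
      "privateRanges": [
        "IANAPrivateRanges"
      ]
    }
  }
}
```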
Use the following JSON to associate an Azure Route Server:
You can use the portal to associate a Route Server with Azure Firewall to configure auto-learn SNAT routes (preview).
-1. Select your resource group, and then select your firewall.
-2. Select **Overview**.
-3. Add a Route Server.
+Use the portal to complete the following tasks:
-Review learned routes:
-
-1. Select your resource group, and then select your firewall.
-2. Select **Learned SNAT IP Prefixes (preview)** in the **Settings** column.
+- Add a subnet named **RouteServerSubnet** to your existing firewall VNet. The size of the subnet should be at least /27.
+- Deploy a Route Server into the existing firewall VNet. For information about Azure Route Server, see [Quickstart: Create and configure Route Server using the Azure portal](../route-server/quickstart-configure-route-server-portal.md).
+- Add the route server on the firewall **Learned SNAT IP Prefixes (preview)** page.
+ :::image type="content" source="media/snat-private-range/add-route-server.png" alt-text="Screenshot showing firewall add a route server." lightbox="media/snat-private-range/add-route-server.png":::
+- Modify your firewall policy to enable **Auto-learn IP prefixes (preview)** in the **Private IP ranges (SNAT)** section.
+ :::image type="content" source="media/snat-private-range/auto-learn.png" alt-text="Screenshot showing firewall policy Private IP ranges (SNAT) settings." lightbox="media/snat-private-range/auto-learn.png":::
+- You can see the learned routes on the **Learned SNAT IP Prefixes (preview)** page.
## Next steps
governance Attestation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md
Attestations are used by Azure Policy to set compliance states of resources or scopes targeted by [manual policies](effects.md#manual). They also allow users to provide additional metadata or link to evidence which accompanies the attested compliance state.

> [!NOTE]
-> Attestations are available only through the [Azure Resource Manager (ARM) API](/rest/api/policy/attestations).
+> Attestations can be created and managed only through the Azure Policy [Azure Resource Manager (ARM) API](/rest/api/policy/attestations), [PowerShell](/powershell/module/az.policyinsights), or [Azure CLI](/cli/azure/policy/attestation).
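For example, creating an attestation through the ARM API is a `PUT` to `{scope}/providers/Microsoft.PolicyInsights/attestations/{name}`, with a request body shaped roughly like this sketch (the assignment ID, comments, and evidence URI are placeholders):

```json
{
  "properties": {
    "policyAssignmentId": "/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/policyAssignments/myManualAssignment",
    "complianceState": "Compliant",
    "comments": "Quarterly review completed.",
    "evidence": [
      {
        "description": "Link to the review record.",
        "sourceUri": "https://contoso.example/evidence/q3-review"
      }
    ]
  }
}
```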
## Best practices
governance Compliance States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/compliance-states.md
An applicable resource has a compliance state of exempt for a policy assignment
> [!NOTE]
> _Exempt_ is different than _excluded_. For more details, see [scope](./scope.md).
-### Unknown (preview)
+### Unknown
Unknown is the default compliance state for definitions with `manual` effect, unless the default has been explicitly set to compliant or non-compliant. This state indicates that an [attestation](./attestation-structure.md) of compliance is warranted. This compliance state only occurs for policy assignments with `manual` effect.
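For illustration, a minimal sketch of a policy rule with `manual` effect that keeps the default compliance state of unknown might look like this (the resource type shown is only an example):

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Resources/subscriptions/resourceGroups"
  },
  "then": {
    "effect": "manual",
    "details": {
      "defaultState": "Unknown"
    }
  }
}
```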
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
Title: Details of the policy definition structure description: Describes how policy definitions are used to establish conventions for Azure resources in your organization. Previously updated : 08/29/2022 Last updated : 08/15/2023 + # Azure Policy definition structure Azure Policy establishes conventions for resources. Policy definitions describe resource compliance
always stay the same, however their values change based on the individual fillin
Parameters work the same way when building policies. By including parameters in a policy definition, you can reuse that policy for different scenarios by using different values.
-> [!NOTE]
-> Parameters may be added to an existing and assigned definition. The new parameter must include the
-> **defaultValue** property. This prevents existing assignments of the policy or initiative from
-> indirectly being made invalid.
+Parameters may be added to an existing and assigned definition. The new parameter must include the
+**defaultValue** property. This prevents existing assignments of the policy or initiative from
+indirectly being made invalid.
-> [!NOTE]
-> Parameters can't be removed from a policy definition that's been assigned.
+Parameters can't be removed from a policy definition that's been assigned because an assignment might set the parameter value, and that reference would break. Instead of removing a parameter, you can classify it as deprecated in the parameter metadata, as shown in the sketch below.
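As a minimal sketch (the parameter name and the `deprecated` metadata key are illustrative assumptions), a deprecated parameter that keeps its **defaultValue** might look like:

```json
{
  "parameters": {
    "legacyTagName": {
      "type": "String",
      "defaultValue": "env",
      "metadata": {
        "displayName": "[Deprecated]: Tag Name",
        "description": "This parameter is no longer used by the policy rule.",
        "deprecated": true
      }
    }
  }
}
```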
### Parameter properties
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
These effects are currently supported in a policy definition:
## Interchanging effects
-Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies will assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies will assess a resource's compliance based on a child or extension resource's properties.
+Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies assess a resource's compliance based on a child or extension resource's properties.
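For instance, a minimal sketch of a definition that parameterizes its effect might look like this (the resource type and allowed values are illustrative):

```json
{
  "parameters": {
    "effect": {
      "type": "String",
      "allowedValues": [
        "Audit",
        "Deny",
        "Disabled"
      ],
      "defaultValue": "Audit",
      "metadata": {
        "displayName": "Effect"
      }
    }
  },
  "policyRule": {
    "if": {
      "field": "type",
      "equals": "Microsoft.Storage/storageAccounts"
    },
    "then": {
      "effect": "[parameters('effect')]"
    }
  }
}
```

Assigning the same definition with `Audit` in one scope and `Deny` in another is what makes interchangeable effects useful in practice.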
The following list is some general guidance around interchangeable effects:

- **Audit**, **Deny**, and either **Modify** or **Append** are often interchangeable.
related resources to match.
- When the condition values for **if.field.type** and **then.details.type** match, then **Name** becomes _required_ and must be `[field('name')]`, or `[field('fullName')]` for a child resource. However, an [audit](#audit) effect should be considered instead.+
+> [!NOTE]
+>
+> **Type** and **Name** segments can be combined to generically retrieve nested resources.
+>
+> To retrieve a specific resource, you can use `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/nestedResourceName"`.
+>
+> To retrieve a collection of nested resources, a wildcard character `?` can be provided in place of the last name segment. For example, `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/?"`. This can be combined with field functions to access resources related to the evaluated resource, such as `"name": "[concat(field('name'), '/?')]"`.
+ - **ResourceGroupName** (optional) - Allows the matching of the related resource to come from a different resource group. - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource.
assignment.
#### Subscription deletion
-Policy won't block removal of resources that happens during a subscription deletion.
+Policy doesn't block removal of resources that happens during a subscription deletion.
#### Resource group deletion
-Policy will evaluate resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule will block a resource group deletion. Policy won't block removal of resources that don't support location and tags nor any policy with `mode:all`.
+Policy evaluates resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule block a resource group deletion. Policy doesn't block removal of resources that don't support location and tags nor any policy with `mode:all`.
#### Cascade deletion
-Cascade deletion occurs when deleting of a parent resource is implicitly deletes all its child resources. Policy won't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) will fail, but a delete to the storage account (parent) will implicitly delete the diagnostic setting (child).
+Cascade deletion occurs when deleting a parent resource implicitly deletes all its child resources. Policy doesn't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) will fail, but a delete to the storage account (parent) will implicitly delete the diagnostic setting (child).
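As a sketch of the behavior described above (property names follow the documented `denyAction` schema; the targeted type is just the example from this section), a rule that blocks deletes and cascades to resource group deletion might look like:

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Insights/diagnosticSettings"
  },
  "then": {
    "effect": "denyAction",
    "details": {
      "actionNames": [
        "delete"
      ],
      "cascadeBehaviors": {
        "resourceGroup": "deny"
      }
    }
  }
}
```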
[!INCLUDE [policy-denyAction](../../../../includes/azure-policy-deny-action.md)]
related resources to match and the template deployment to execute.
resource instead of all resources of the specified type.
- When the condition values for **if.field.type** and **then.details.type** match, then **Name** becomes _required_ and must be `[field('name')]`, or `[field('fullName')]` for a child resource.
+> [!NOTE]
+>
+> **Type** and **Name** segments can be combined to generically retrieve nested resources.
+>
+> To retrieve a specific resource, you can use `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/nestedResourceName"`.
+>
+> To retrieve a collection of nested resources, a wildcard character `?` can be provided in place of the last name segment. For example, `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/?"`. This can be combined with field functions to access resources related to the evaluated resource, such as `"name": "[concat(field('name'), '/?')]"`.
+ - **ResourceGroupName** (optional) - Allows the matching of the related resource to come from a different resource group. - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource.
logs, and the policy effect don't occur. For more information, see
## Manual
-The new `manual` effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you'll need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance need attesting.
+The new `manual` effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance need attesting.
> [!NOTE]
> Support for manual policy is available through various Microsoft Defender
governance Guidance For Throttled Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/guidance-for-throttled-requests.md
In every query response, Azure Resource Graph adds two throttling headers:
- `x-ms-user-quota-resets-after` (hh:mm:ss): The time duration until a user's quota consumption is reset.
-When a security principal has access to more than 5,000 subscriptions within the tenant or
+When a security principal has access to more than 10,000 subscriptions within the tenant or
management group [query scope](./query-language.md#query-scope), the response is limited to the
-first 5,000 subscriptions and the `x-ms-tenant-subscription-limit-hit` header is returned as `true`.
+first 10,000 subscriptions and the `x-ms-tenant-subscription-limit-hit` header is returned as `true`.
To illustrate how the headers work, let's look at a query response that has the header and values of `x-ms-user-quota-remaining: 10` and `x-ms-user-quota-resets-after: 00:00:03`.
governance Query Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/query-language.md
resources.
The list of subscriptions or management groups to query can be manually defined to change the scope of the results. For example, the REST API `managementGroups` property takes the management group ID, which is different from the name of the management group. When `managementGroups` is specified,
-resources from the first 5,000 subscriptions in or under the specified management group hierarchy
+resources from the first 10,000 subscriptions in or under the specified management group hierarchy
are included. `managementGroups` can't be used at the same time as `subscriptions`. Example: Query all resources within the hierarchy of the management group named `My Management
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md
Title: Get resource changes
+ Title: Get resource configuration changes
description: Get resource configuration changes at scale Previously updated : 06/16/2022 Last updated : 08/17/2023
Resources change through the course of daily use, reconfiguration, and even redeployment. Most change is by design, but sometimes it isn't. You can:

-- Find when changes were detected on an Azure Resource Manager property
-- View property change details
-- Query changes at scale across your subscriptions, management group, or tenant
+- Find when changes were detected on an Azure Resource Manager property.
+- View property change details.
+- Query changes at scale across your subscriptions, management group, or tenant.
This article shows how to query resource configuration changes through Resource Graph.

## Prerequisites

- To enable Azure PowerShell to query Azure Resource Graph, [add the module](../first-query-powershell.md#add-the-resource-graph-module).
Each change resource has the following properties:
| `targetResourceId` | The resourceID of the resource on which the change occurred. |
| `targetResourceType` | The resource type of the resource on which the change occurred. |
-| `changeType` | Describes the type of change detected for the entire change record. Values are: Create, Update, and Delete. The **changes** property dictionary is only included when `changeType` is _Update_. For the delete case, the change resource will still be maintained as an extension of the deleted resource for 14 days, even if the entire Resource group has been deleted. The change resource won't block deletions or impact any existing delete behavior. |
+| `changeType` | Describes the type of change detected for the entire change record. Values are: Create, Update, and Delete. The **changes** property dictionary is only included when `changeType` is _Update_. For the delete case, the change resource is maintained as an extension of the deleted resource for 14 days, even if the entire resource group was deleted. The change resource doesn't block deletions or affect any existing delete behavior. |
| `changes` | Dictionary of the resource properties (with property name as the key) that were updated as part of the change: |
| `propertyChangeType` | This property is deprecated and can be derived as follows: `previousValue` being empty indicates _Insert_, empty `newValue` indicates _Remove_, and when both are present, it's _Update_. |
| `previousValue` | The value of the resource property in the previous snapshot. Value is empty when `changeType` is _Insert_. |
-| `newValue` | The value of the resource property in the new snapshot. This property will be empty (absent) when `changeType` is _Remove_. |
-| `changeCategory` | This property was optional and has been deprecated, this field will no longer be available|
+| `newValue` | The value of the resource property in the new snapshot. This property is empty (absent) when `changeType` is _Remove_. |
+| `changeCategory` | This property was optional and has been deprecated; this field is no longer available. |
| `changeAttributes` | Array of metadata related to the change: |
| `changesCount` | The number of properties changed as part of this change record. |
-| `correlationId` | Contains the ID for tracking related events. Each deployment has a correlation ID, and all actions in a single template will share the same correlation ID. |
+| `correlationId` | Contains the ID for tracking related events. Each deployment has a correlation ID, and all actions in a single template share the same correlation ID. |
| `timestamp` | The datetime of when the change was detected. |
| `previousResourceSnapshotId` | Contains the ID of the resource snapshot that was used as the previous state of the resource. |
| `newResourceSnapshotId` | Contains the ID of the resource snapshot that was used as the new state of the resource. |
-| `isTruncated` | When the number of property changes reaches beyond a certain number they're truncated and this property becomes present. |
+| `isTruncated` | When the number of property changes reaches beyond a certain number, they're truncated and this property becomes present. |
## Get change events using Resource Graph
resourcechanges
### Best practices

-- Query for change events during a specific window of time and evaluate the change details. This query works best during incident management to understand _potentially_ related changes.
+- Query for change events during a specific window of time and evaluate the change details. This query works best during incident management to understand _potentially_ related changes.
- Keep a Configuration Management Database (CMDB) up to date. Instead of refreshing all resources and their full property sets on a scheduled frequency, only get what changed.
- Understand what other properties may have been changed when a resource changed compliance state. Evaluation of these extra properties can provide insights into other properties that may need to be managed via an Azure Policy definition.
-- The order of query commands is important. In this example, the `order by` must come before the `limit` command. This command order first orders the query results by the change time and then limits them to ensure that you get the five most recent results.
-- Resource configuration changes supports changes to resource types from the [Resources table](../reference/supported-tables-resources.md#resources), `resourcecontainers` and `healthresources` table in Resource Graph. Changes are queryable for 14 days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export query results to any of the Azure data stores (such as [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your desired retention.
+- The order of query commands is important. In this example, the `order by` must come before the `limit` command. This order first sorts the query results by the change time and then limits them to ensure that you get the five most recent results (see the sketch after this list).
+- Resource configuration changes support changes to resource types from the Resource Graph tables [resources](../reference/supported-tables-resources.md#resources), [resourcecontainers](../reference/supported-tables-resources.md#resourcecontainers), and [healthresources](../reference/supported-tables-resources.md#healthresources). Changes are queryable for 14 days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export query results to any of the Azure data stores like [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your desired retention.
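As an illustration of that ordering rule, a Resource Graph REST request body might look like the following sketch (the subscription ID is a placeholder):

```json
{
  "subscriptions": [
    "00000000-0000-0000-0000-000000000000"
  ],
  "query": "resourcechanges | extend changeTime = todatetime(properties.changeAttributes.timestamp) | order by changeTime desc | limit 5"
}
```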
## Next steps
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
Title: Overview of Azure Resource Graph description: Understand how the Azure Resource Graph service enables complex querying of resources at scale across subscriptions and tenants. Previously updated : 06/15/2022 Last updated : 08/15/2023
provide the following abilities:
- Query resources with complex filtering, grouping, and sorting by resource properties.
- Explore resources iteratively based on governance requirements.
- Assess the impact of applying policies in a vast cloud environment.
-- [Query changes made to resource properties](./how-to/get-resource-changes.md)
- (preview).
+- [Query changes made to resource properties](./how-to/get-resource-changes.md).
-In this documentation, you'll go over each feature in detail.
+In this documentation, you review each feature in detail.
> [!NOTE]
> Azure Resource Graph powers Azure portal's search bar, the new browse **All resources** experience,
With Azure Resource Graph, you can:
- Access the properties returned by resource providers without needing to make individual calls to each resource provider.
-- View the last seven days of resource configuration changes to see what properties changed and
- when. (preview)
+- View the last 14 days of resource configuration changes to see which properties changed and
+ when.
> [!NOTE]
> As a _preview_ feature, some `type` objects have additional non-Resource Manager properties
First, for details on operations and functions that can be used with Azure Resou
## Permissions in Azure Resource Graph
-To use Resource Graph, you must have appropriate rights in [Azure role-based access
-control (Azure RBAC)](../../role-based-access-control/overview.md) with at least read access to the
-resources you want to query. Without at least `read` permissions to the Azure object or object
-group, results won't be returned.
+To use Resource Graph, you must have appropriate rights in [Azure role-based access control (Azure
+RBAC)](../../role-based-access-control/overview.md) with at least `read` access to the resources you
+want to query. No results are returned if you don't have at least `read` permissions to the Azure
+object or object group.
> [!NOTE]
> Resource Graph uses the subscriptions available to a principal during login. To see resources of a
> new subscription added during an active session, the principal must refresh the context. This
> action happens automatically when logging out and back in.
-Azure CLI and Azure PowerShell use subscriptions that the user has access to. When using REST API
-directly, the subscription list is provided by the user. If the user has access to any of the
+Azure CLI and Azure PowerShell use subscriptions that the user has access to. When you use a REST
+API, the subscription list is provided by the user. If the user has access to any of the
subscriptions in the list, the query results are returned for the subscriptions the user has access
-to. This behavior is the same as when calling
-[Resource Groups - List](/rest/api/resources/resourcegroups/list) \- you get resource groups you've
-access to without any indication that the result may be partial. If there are no subscriptions in
-the subscription list that the user has appropriate rights to, the response is a _403_ (Forbidden).
+to. This behavior is the same as when calling [Resource Groups - List](/rest/api/resources/resourcegroups/list)
+because you get resource groups that you can access, without any indication that the result may be
+partial. If there are no subscriptions in the subscription list that the user has appropriate rights
+to, the response is a _403_ (Forbidden).
> [!NOTE]
> In the **preview** REST API version `2020-04-01-preview`, the subscription list may be omitted.
hdinsight Apache Domain Joined Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md
Title: Azure HDInsight architecture with Enterprise Security Package
description: Learn how to plan Azure HDInsight security with Enterprise Security Package. -+ Last updated 05/11/2023
hdinsight Apache Ambari Troubleshoot Metricservice Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-metricservice-issues.md
Previously updated : 07/21/2022 Last updated : 08/21/2023 # Apache Ambari Metrics Collector issues in Azure HDInsight
java.lang.OutOfMemoryError: Java heap space
2021-04-13 05:57:37,546 INFO [timeline] timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
```
-2. Get the Apache Ambari Metrics Collector pid and check GC performance
+2. Get the Apache Ambari Metrics Collector `pid` and check GC performance
```
ps -fu ams | grep 'org.apache.ambari.metrics.AMSApplicationServer'
hdinsight Apache Hadoop Use Sqoop Mac Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-sqoop-mac-linux.md
description: Learn how to use Apache Sqoop to import and export between Apache H
Previously updated : 07/18/2022 Last updated : 08/21/2023 # Use Apache Sqoop to import and export data between Apache Hadoop on HDInsight and Azure SQL Database
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-azure-cli.md
Previously updated : 07/21/2022 Last updated : 08/21/2023 # Create a cluster with Data Lake Storage Gen2 using Azure CLI
hdinsight Apache Hive Warehouse Connector Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-operations.md
Previously updated : 07/22/2022 Last updated : 08/21/2023 # Apache Spark operations supported by Hive Warehouse Connector in Azure HDInsight
hdinsight Apache Hive Warehouse Connector Zeppelin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-zeppelin.md
Last updated 07/18/2022
HDInsight Spark clusters include Apache Zeppelin notebooks with different interpreters. In this article, we'll focus only on the Livy interpreter to access Hive tables from Spark using Hive Warehouse Connector.

> [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
## Prerequisite
hdinsight Apache Kafka Mirror Maker 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-mirror-maker-2.md
This architecture features two clusters in different resource groups and virtual
vi /etc/kafka/conf/connect-mirror-maker.properties
```

> [!NOTE]
- > This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, weΓÇÖll remove it from this article.
+ > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
1. The property file looks like this.

   ```
hdinsight Apache Kafka Mirroring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-mirroring.md
Configure IP advertising to enable a client to connect by using broker IP addres
## Start MirrorMaker

> [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
1. From the SSH connection to the secondary cluster, use the following command to start the MirrorMaker process:
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Title: Disaster recovery for Azure API for FHIR description: In this article, you'll learn how to enable disaster recovery features for Azure API for FHIR.-+ Last updated 06/03/2022-+ # Disaster recovery for Azure API for FHIR
-Azure API for FHIR is a fully managed service, based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements you can use the disaster recovery (DR) feature for Azure API for FHIR.
+Azure API for FHIR is a fully managed service, based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements you can use the disaster recovery (DR) feature for Azure API for FHIR.
The DR feature provides a Recovery Point Objective (RPO) of 15 minutes and a Recovery Time Objective (RTO) of 60 minutes.
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
+ Last updated 06/03/2022
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/smart-on-fhir.md
Below tutorials describe steps to enable SMART on FHIR applications with FHIR Se
- After registering the application, make note of the applicationId for the client application.
- Ensure you have access to the Azure subscription of the FHIR service, to create resources and add role assignments.
-## SMART on FHIR using AHDS Samples OSS
+## SMART on FHIR using AHDS Samples OSS (SMART on FHIR (Enhanced))
### Step 1: Set up FHIR SMART user role

Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to the "FHIR SMART User" role is able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as requests that have an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
Follow the steps listed under section [Manage Users: Assign Users to Role](https
<summary> Click to expand! </summary>

> [!NOTE]
-> This is another option to using "SMART on FHIR using AHDS Samples OSS" mentioned above. SMART on FHIR Proxy option only enables EHR launch sequence.
+> This is an alternative to SMART on FHIR (Enhanced) mentioned above. The SMART on FHIR proxy option only enables the EHR launch sequence.
### Step 1: Set admin consent for your client application

To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
healthcare-apis Dicom Cast Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-overview.md
- Title: DICOMcast overview - Azure Health Data Services
-description: In this article, you'll learn the concepts of DICOMcast.
---- Previously updated : 06/03/2022---
-# DICOMcast overview
-
-> [!NOTE]
-> On **July 31, 2023** DICOMcast will be retired. DICOMcast will continue to be available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [migration guidance](https://aka.ms/dicomcast-migration).
-
-DICOMcast offers customers the ability to synchronize the data from a DICOM service to a [FHIR service](../../healthcare-apis/fhir/overview.md), which allows healthcare organizations to integrate clinical and imaging data. DICOMcast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data and the ability to effectively create cohorts for medical studies, analytics, and machine learning.
-
-## Architecture
-
-[ ![Architecture diagram of DICOMcast](media/dicom-cast-architecture.png) ](media/dicom-cast-architecture.png#lightbox)
--
-1. **Poll for batch of changes**: DICOMcast polls for any changes via the [Change Feed](dicom-change-feed-overview.md), which captures any changes that occur in your Medical Imaging Server for DICOM.
-1. **Fetch corresponding FHIR resources, if any**: If any DICOM service changes and correspond to FHIR resources, DICOMcast will fetch the related FHIR resources. DICOMcast synchronizes DICOM tags to the FHIR resource types *Patient* and *ImagingStudy*.
-1. **Merge FHIR resources and 'PUT' as a bundle in a transaction**: The FHIR resources corresponding to the DICOMcast captured changes will be merged. The FHIR resources will be 'PUT' as a bundle in a transaction into your FHIR service.
-1. **Persist state and process next batch**: DICOMcast will then persist the current state to prepare for next batch of changes.
-
-The current implementation of DICOMcast:
-- Supports a single-threaded process that reads from the DICOM change feed and writes to a FHIR service.
-- Is hosted by Azure Container Instance in our sample template, but can be run elsewhere.
-- Synchronizes DICOM tags to *Patient* and *ImagingStudy* FHIR resource types.
-- Is configured to ignore invalid tags when syncing data from the change feed to FHIR resource types.
- - If `EnforceValidationOfTagValues` is enabled, then the change feed entry won't be written to the FHIR service unless every tag that's mapped is valid. For more information, see the [Mappings](#mappings) section below.
- - If `EnforceValidationOfTagValues` is disabled (default), and if a value is invalid, but it's not required to be mapped, then that particular tag won't be mapped. The rest of the change feed entry will be mapped to FHIR resources. If a required tag is invalid, then the change feed entry won't be written to the FHIR service. For more information about the required tags, see [Patient](#patient) and [Imaging Study](#imagingstudy)
-- Logs errors to Azure Table Storage.
- - Errors occur when processing change feed entries that are persisted in Azure Table storage that are in different tables.
- - `InvalidDicomTagExceptionTable`: Stores information about tags with invalid values. Entries here don't necessarily mean that the entire change feed entry wasn't stored in FHIR service, but that the particular value had a validation issue.
- - `DicomFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to FHIR service due to an issue with the change feed entry (such as invalid required tag). All entries in this table weren't stored to FHIR service.
- - `FhirFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to FHIR service due to an issue with the FHIR service (such as conflicting resource already exists). All entries in this table weren't stored to FHIR service.
- - `TransientRetryExceptionTable`: Stores information about change feed entries that faced a transient error (such as FHIR service too busy) and are being retried. Entries in this table note how many times they've been retried, but it doesn't necessarily mean that they eventually failed or succeeded to store to FHIR service.
- - `TransientFailureExceptionTable`: Stores information about change feed entries that had a transient error, and went through the retry policy and still failed to store to FHIR service. All entries in this table failed to store to FHIR service.
-
-## Mappings
-
-The current implementation of DICOMcast has the following mappings:
-
-### Patient
-
-| Property | Tag ID | Tag Name | Required Tag?| Note |
-| :- | :-- | :- | :-- | :-- |
-| Patient.identifier.where(system = '') | (0010,0020) | PatientID | Yes | For now, the system will be empty string. We'll add support later for allowing the system to be specified. |
-| Patient.name.where(use = 'usual') | (0010,0010) | PatientName | No | PatientName will be split into components and added as HumanName to the Patient resource. |
-| Patient.gender | (0010,0040) | PatientSex | No | |
-| Patient.birthDate | (0010,0030) | PatientBirthDate | No | PatientBirthDate only contains the date. This implementation assumes that the FHIR and DICOM services have data from the same time zone. |
-
-### Endpoint
-
-| Property | Tag ID | Tag Name | Note |
-| :- | :-- | :- | : |
-| Endpoint.status ||| The value 'active' will be used when creating the endpoint. |
-| Endpoint.connectionType ||| The system 'http://terminology.hl7.org/CodeSystem/endpoint-connection-type' and value 'dicom-wado-rs' will be used when creating the endpoint. |
-| Endpoint.address ||| The root URL to the DICOMWeb service will be used when creating the endpoint. The rule is described in 'http://hl7.org/fhir/imagingstudy.html#endpoint'. |
-
-### ImagingStudy
-
-| Property | Tag ID | Tag Name | Required | Note |
-| :- | :-- | :- | : | : |
-| ImagingStudy.identifier.where(system = 'urn:dicom:uid') | (0020,000D) | StudyInstanceUID | Yes | The value will have prefix of `urn:oid:`. |
-| ImagingStudy.status | | | No | The value 'available' will be used when creating ImagingStudy. |
-| ImagingStudy.modality | (0008,0060) | Modality | No | |
-| ImagingStudy.subject | | | No | It will be linked to the [Patient](#mappings). |
-| ImagingStudy.started | (0008,0020), (0008,0030), (0008,0201) | StudyDate, StudyTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. |
-| ImagingStudy.endpoint | | | | It will be linked to the [Endpoint](#endpoint). |
-| ImagingStudy.note | (0008,1030) | StudyDescription | No | |
-| ImagingStudy.series.uid | (0020,000E) | SeriesInstanceUID | Yes | |
-| ImagingStudy.series.number | (0020,0011) | SeriesNumber | No | |
-| ImagingStudy.series.modality | (0008,0060) | Modality | Yes | |
-| ImagingStudy.series.description | (0008,103E) | SeriesDescription | No | |
-| ImagingStudy.series.started | (0008,0021), (0008,0031), (0008,0201) | SeriesDate, SeriesTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. |
-| ImagingStudy.series.instance.uid | (0008,0018) | SOPInstanceUID | Yes | |
-| ImagingStudy.series.instance.sopClass | (0008,0016) | SOPClassUID | Yes | |
-| ImagingStudy.series.instance.number | (0020,0013) | InstanceNumber | No| |
-| ImagingStudy.identifier.where(type.coding.system='http://terminology.hl7.org/CodeSystem/v2-0203' and type.coding.code='ACSN')) | (0008,0050) | Accession Number | No | Refer to http://hl7.org/fhir/imagingstudy.html#notes. |
-
-### Timestamp
-
-DICOM has different date time VR types. Some tags (like Study and Series) have the date, time, and UTC offset stored separately. This means that the date might be partial. This code attempts to translate this into a partial date syntax allowed by the FHIR service.
-
-## Summary
-
-In this concept, we reviewed the architecture and mappings of DICOMcast. This feature is available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [deployment instructions](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md).
-
-> [!IMPORTANT]
-> Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket.
-
-
-## Next steps
-
-To get started using the DICOM service, see
-
->[!div class="nextstepaction"]
->[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
-
->[!div class="nextstepaction"]
->[Using DICOMweb&trade;Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)
-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
This article describes our open-source projects on GitHub that provide source co
* [Azure DICOM service with OHIF viewer](https://github.com/microsoft/dicom-ohif): The [OHIF viewer](https://ohif.org/) is an open-source, non-diagnostic DICOM viewer that uses DICOMweb APIs to find and render DICOM images. This project provides the guidance and sample templates for deploying the OHIF viewer and configuring it to integrate with the DICOM service.

### Medical imaging network demo environment
-* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-prem radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow.
+* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-premises radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow.
## Next steps
For more information about using the DICOM service, see
For more information about DICOM cast, see

>[!div class="nextstepaction"]
->[DICOM cast overview](dicom-cast-overview.md)
+>[DICOM cast overview](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md)
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Events Consume Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md
Follow these steps to create a Logic App workflow to consume FHIR events:
## Prerequisites
-Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy Events in the Azure portal](events-deploy-portal.md).
+Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy events using the Azure portal](events-deploy-portal.md).
## Creating a Logic App
You now need to fill out the details of your Logic App. Specify information for
:::image type="content" source="media/events-logic-apps/events-logic-tabs.png" alt-text="Screenshot of the five tabs for specifying your Logic App." lightbox="media/events-logic-apps/events-logic-tabs.png":::

-- Tab 1 - Basics
-- Tab 2 - Hosting
-- Tab 3 - Monitoring
-- Tab 4 - Tags
-- Tab 5 - Review + Create
+- Tab 1 - **Basics**
+- Tab 2 - **Hosting**
+- Tab 3 - **Monitoring**
+- Tab 4 - **Tags**
+- Tab 5 - **Review + Create**
### Basics - Tab 1
Enabling your plan makes it zone redundant.
### Hosting - Tab 2
-Continue specifying your Logic App by clicking "Next: Hosting".
+Continue specifying your Logic App by selecting **Next: Hosting**.
#### Storage
Choose the type of storage you want to use and the storage account. You can use
### Monitoring - Tab 3
-Continue specifying your Logic App by clicking "Next: Monitoring".
+Continue specifying your Logic App by selecting **Next: Monitoring**.
#### Monitoring with Application Insights
Enable Azure Monitor Application Insights to automatically monitor your applicat
### Tags - Tab 4
-Continue specifying your Logic App by clicking **Next: Tags**.
+Continue specifying your Logic App by selecting **Next: Tags**.
#### Use tags to categorize resources
This example doesn't use tagging.
### Review + create - Tab 5
-Finish specifying your Logic App by clicking **Next: Review + create**.
+Finish specifying your Logic App by selecting **Next: Review + create**.
#### Review your Logic App
If there are no errors, you'll finally see a notification telling you that your
#### Your Logic App dashboard
-Azure creates a dashboard when your Logic App is complete. The dashboard shows you the status of your app. You can return to your dashboard by clicking Overview in the Logic App menu. Here's a Logic App dashboard:
+Azure creates a dashboard when your Logic App is complete. The dashboard shows you the status of your app. You can return to your dashboard by selecting **Overview** in the Logic App menu. Here's a Logic App dashboard:
:::image type="content" source="media/events-logic-apps/events-logic-overview.png" alt-text="Screenshot of your Logic Apps overview dashboard." lightbox="media/events-logic-apps/events-logic-overview.png":::
To set up a new workflow, fill in these details:
Specify a new name for your workflow. Indicate whether you want the workflow to be stateful or stateless. Stateful is for business processes and stateless is for processing IoT events.
-When you've specified the details, select "Create" to begin designing your workflow.
+When you've specified the details, select **Create** to begin designing your workflow.
### Designing the workflow

In your new workflow, select the name of the enabled workflow.
-You can write code to design a workflow for your application, but for this tutorial, choose the Designer option on the Developer menu.
+You can write code to design a workflow for your application, but for this tutorial, choose the **Designer** option on the **Developer** menu.
-Next, select "Choose an operation" to display the "Add a Trigger" blade on the right. Then search for "Azure Event Grid" and select the "Azure" tab below. The Event Grid isn't a Logic App Built-in.
+Next, select **Choose an operation** to display the **Add a Trigger** blade on the right. Then search for "Azure Event Grid" and select the **Azure** tab below. Event Grid isn't a Logic App built-in operation.
:::image type="content" source="media/events-logic-apps/events-logic-grid.png" alt-text="Screenshot of the search results for Azure Event Grid." lightbox="media/events-logic-apps/events-logic-grid.png":::
-When you see the "Azure Event Grid" icon, select on it to display the Triggers and Actions available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md).
+When you see the "Azure Event Grid" icon, select it to display the **Triggers and Actions** available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md).
-Select "When a resource event occurs" to set up a trigger for the Azure Event Grid.
+Select **When a resource event occurs** to set up a trigger for the Azure Event Grid.
To tell Event Grid how to respond to the trigger, you must specify parameters and add actions.
Fill in the details for subscription, resource type, and resource name. Then you
- Resource deleted
- Resource updated
-For more information about event types, see [What FHIR resource events does Events support?](events-faqs.md#what-fhir-resource-events-does-events-support).
+For more information about supported event types, see [Frequently asked questions about events](events-faqs.md).
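For illustration, a FHIR event delivered by Event Grid is shaped roughly like the following (the IDs and names are placeholders, and the field names assume the published Microsoft.HealthcareApis event schema):

```json
{
  "id": "e4c7f556-d72c-e7f7-1069-1e82ac76ab41",
  "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.HealthcareApis/workspaces/{workspace}",
  "subject": "{fhir-account}/Patient/e0e5d607-bbb6-42fd-8a4c-0f0d52b1b673",
  "eventType": "Microsoft.HealthcareApis.FhirResourceCreated",
  "data": {
    "resourceType": "Patient",
    "resourceFhirAccount": "{fhir-account}",
    "resourceFhirId": "e0e5d607-bbb6-42fd-8a4c-0f0d52b1b673",
    "resourceVersionId": 1
  },
  "dataVersion": "1",
  "eventTime": "2023-08-17T18:14:32.812Z"
}
```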
### Adding an HTTP action
-Once you've specified the trigger events, you must add more details. Select the "+" below the "When a resource event occurs" button.
+Once you've specified the trigger events, you must add more details. Select the **+** below the **When a resource event occurs** button.
-You need to add a specific action. Select "Choose an operation" to continue. Then, for the operation, search for "HTTP" and select on "Built-in" to select an HTTP operation. The HTTP action will allow you to query the FHIR service.
+You need to add a specific action. Select **Choose an operation** to continue. Then, for the operation, search for "HTTP" and select **Built-in** to select an HTTP operation. The HTTP action allows you to query the FHIR service.
The options in this example are:
At this point, you need to give the FHIR Reader access to your app, so it can verify that the event details are correct. Follow these steps to give it access:
-1. The first step is to go back to your Logic App and select the Identity menu item.
+1. The first step is to go back to your Logic App and select the **Identity** menu item.
-2. In the System assigned tab, make sure the Status is "On".
+2. In the **System assigned** tab, make sure **Status** is set to **On**.
-3. Select on Azure role assignments. Select "Add role assignment".
+3. Select **Azure role assignments**, and then select **Add role assignment**.
4. Specify the following options:
At this point, you need to give the FHIR Reader access to your app, so it can ve
- Subscription = your subscription - Role = FHIR Data Reader.
-When you've specified the first four steps, add the role assignment by Managed identity, using Subscription, Managed identity (Logic App Standard), and select your Logic App by clicking the name and then clicking the Select button. Finally, select "Review + assign" to assign the role.
+When you've completed the first four steps, add the role assignment by managed identity: choose your subscription, choose **Managed identity (Logic App Standard)**, select your Logic App by name, and then choose the **Select** button. Finally, select **Review + assign** to assign the role.
### Add a condition
-After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Select the "+" below HTTP to "Choose an operation". On the right, search for the word "condition". Select on "Built-in" to display the Control icon. Next select Actions and choose Condition.
+After you've given FHIR Reader access to your app, go back to the Logic App workflow designer. Then add a condition to determine whether the event is one you want to process. Select the **+** below HTTP to choose an operation. On the right, search for the word "condition". Select **Built-in** to display the Control icon. Next, select **Actions** and choose **Condition**.
When the condition is ready, you can specify what actions happen if the condition is true or false.
### Choosing the condition criteria
-In order to specify whether you want to take action for the specific event, begin specifying the criteria by clicking on **Condition** in the workflow. A set of condition choices are then displayed.
+To specify whether you want to take action for the specific event, begin specifying the criteria by selecting **Condition** in the workflow. A set of condition choices is then displayed.
Under the **And** box, add these two conditions:
The expression for getting the resourceType is `body('HTTP')?['resourceType']`.
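To see how this expression behaves, here's a minimal Python sketch that mirrors what `body('HTTP')?['resourceType']` extracts from the HTTP action's response; the response body shown is a hypothetical placeholder, not output from a real service.

```python
import json

# Hypothetical body returned by the workflow's HTTP action: a FHIR Patient
# resource fetched from the FHIR service, parsed from JSON.
http_body = json.loads('{"resourceType": "Patient", "id": "example-id", "active": true}')

# Equivalent of the workflow expression body('HTTP')?['resourceType']:
# the ? operator returns null instead of failing when the property is missing,
# which dict.get mirrors here.
resource_type = http_body.get("resourceType")

# The condition then compares the extracted value with the type you care about.
if resource_type == "Patient":
    print("Condition is true: run the true-branch actions")
else:
    print("Condition is false: run the false-branch actions")
```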
#### Event Type
-You can select Event Type from the Dynamic Content.
+You can select **Event Type** from the Dynamic Content.
Here's an example of the Condition criteria:
When you've entered the condition criteria, save your workflow.
#### Workflow dashboard
-To check the status of your workflow, select Overview in the workflow menu. Here's a dashboard for a workflow:
+To check the status of your workflow, select **Overview** in the workflow menu. Here's a dashboard for a workflow:
:::image type="content" source="media/events-logic-apps/events-logic-dashboard.png" alt-text="Screenshot of the Logic App workflow dashboard." lightbox="media/events-logic-apps/events-logic-dashboard.png":::
You can do the following operations from your workflow dashboard:
### Condition testing
-Save your workflow by clicking the "Save" button.
+Save your workflow by selecting the **Save** button.
To test your new workflow, do the following steps:
In this tutorial, you learned how to use Logic Apps to process FHIR events.
To learn about Events, see > [!div class="nextstepaction"]
-> [What are Events?](events-overview.md)
+> [What are events?](events-overview.md)
To learn about the Events frequently asked questions (FAQs), see > [!div class="nextstepaction"]
-> [Frequently asked questions about Events](events-faqs.md)
+> [Frequently asked questions about events](events-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md
Title: Deploy Events using the Azure portal - Azure Health Data Services
-description: Learn how to deploy the Events feature using the Azure portal.
+ Title: Deploy events using the Azure portal - Azure Health Data Services
+description: Learn how to deploy the events feature using the Azure portal.
Last updated 06/23/2022
-# Quickstart: Deploy Events using the Azure portal
+# Quickstart: Deploy events using the Azure portal
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this quickstart, learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send FHIR and DICOM event messages.
+In this quickstart, learn how to deploy the events feature in the Azure portal to send FHIR and DICOM event messages.
## Prerequisites
-It's important that you have the following prerequisites completed before you begin the steps of deploying the Events feature in Azure Health Data Services.
+Complete the following prerequisites before you begin deploying the events feature.
* [An active Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc) * [Microsoft Azure Event Hubs namespace and an event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
It's important that you have the following prerequisites completed before you be
* [FHIR service deployed in the workspace](../fhir/fhir-portal-quickstart.md) or [DICOM service deployed in the workspace](../dicom/deploy-dicom-services-in-azure.md) > [!IMPORTANT]
-> You will also need to make sure that the Microsoft.EventGrid resource provider has been successfully registered with your Azure subscription to deploy the Events feature. For more information, see [Azure resource providers and types - Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+> You will also need to make sure that the Microsoft.EventGrid resource provider has been successfully registered with your Azure subscription to deploy the events feature. For more information, see [Azure resource providers and types - Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
> [!NOTE]
-> For the purposes of this quickstart, we'll be using a basic Events set up and an event hub as the endpoint for Events messages. To learn how to deploy Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
+> For the purposes of this quickstart, we'll be using a basic events setup and an event hub as the endpoint for events messages. To learn how to deploy Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
-## Deploy Events
+## Deploy events
-1. Browse to the workspace that contains the FHIR or DICOM service you want to send Events messages from and select the **Events** button on the left hand side of the portal.
+1. Browse to the workspace that contains the FHIR or DICOM service you want to send events messages from and select the **Events** button on the left side of the portal.
:::image type="content" source="media/events-deploy-in-portal/events-workspace-select.png" alt-text="Screenshot of workspace and select Events button." lightbox="media/events-deploy-in-portal/events-workspace-select.png":::
It's important that you have the following prerequisites completed before you be
3. In the **Create Event Subscription** box, enter the following subscription information.
- * **Name**: Provide a name for your Events subscription.
- * **System Topic Name**: Provide a name for your System Topic.
+ * **Name**: Provide a name for your events subscription.
+ * **System Topic Name**: Provide a name for your system topic.
> [!NOTE]
- > The first time you set up the Events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional Events subscriptions that you create within the workspace.
+ > The first time you set up the events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional events subscriptions that you create within the workspace.
* **Event types**: Type of FHIR or DICOM events to send messages for (for example: created, updated, and deleted).
- * **Endpoint Details**: Endpoint to send Events messages to (for example: an Azure Event Hubs namespace + an event hub).
+ * **Endpoint Details**: Endpoint to send events messages to (for example: an Azure Event Hubs namespace + an event hub).
>[!NOTE] > For the purposes of this quickstart, we'll use the **Event Schema** and the **Managed Identity Type** settings at their default values.
It's important that you have the following prerequisites completed before you be
## Next steps
-In this quickstart, you learned how to deploy Events using the Azure portal.
+In this quickstart, you learned how to deploy events using the Azure portal.
-To learn how to enable the Events metrics, see
+To learn how to enable the events metrics, see
> [!div class="nextstepaction"]
-> [How to use Events metrics](events-use-metrics.md)
+> [How to use events metrics](events-use-metrics.md)
To learn how to export Event Grid system diagnostic logs and metrics, see > [!div class="nextstepaction"]
-> [How to enable diagnostic settings for Events](events-enable-diagnostic-settings.md)
+> [How to enable diagnostic settings for events](events-enable-diagnostic-settings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
Title: How to disable the Events feature and delete Azure Health Data Services workspaces - Azure Health Data Services
-description: Learn how to disable the Events feature and delete Azure Health Data Services workspaces.
+ Title: How to disable the events feature and delete Azure Health Data Services workspaces - Azure Health Data Services
+description: Learn how to disable the events feature and delete Azure Health Data Services workspaces.
Last updated 07/11/2023
-# How to disable the Events feature and delete Azure Health Data Services workspaces
+# How to disable the events feature and delete Azure Health Data Services workspaces
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to disable the Events feature and delete Azure Health Data Services workspaces.
+In this article, learn how to disable the events feature and delete Azure Health Data Services workspaces.
-## Disable Events
+## Disable events
-To disable Events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted.
+To disable events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted.
1. Select the **Event Subscription** to be deleted. In this example, we select an Event Subscription named **fhir-events**.
To disable Events from sending event messages for a single **Event Subscription*
:::image type="content" source="media/disable-delete-workspaces/events-select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription-delete.png":::
-3. To completely disable Events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain.
+3. To completely disable events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain.
:::image type="content" source="media/disable-delete-workspaces/events-disable-no-subscriptions.png" alt-text="Screenshot of Events subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/events-disable-no-subscriptions.png":::
To avoid errors and successfully delete workspaces, follow these steps and in th
## Next steps
-In this article, you learned how to disable the Events feature and delete workspaces.
+In this article, you learned how to disable the events feature and delete workspaces.
-To learn about how to troubleshoot Events, see
+To learn about how to troubleshoot events, see
> [!div class="nextstepaction"]
-> [Troubleshoot Events](events-troubleshooting-guide.md)
+> [Troubleshoot events](events-troubleshooting-guide.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-enable-diagnostic-settings.md
Title: Enable Events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services
-description: Learn how to enable Events diagnostic settings for diagnostic logs and metrics exporting.
+ Title: Enable events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services
+description: Learn how to enable events diagnostic settings for diagnostic logs and metrics export.
Last updated 06/23/2022
-# How to enable diagnostic settings for Events
+# How to enable diagnostic settings for events
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to enable the Events diagnostic settings for Azure Event Grid system topics.
+In this article, learn how to enable the events diagnostic settings for Azure Event Grid system topics.
## Resources
In this article, learn how to enable the Events diagnostic settings for Azure Ev
|More information about how to work with diagnostics logs.|[Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md)| > [!NOTE]
-> It might take up to 15 minutes for the first Events diagnostic logs and metrics to display in the destination of your choice.
+> It might take up to 15 minutes for the first events diagnostic logs and metrics to display in the destination of your choice.
## Next steps
-In this article, you learned how to enable diagnostic settings for Events.
+In this article, you learned how to enable diagnostic settings for events.
-To learn how to use Events metrics using the Azure portal, see
+To learn how to use events metrics using the Azure portal, see
> [!div class="nextstepaction"]
-> [How to use Events metrics](events-use-metrics.md)
+> [How to use events metrics](events-use-metrics.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md
Title: Frequently asked questions about Events - Azure Health Data Services
-description: Learn about the frequently asked questions about Events.
+ Title: Frequently asked questions about events - Azure Health Data Services
+description: Learn about the frequently asked questions about events.
Last updated 07/11/2023
-# Frequently asked questions about Events
+# Frequently asked questions about events
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification. ## Events: The basics
-## Can I use Events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
+## Can I use events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
-No. The Azure Health Data Services Events feature only currently supports the Azure Health Data Services FHIR and DICOM services.
+No. The Azure Health Data Services events feature only currently supports the Azure Health Data Services FHIR and DICOM services.
-## What FHIR resource events does Events support?
+## What FHIR resource changes do events support?
Events are generated from the following FHIR service types:
Events are generated from the following FHIR service types:
For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
-## Does Events support FHIR bundles?
+## Do events support FHIR bundles?
-Yes. The Events feature is designed to emit notifications of data changes at the FHIR resource level.
+Yes. The events feature is designed to emit notifications of data changes at the FHIR resource level.
Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-type.html) in the following ways:
Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-
> [!NOTE] > Events are not sent in the sequence of the data operations in the FHIR bundle.
-## What DICOM image events does Events support?
+## What DICOM image changes do events support?
Events are generated from the following DICOM service types:
Events are generated from the following DICOM service types:
* **DicomImageUpdated** - The event emitted after a DICOM image gets updated successfully.
-## What is the payload of an Events message?
+## What is the payload of an events message?
-For a detailed description of the Events message structure and both required and nonrequired elements, see [Events troubleshooting guide](events-troubleshooting-guide.md).
+For a detailed description of the events message structure and both required and nonrequired elements, see [Events message structures](events-message-structure.md).
-## What is the throughput for the Events messages?
+## What is the throughput for the events messages?
The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image change event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in the workspace.
-## How am I charged for using Events?
+## How am I charged for using events?
-There are no extra charges for using [Azure Health Data Services Events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription.
+There are no extra charges for using [Azure Health Data Services events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription.
## How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately?
Yes. We recommend that you use different subscribers for each individual FHIR or
Yes. Event Grid supports customer's Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
-## What is the expected time to receive an Events message?
+## What is the expected time to receive an events message?
On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) has been met.
-## Is it possible to receive duplicate Events messages?
+## Is it possible to receive duplicate events messages?
-Yes. The Event Grid guarantees at least one Events message delivery with its push mode. There may be chances that the event delivery request returns with a transient failure status code for random reasons. In this situation, the Event Grid considers that as a delivery failure and resends the Events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
+Yes. Event Grid guarantees at-least-once delivery of events messages with its push mode. The event delivery request might occasionally return a transient failure status code. In this situation, Event Grid considers that a delivery failure and resends the events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
Generally, we recommend that developers ensure idempotency for the event subscriber. The event ID or the combination of all fields in the `data` property of the message content is unique for each event. The developer can rely on them to deduplicate.
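As an illustration of that recommendation, here's a minimal deduplication sketch in Python; the in-memory set is an assumption for brevity, and a production subscriber would use a durable store instead.

```python
# Minimal deduplication sketch: remember processed event IDs so a redelivered
# event is ignored. A production subscriber would persist IDs in a durable
# store (for example, a database table with a unique key constraint).
processed_event_ids: set[str] = set()

def handle_event(event: dict) -> None:
    event_id = event["id"]  # unique per event in the Event Grid schema
    if event_id in processed_event_ids:
        return  # duplicate delivery; already handled
    processed_event_ids.add(event_id)
    # ... process the FHIR or DICOM change notification here ...
    print(f"Processed event {event_id} of type {event.get('eventType')}")

# Handling the same delivery twice only processes it once.
handle_event({"id": "abc-123", "eventType": "Microsoft.HealthcareApis.FhirResourceCreated"})
handle_event({"id": "abc-123", "eventType": "Microsoft.HealthcareApis.FhirResourceCreated"})
```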
Generally, we recommend that developers ensure idempotency for the event subscri
[FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md)
-[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
- [FAQs about Azure Health Data Services DICOM service](../dicom/dicom-services-faqs.yml)
+[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
+ [FAQs about Azure Health Data Services MedTech service](../iot/iot-connector-faqs.md) FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Message Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-message-structure.md
Title: Events message structure - Azure Health Data Services
-description: Learn about the Events message structures and required values.
+description: Learn about the events message structures and required values.
# Events message structures
-In this article, learn about the Events message structures, required and nonrequired elements, and see samples of Events message payloads.
+In this article, learn about the events message structures, required and nonrequired elements, and see samples of events message payloads.
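For orientation, a `FhirResourceCreated` notification in the Event Grid schema resembles the following Python sketch. The field values are hypothetical placeholders; the article itself remains the authoritative description of required and nonrequired elements.

```python
# Illustrative FhirResourceCreated event in the Event Grid schema.
# All values below are hypothetical placeholders.
sample_event = {
    "id": "e4c7f556-d72c-e177-100d-0d1b55d2490d",
    "topic": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.HealthcareApis/workspaces/<workspace>",
    "subject": "<fhir-account>.fhir.azurehealthcareapis.com/Patient/<resource-id>",
    "eventType": "Microsoft.HealthcareApis.FhirResourceCreated",
    "eventTime": "2023-08-22T01:25:17Z",
    "dataVersion": "1",
    "data": {
        "resourceType": "Patient",
        "resourceFhirAccount": "<fhir-account>.fhir.azurehealthcareapis.com",
        "resourceFhirId": "<resource-id>",
        "resourceVersionId": 1,
    },
}
```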
> [!IMPORTANT]
-> Events currently supports only the following operations:
+> Events currently supports the following operations:
> > * **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully. >
In this article, learn about the Events message structures, required and nonrequ
## Next steps
-In this article, you learned about the Events message structures.
+In this article, you learned about the events message structures.
-To learn how to deploy Events using the Azure portal, see
+To learn how to deploy events using the Azure portal, see
> [!div class="nextstepaction"]
-> [Deploy Events using the Azure portal](events-deploy-portal.md)
+> [Deploy events using the Azure portal](events-deploy-portal.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md
Title: What are Events? - Azure Health Data Services
-description: Learn about Events, its features, integrations, and next steps.
+ Title: What are events? - Azure Health Data Services
+description: Learn about events, their features, integrations, and next steps.
Last updated 07/11/2023
-# What are Events?
+# What are events?
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-Events are a notification and subscription feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, health data, and medical imaging data.
+Events are a subscription and notification feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, health data, and medical imaging data.
-When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the Events feature sends notification messages to Events subscribers. These event notification occurrences can be sent to multiple endpoints to trigger automation ranging from starting workflows to sending email and text messages to support the changes occurring from the health data it originated from. The Events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services workspace.
+When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the events feature sends notification messages to events subscribers. These event notifications can be sent to multiple endpoints to trigger automation, ranging from starting workflows to sending email and text messages, in support of the changes occurring in the originating health data. The events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services workspace.
> [!IMPORTANT]
-> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past resource changes or when the feature is turned off.
+> FHIR resource and DICOM image change data is only written and event messages are sent when the events feature is turned on. The events feature doesn't send messages on past resource changes or when the feature is turned off.
> [!TIP] > For more information about the features, configurations, and to learn about the use cases of the Azure Event Grid service, see [Azure Event Grid](../../event-grid/overview.md) > [!IMPORTANT] > Events currently supports the following operations:
Events are designed to support growth and changes in healthcare technology needs
## Configurable
-Choose the FHIR and DICOM event types that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune Events message delivery options.
+Choose the FHIR and DICOM event types that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune events message delivery options.
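For example, an event subscription filter that narrows delivery to create and delete notifications might look like the following Python sketch; the field names follow the Event Grid subscription schema, and the selection shown is illustrative.

```python
# Illustrative Event Grid subscription filter that narrows delivery to FHIR
# create and delete notifications; other event types are simply not delivered.
subscription_filter = {
    "includedEventTypes": [
        "Microsoft.HealthcareApis.FhirResourceCreated",
        "Microsoft.HealthcareApis.FhirResourceDeleted",
    ],
    # Dead-lettering and retry policy are configured alongside the filter on
    # the event subscription; see the Event Grid documentation for details.
}
print(subscription_filter)
```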
> [!NOTE] > The advanced features come as part of the Event Grid service. ## Extensible
-Use Events to send FHIR resource and DICOM image change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows to enhance items such as operational data, data analysis, and visibility to the incoming data capturing near real time.
+Use events to send FHIR resource and DICOM image change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows that enhance operational data, data analysis, and visibility into the incoming data in near real time.
## Secure
-Built on a platform that supports protected health information compliance with privacy, safety, and security in mind, the Events messages don't transmit sensitive data as part of the message payload.
+Events are built on a platform that supports protected health information compliance with privacy, safety, and security in mind.
-Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the Events message receiving endpoints of your choice.
+Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the events message receiving endpoints of your choice.
## Next steps
-To learn about deploying Events using the Azure portal, see
+To learn about deploying events using the Azure portal, see
> [!div class="nextstepaction"]
-> [Deploy Events using the Azure portal](./events-deploy-portal.md)
+> [Deploy events using the Azure portal](events-deploy-portal.md)
-To learn about the frequently asks questions (FAQs) about Events, see
-
-> [!div class="nextstepaction"]
-> [Frequently asked questions about Events](./events-faqs.md)
+To learn about troubleshooting events, see
-To learn about troubleshooting Events, see
+> [!div class="nextstepaction"]
+> [Troubleshoot events](events-troubleshooting-guide.md)
+To learn about the frequently asked questions (FAQs) about events, see
+
> [!div class="nextstepaction"]
-> [Troubleshoot Events](./events-troubleshooting-guide.md)
+> [Frequently asked questions about events](events-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-troubleshooting-guide.md
Title: Troubleshoot Events - Azure Health Data Services
-description: Learn how to troubleshoot Events.
+ Title: Troubleshoot events - Azure Health Data Services
+description: Learn how to troubleshoot events.
Last updated 07/12/2023
-# Troubleshoot Events
+# Troubleshoot events
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article provides resources for troubleshooting Events.
+This article provides resources to troubleshoot events.
> [!IMPORTANT]
-> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off.
+> FHIR resource and DICOM image change data is only written and event messages are sent when the events feature is turned on. The events feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off.
## Resources for troubleshooting > [!IMPORTANT]
-> Events currently supports only the following operations:
+> Events currently supports the following operations:
> > * **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully. >
This article provides resources for troubleshooting Events.
### Events message structures
-Use this resource to learn about the Events message structures, required and nonrequired elements, and see sample Events messages:
-* [Events message structures](./events-message-structure.md)
+Use this resource to learn about the events message structures, required and nonrequired elements, and see sample events messages:
+* [Events message structures](events-message-structure.md)
### How to's
-Use this resource to learn how to deploy Events in the Azure portal:
-* [Deploy Events using the Azure portal](./events-deploy-portal.md)
+Use this resource to learn how to deploy events in the Azure portal:
+* [Deploy events using the Azure portal](events-deploy-portal.md)
> [!IMPORTANT] > The Event Subscription requires access to whichever endpoint you chose to send Events messages to. For more information, see [Enable managed identity for a system topic](../../event-grid/enable-identity-system-topics.md).
-Use this resource to learn how to use Events metrics:
-* [How to use Events metrics](./events-display-metrics.md)
+Use this resource to learn how to use events metrics:
+* [How to use events metrics](events-display-metrics.md)
-Use this resource to learn how to enable diagnostic settings for Events:
-* [How to enable diagnostic settings for Events](./events-export-logs-metrics.md)
+Use this resource to learn how to enable diagnostic settings for events:
+* [How to enable diagnostic settings for events](events-export-logs-metrics.md)
## Contact support If you have a technical question about Events or if you have a support related issue, see [Create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) and complete the required fields under the **Problem description** tab. For more information about Azure support options, see [Azure support plans](https://azure.microsoft.com/support/options/#support-plans). ## Next steps
-In this article, you were provided resources for troubleshooting Events.
+In this article, you were provided resources for troubleshooting events.
-To learn about the frequently asked questions (FAQs) about Events, see
+To learn about the frequently asked questions (FAQs) about events, see
> [!div class="nextstepaction"]
-> [Frequently asked questions about Events](events-faqs.md)
+> [Frequently asked questions about events](events-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-use-metrics.md
Title: Use Events metrics - Azure Health Data Services
-description: Learn how use Events metrics.
+ Title: Use events metrics - Azure Health Data Services
+description: Learn how to use events metrics.
Last updated 07/11/2023
-# How to use Events metrics
+# How to use events metrics
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to use Events metrics using the Azure portal.
+In this article, learn how to use events metrics in the Azure portal.
> [!TIP] > To learn more about Azure Monitor and metrics, see [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md). > [!NOTE]
-> For the purposes of this article, an [Azure Event Hubs](../../event-hubs/event-hubs-about.md) was used as the Events message endpoint.
+> For the purposes of this article, an [Azure Event Hubs](../../event-hubs/event-hubs-about.md) event hub was used as the events message endpoint.
## Use metrics
In this article, learn how to use Events metrics using the Azure portal.
:::image type="content" source="media\events-display-metrics\events-metrics-subscription.png" alt-text="Screenshot of select the metrics button." lightbox="media\events-display-metrics\events-metrics-subscription.png":::
-4. From this page, notice that the Event Hubs received the incoming message presented in the previous Events subscription metrics pages.
+4. From this page, notice that the Event Hubs received the incoming message presented in the previous Events Subscription metrics pages.
:::image type="content" source="media\events-display-metrics\events-metrics-event-hub.png" alt-text="Screenshot of displaying event hubs metrics." lightbox="media\events-display-metrics\events-metrics-event-hub.png"::: ## Next steps
-In this tutorial, you learned how to use Events metrics using the Azure portal.
+In this tutorial, you learned how to use events metrics in the Azure portal.
-To learn how to export Events Azure Event Grid system diagnostic logs and metrics, see
+To learn how to enable events diagnostic settings, see
> [!div class="nextstepaction"]
-> [Enable diagnostic settings for Events](events-enable-diagnostic-settings.md)
+> [Enable diagnostic settings for events](events-enable-diagnostic-settings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
For more information, see [Supported FHIR features](fhir-features-supported.md).
FHIR service is our implementation of the FHIR specification that sits in the Azure Health Data Services, which allows you to have a FHIR service and a DICOM service within a single workspace. Azure API for FHIR was our initial GA product and is still available as a stand-alone product. The main feature differences are: * FHIR service has a limit of 4 TB, and Azure API for FHIR supports more than 4 TB.
-* FHIR service support [transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
+* FHIR service supports additional capabilities, such as:
+  * [Transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
+  * [Incremental Import](configure-import-data.md).
+  * [Autoscaling](fhir-service-autoscale.md), which is enabled by default.
* Azure API for FHIR has more platform features (such as customer managed keys, and cross region DR) that aren't yet available in FHIR service in Azure Health Data Services. ### What's the difference between the FHIR service in Azure Health Data Services and the open-source FHIR server?
SMART (Substitutable Medical Applications and Reusable Technology) on FHIR is a
### Does the FHIR service support SMART on FHIR?
-We have a basic SMART on FHIR proxy as part of the managed service. If this doesnΓÇÖt meet your needs, you can use the open-source FHIR proxy for more advanced SMART scenarios.
+Yes, SMART on FHIR capability is supported using [AHDS samples](https://aka.ms/azure-health-data-services-smart-on-fhir-sample). This capability is referred to as SMART on FHIR (Enhanced). SMART on FHIR (Enhanced) can be considered to meet the requirements of the [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and the [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg). For more information, visit the [SMART on FHIR (Enhanced) documentation](smart-on-fhir.md).
+ ### Can I create a custom FHIR resource?
There are two basic Delete types supported within the FHIR service. These are [D
### Can I perform health checks on FHIR service?
-To perform health check on FHIR service , enter `{{fhirurl}}/health/check` in the GET request. You should be able to see Status of FHIR service. HTTP Status code response with 200 and OverallStatus as "Healthy" in response, means your health check is succesful.
-In case of errors, you will recieve error response with HTTP status code 404 (Not Found) or status code 500 (Internal Server Error), and detailed information in response body in some scenarios.
+To perform a health check on the FHIR service, send a GET request to `{{fhirurl}}/health/check`. You should see the status of the FHIR service. An HTTP status code of 200 with OverallStatus "Healthy" in the response means your health check is successful.
+In case of errors, you'll receive an error response with HTTP status code 404 (Not Found) or 500 (Internal Server Error), and in some scenarios detailed information in the response body.
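As a quick illustration, the following Python sketch performs the health check and interprets the status code; the service URL is a hypothetical placeholder, and the `requests` package is assumed.

```python
import requests

# Hypothetical FHIR service URL; replace with your own endpoint.
fhir_url = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"

response = requests.get(f"{fhir_url}/health/check", timeout=10)

if response.status_code == 200:
    # A healthy service reports an OverallStatus of "Healthy" in the body.
    print("Health check response:", response.json())
else:
    # 404 (Not Found) or 500 (Internal Server Error) indicates a failed check.
    print("Health check failed with status:", response.status_code)
```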
## Next steps
healthcare-apis Frequently Asked Questions Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/frequently-asked-questions-convert-data.md
Previously updated : 08/03/2023 Last updated : 08/18/2023
You can use the `$convert-data` endpoint as a component within an ETL (extract,
However, the `$convert-data` operation itself isn't an ETL pipeline.
-## How can I persist the data into the FHIR service?
+## Where can I find an example of an ETL pipeline that I can reference?
+
+There's an example published in the [Azure Data Factory Template Gallery](../../data-factory/solution-templates-introduction.md#template-gallery) named **Transform HL7v2 health data to FHIR R4 format and write to ADLS Gen2**. This template transforms HL7v2 messages read from an Azure Data Lake Storage (ADLS) Gen2 or an Azure Blob Storage account into the FHIR R4 format. It then persists the transformed FHIR bundle JSON file into an ADLS Gen2 or a Blob Storage account. Once you're in the Azure Data Factory Template Gallery, you can search for the template.
++
+> [!IMPORTANT]
+> The purpose of this template is to help you get started with an ETL pipeline. Any steps in this pipeline can be removed, added, edited, or customized to fit your needs.
+>
+> In a scenario with batch processing of HL7v2 messages, this template does not take sequencing into account. Post processing will be needed if sequencing is a requirement.
+
+## How can I persist the converted data into the FHIR service using Postman?
You can use the FHIR service's APIs to persist the converted data into the FHIR service by using `POST {{fhirUrl}}/{{FHIR resource type}}` with the request body containing the FHIR resource to be persisted in JSON format.
-* For more information about using Postman with the FHIR service, see [Access the Azure Health Data Services FHIR service using Postman](use-postman.md).
+For more information about using Postman with the FHIR service, see [Access the Azure Health Data Services FHIR service using Postman](use-postman.md).
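For example, the following Python sketch posts a converted Patient resource to the FHIR service; the service URL, access token, and resource content are hypothetical placeholders, and the `requests` package is assumed.

```python
import requests

# Assumptions for illustration: fhir_url points at your FHIR service and
# access_token is an Azure AD token with permission to write FHIR data.
fhir_url = "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"
access_token = "<azure-ad-access-token>"

# A converted FHIR resource produced by $convert-data (trimmed for brevity).
patient = {"resourceType": "Patient", "name": [{"family": "Example"}]}

response = requests.post(
    f"{fhir_url}/Patient",  # POST {{fhirUrl}}/{{FHIR resource type}}
    json=patient,
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/fhir+json",
    },
    timeout=30,
)
# 201 Created indicates the resource was persisted.
print(response.status_code, response.headers.get("Location"))
```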
## Is there a difference in the experience of the $convert-data endpoint in Azure API for FHIR versus in the Azure Health Data Services?
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Below tutorials provide steps to enable SMART on FHIR applications with FHIR Ser
- After registering the application, make note of the applicationId for client application. - Ensure you have access to Azure Subscription of FHIR service, to create resources and add role assignments.
-## SMART on FHIR Enhanced using Azure Health Data Services Samples
+## SMART on FHIR using Azure Health Data Services Samples (SMART on FHIR (Enhanced))
### Step 1: Set up FHIR SMART user role Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role will be able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
Follow the steps listed under section [Manage Users: Assign Users to Role](https
### Step 2: FHIR server integration with samples For integration with Azure Health Data Services samples, you need to follow the steps in the samples' open-source solution.
-**[Click on the link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to navigate to Azure Health Data Service Samples OSS. This steps listed in the document will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
+**[Click on the link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to navigate to Azure Health Data Service Samples OSS. The steps listed in the document enable integration of the FHIR server with other Azure services (such as APIM, Azure Functions, and more).
> [!NOTE] > Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg) compliance, using Azure Active Directory as the identity provider workflow.
For integration with Azure Health Data Services samples, you would need to follo
<summary> Click to expand! </summary> > [!NOTE]
-> This is another option to using "SMART on FHIR Enhanced using AHDS Samples" mentioned above. We suggest you to adopt SMART on FHIR enhanced. SMART on FHIR Proxy option is legacy option.
-> SMART on FHIR enhanced version provides added capabilities than SMART on FHIR proxy. SMART on FHIR enhanced capability can be considered to meet requirements with [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg).
+> This is an alternative to the SMART on FHIR (Enhanced) option using AHDS samples mentioned above. We suggest that you adopt SMART on FHIR (Enhanced); the SMART on FHIR proxy is a legacy option.
+> SMART on FHIR (Enhanced) provides more capabilities than the SMART on FHIR proxy. SMART on FHIR (Enhanced) can be considered to meet the requirements of the [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and the [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg).
-### Step 1 : Set admin consent for your client application
+### Step 1: Set admin consent for your client application
To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
Add the reply URL to the public client application that you created earlier for
<!![Reply URL configured for the public client](media/tutorial-smart-on-fhir/configure-reply-url.png)>
-### Step 3 : Get a test patient
+### Step 3: Get a test patient
To test the FHIR service and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
-### Step 4 : Download the SMART on FHIR app launcher
+### Step 4: Download the SMART on FHIR app launcher
The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup.
Use this command to run the application:
dotnet run ```
-### Step 5 : Test the SMART on FHIR proxy
+### Step 5: Test the SMART on FHIR proxy
After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen:
healthcare-apis Healthcare Apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-configure-private-link.md
Ensure the region for the new private endpoint is the same as the region for you
[![Screen image of the Azure portal Basics Tab.](media/private-link/private-link-basics.png)](media/private-link/private-link-basics.png#lightbox)
-For the resource type, search and select **Microsoft.HealthcareApis/services** from the drop-down list. For the resource, select the workspace in the resource group. The target subresource, **healthcareworkspace**, is automatically populated.
+For the resource type, search and select **Microsoft.HealthcareApis/workspaces** from the drop-down list. For the resource, select the workspace in the resource group. The target subresource, **healthcareworkspace**, is automatically populated.
[![Screen image of the Azure portal Resource tab.](media/private-link/private-link-resource.png)](media/private-link/private-link-resource.png#lightbox)
industry Get Sensor Data From Sensor Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/get-sensor-data-from-sensor-partner.md
Title: Get sensor data from the partners
description: This article describes how to get sensor data from partners. + Last updated 11/04/2019
industry Ingest Historical Telemetry Data In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/ingest-historical-telemetry-data-in-azure-farmbeats.md
Last updated 11/04/2019 -+ # Ingest historical telemetry data
internet-peering Peering Service Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/peering-service-partner-overview.md
+
+ Title: Azure Peering Service partner overview
+description: Learn how to become an Azure Peering Service partner.
++++ Last updated : 08/18/2023++
+# Azure Peering Service partner overview
+
+This article helps you understand how to become an Azure Peering Service partner. It also describes the different types of Peering Service connections and the monitoring platform. For more information about Azure Peering Service, see [Azure Peering Service overview](../peering-service/about.md).
+
+## Peering Service partner requirements
+
+To become a Peering Service partner, you must meet the following technical requirements:
+
+- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public.
+- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy.
+- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints.
+- The Peer MUST supply details of what class of traffic and endpoints are housed in each advertised subnet.
+- The Peer MUST NOT terminate the peering on a device running a stateful firewall.
+- The Peer CANNOT have two local connections configured on the same router, as diversity is required.
+- The Peer CANNOT apply rate limiting to their connection.
+- The Peer CANNOT configure a local redundant connection as a backup connection. Backup connections must be in a different location than primary connections.
+- It's recommended to create Peering Service peerings in multiple locations so geo-redundancy can be achieved.
+- Primary, backup, and redundant sessions all must have the same bandwidth.
+- All infrastructure prefixes are registered in the Azure portal and advertised with community string 8075:8007.
+- Microsoft configures all the interconnect links as LAG (link bundles) by default, so the Peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links.
+
+If you meet all of the listed requirements and would like to become a Peering Service partner, you must sign an agreement. Contact peeringservice@microsoft.com to get started.
+
+## Types of Peering Service connections
+
+To become a Peering Service partner, you must request a direct peering interconnect with Microsoft. Interconnects come in three types, depending on your use case.
+
+- **AS8075** - A direct peering interconnect enabled for Peering Service made for Internet Service providers (ISPs)
+- **AS8075 (with Voice)** - A direct peering interconnect enabled for Peering Service made for Internet Service providers (ISPs). This type is optimized for communications services (messaging, conferencing, etc.), and allows you to integrate your communications services infrastructure (SBC, SIP gateways, and other infrastructure devices) with Azure Communication Services and Microsoft Teams.
+- **AS8075 (with exchange route server)** - A direct peering interconnect enabled for Peering Service and made for Internet Exchange providers (IXPs) who require a route server.
+
+### Monitoring platform
+
+Service monitoring is offered to analyze user traffic and routing. Metrics are available in the Azure portal to track the performance and availability of your Peering Service connection. For more information, see [Peering Service monitoring platform](../peering-service/about.md#monitoring-platform).
+
+In addition, Peering Service partners are able to see received routes reported in the Azure portal.
++
+## Next steps
+
+- To establish a Direct interconnect for Peering Service, see [Internet peering for Peering Service walkthrough](walkthrough-peering-service-all.md).
+- To establish a Direct interconnect for Peering Service Voice, see [Internet peering for Peering Service Voice walkthrough](walkthrough-communications-services-partner.md).
+- To establish a Direct interconnect for Communications Exchange with Route Server, see [Internet peering for MAPS Exchange with Route Server walkthrough](walkthrough-exchange-route-server-partner.md).
internet-peering Walkthrough Communications Services Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md
Title: Internet peering for Peering Service Voice Services walkthrough
+ Title: Internet peering for Peering Service Voice walkthrough
description: Learn about Internet peering for Peering Service Voice Services, its requirements, the steps to establish direct interconnect, and how to register and activate a prefix.
Last updated 08/09/2023
-# Internet peering for Peering Service Voice Services walkthrough
+# Internet peering for Peering Service Voice walkthrough
-In this article, you learn steps to establish a Peering Service interconnect between a voice services provider and Microsoft.
+In this article, you learn how to establish a Peering Service interconnect between a voice services provider and Microsoft.
**Voice Services Providers** are the organizations that offer communication services (messaging, conferencing, and other communications services) and want to integrate their communications services infrastructure (SBC, SIP gateways, and other infrastructure devices) with Azure Communication Services and Microsoft Teams.
internet-peering Walkthrough Peering Service All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-peering-service-all.md
To establish direct interconnect for Peering Service, follow these requirements:
## Establish Direct Interconnect for Peering Service
+Ensure that you sign a Microsoft Azure Peering Service agreement before proceeding. For more information, see [Azure Peering Service partner overview requirements](./peering-service-partner-overview.md#peering-service-partner-requirements).
+ To establish a Peering Service interconnect with Microsoft, follow these steps: ### 1. Associate your public ASN with your Azure subscription
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
The following screenshot shows how the successful command response displays in t
:::image type="content" source="media/howto-use-commands/simple-command-ui.png" alt-text="Screenshot showing how to view command payload for a standard command." lightbox="media/howto-use-commands/simple-command-ui.png":::
+> [!NOTE]
+> For standard commands, there's a timeout of 30 seconds. If a device doesn't respond within 30 seconds, IoT Central assumes that the command failed. This timeout period isn't configurable.
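To stay within that window, a device should acknowledge a standard command as soon as it receives it. The following Python sketch, assuming the `azure-iot-device` SDK and a hypothetical connection string, responds to a direct method promptly:

```python
from azure.iot.device import IoTHubDeviceClient, MethodResponse

# Hypothetical device connection string, shown only for illustration.
client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

def method_request_handler(method_request):
    # Build and send the response right away; IoT Central treats a standard
    # command as failed if no response arrives within 30 seconds.
    payload = {"result": f"handled {method_request.name}"}
    response = MethodResponse.create_from_method_request(method_request, 200, payload)
    client.send_method_response(response)

client.on_method_request_received = method_request_handler
client.connect()
```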
+ ## Long-running commands In a long-running command, a device doesn't immediately complete the command. Instead, the device acknowledges receipt of the command and then later confirms that the command completed. This approach lets a device complete a long-running operation without keeping the connection to IoT Central open.
iot-develop Quickstart Devkit Stm B L475e Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-iot-hub.md
Title: Connect an STMicroelectronics B-L475E-IOT01A to Azure IoT Hub quickstart
-description: Use Azure RTOS embedded software to connect an STMicroelectronics B-L475E-IOT01A device to Azure IoT Hub and send telemetry.
+ Title: Quickstart - Connect an STMicroelectronics B-L475E-IOT01A to Azure IoT Hub
+description: A quickstart that uses Azure RTOS embedded software to connect an STMicroelectronics B-L475E-IOT01A device to Azure IoT Hub and send telemetry.
ms.devlang: c Last updated 06/27/2023
+# CustomerIntent: As an embedded device developer, I want to use Azure RTOS to connect my device to Azure IoT Hub, so that I can learn about device connectivity and development.
# Quickstart: Connect an STMicroelectronics B-L475E-IOT01A Discovery kit to IoT Hub
For debugging the application, see [Debugging with Visual Studio Code](https://g
[!INCLUDE [iot-develop-cleanup-resources](../../includes/iot-develop-cleanup-resources.md)]
-## Next steps
+## Next step
In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You connected the STM DevKit to Azure, and carried out tasks such as viewing telemetry and calling a method on the device.
iot-edge Debug Module Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/debug-module-vs-code.md
Select **Start Debugging** or select **F5**. Select the process to attach to. In
The Docker and Moby engines support SSH connections to containers allowing you to debug in Visual Studio Code connected to a remote device. You need to meet the following prerequisites before you can use this feature.
+Remote SSH debugging prerequisites may be different depending on the language you are using. The following sections describe the setup for .NET. For information on other languages, see [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) for an overview. Details about how to configure remote debugging are included in debugging sections for each language in the Visual Studio Code documentation.
+ ### Configure Docker SSH tunneling 1. Follow the steps in [Docker SSH tunneling](https://code.visualstudio.com/docs/containers/ssh#_set-up-ssh-tunneling) to configure SSH tunneling on your development computer. SSH tunneling requires public/private key pair authentication and a Docker context defining the remote device endpoint.
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
Title: Manage IoT Edge certificates+ description: How to install and manage certificates on an Azure IoT Edge device to prepare for production deployment.
All IoT Edge devices use certificates to create secure connections between the runtime and any modules running on the device. IoT Edge devices functioning as gateways use these same certificates to connect to their downstream devices, too. > [!NOTE]
-> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You do not need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. In many cases, it's actually an intermediate CA certificate.
+> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You don't need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. Often, it's actually an intermediate CA certificate.
## Prerequisites
Edge Daemon issues module server and identity certificates for use by Edge modul
### Renewal
-Server certificates may be issued off the Edge CA certificate or through a DPS-configured CA. Regardless of the issuance method, these certificates must be renewed by the module.
+Server certificates may be issued off the Edge CA certificate. Regardless of how they're issued, the module must renew these certificates. If you develop a custom module, you must implement the renewal logic in your module.
+
+The *edgeHub* module supports a certificate renewal feature. You can configure the *edgeHub* module server certificate renewal using the following environment variables:
+
+* **ServerCertificateRenewAfterInMs**: Sets the duration in milliseconds after which the *edgeHub* server certificate is renewed, irrespective of the certificate's expiry time.
+* **MaxCheckCertExpiryInMs**: Sets the interval in milliseconds at which the *edgeHub* service checks the server certificate for expiration. If the variable is set, the check happens irrespective of the certificate's expiry time.
+
+For more information about the environment variables, see [EdgeHub and EdgeAgent environment variables](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md).
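In a deployment manifest, these variables belong in the `env` section of the *edgeHub* module settings. The following sketch shows the shape of that section and the millisecond arithmetic; the manifest itself is JSON, the Python dict below only mirrors it, and the 60-day and one-day values are illustrative, not recommendations.

```python
# Illustrative values for the edgeHub certificate renewal environment variables.
# A deployment manifest is JSON; this dict mirrors its edgeHub "env" section.
DAY_MS = 24 * 60 * 60 * 1000  # milliseconds in one day

edge_hub_env = {
    # Renew the edgeHub server certificate every 60 days (example value).
    "ServerCertificateRenewAfterInMs": {"value": str(60 * DAY_MS)},
    # Check the server certificate for expiration once a day (example value).
    "MaxCheckCertExpiryInMs": {"value": str(DAY_MS)},
}
```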
## Changes in 1.2 and later
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
Title: Understand how IoT Edge uses certificates for security+ description: How Azure IoT Edge uses certificate to validate devices, modules, and downstream devices enabling secure connections between them.
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Previously updated : 02/09/2023 Last updated : 08/15/2023
IoT Hub enforces other operational limits:
| Operation | Limit | | | -- |
-| Devices | The total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. The only way to increase this limit is to contact [Microsoft Support](https://azure.microsoft.com/support/options/).|
+| Devices | The total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. |
| File uploads | 10 concurrent file uploads per device. | | Jobs<sup>1</sup> | Maximum concurrent jobs are 1 (for Free and S1), 5 (for S2), and 10 (for S3). However, the max concurrent [device import/export jobs](iot-hub-bulk-identity-mgmt.md) is 1 for all tiers. <br/>Job history is retained up to 30 days. | | Additional endpoints | Basic and standard SKU hubs may have 10 additional endpoints. Free SKU hubs may have one additional endpoint. |
key-vault Assign Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/assign-access-policy.md
description: How to use the Azure CLI to assign a Key Vault access policy to a s
tags: azure-resource-manager-+
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/best-practices.md
Azure Key Vault safeguards encryption keys and secrets like certificates, connec
## Use separate key vaults
-Our recommendation is to use a vault per application per environment (development, pre-production, and production), per region. This helps you not share secrets across environments and regions. It will also reduce the threat in case of a breach.
+Our recommendation is to use a vault per application per environment (development, pre-production, and production), per region. Granular isolation helps you avoid sharing secrets across applications, environments, and regions, and it also reduces the threat if there is a breach.
### Why we recommend separate key vaults
Key vaults define security boundaries for stored secrets. Grouping secrets into
Encryption keys and secrets like certificates, connection strings, and passwords are sensitive and business critical. You need to secure access to your key vaults by allowing only authorized applications and users. [Azure Key Vault security features](security-features.md) provides an overview of the Key Vault access model. It explains authentication and authorization. It also describes how to secure access to your key vaults.
-Suggestions for controlling access to your vault are as follows:
+Recommendations for controlling access to your vault are as follows:
- Lock down access to your subscription, resource group, and key vaults using role-based access control (RBAC).-- Create access policies for every vault.-- Use the principle of least privilege access to grant access.-- Turn on firewall and [virtual network service endpoints](overview-vnet-service-endpoints.md).
+ - Assign RBAC roles at Key Vault scope for applications, services, and workloads that require persistent access to Key Vault (see the sketch after this list)
+ - Assign just-in-time eligible RBAC roles for operators, administrators, and other user accounts that require privileged access to Key Vault using [Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-configure.md)
+ - Require at least one approver
+ - Enforce multi-factor authentication
+- Restrict network access with [Private Link](private-link-service.md), [firewall and virtual networks](network-security.md)
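As an illustration of the first recommendation, an application that holds a data-plane RBAC role such as **Key Vault Secrets User** at vault scope can read a secret with its own identity, without any shared credentials. The following is a minimal sketch using the Azure SDK for Python; the vault URL and secret name are placeholders.

```python
# A minimal sketch: an app reads a secret using its RBAC-granted identity.
# Requires the azure-identity and azure-keyvault-secrets packages.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves a managed identity, environment credentials,
# or a developer sign-in, so no secret is embedded in the application itself.
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://<your-vault>.vault.azure.net",
                      credential=credential)

secret = client.get_secret("example-connection-string")  # placeholder name
print(secret.name)
```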
## Turn on data protection for your vault
For more information, see [Azure Key Vault soft-delete overview](soft-delete-ove
## Backup
-Purge protection prevents malicious and accidental deletion of vault objects for up to 90 days. In scenarios when purge protection is not a possible option, we recommend backup vault objects, which can't be recreated from other sources like encryption keys generated within the vault.
+Purge protection prevents malicious and accidental deletion of vault objects for up to 90 days. In scenarios where purge protection isn't a possible option, we recommend backing up vault objects that can't be recreated from other sources, like encryption keys generated within the vault.
For more information about backup, see [Azure Key Vault backup and restore](backup.md)
A multitenant solution is built on an architecture where components are used to
## Frequently Asked Questions: ### Can I use Key Vault role-based access control (RBAC) permission model object-scope assignments to provide isolation for application teams within Key Vault?
-No. RBAC permission model allows to assign access to individual objects in Key Vault to user or application, but any administrative operations like network access control, monitoring, and objects management require vault level permissions which will then expose secure information to operators across application teams.
+No. The RBAC permission model lets you assign access to individual objects in Key Vault to a user or application, but only for read operations. Any administrative operations, like network access control, monitoring, and object management, require vault-level permissions. Having one key vault per application provides secure isolation for operators across application teams.
## Next steps
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
> Key Vault resource provider supports two resource types: **vaults** and **managed HSMs**. Access control described in this article only applies to **vaults**. To learn more about access control for managed HSM, see [Managed HSM access control](../managed-hsm/access-control.md). > [!NOTE]
-> Azure App Service certificate configuration through Azure Portal does not support Key Vault RBAC permission model. You can use Azure PowerShell, Azure CLI, ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignments for 'Microsoft Azure App Service' global identity.
+> Azure App Service certificate configuration through the Azure portal doesn't support the Key Vault RBAC permission model, but you can use Azure PowerShell, the Azure CLI, or ARM template deployments. App Service certificate management requires the **Key Vault Secrets User** and **Key Vault Reader** role assignments for the App Service global identity, for example, 'Microsoft Azure App Service' in public cloud.
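If you script these role assignments, a sketch with the `azure-mgmt-authorization` Python package might look like the following; the subscription ID, vault scope, principal object ID, and role definition GUIDs are placeholders that you must look up for your environment.

```python
# A sketch: assign Key Vault Secrets User and Key Vault Reader at vault scope.
# Every ID below is a placeholder; look up the real values for your tenant.
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
vault_scope = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
               "/providers/Microsoft.KeyVault/vaults/<vault-name>")
principal_id = "<object-id-of-the-App-Service-global-identity>"

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

for role_guid in ("<key-vault-secrets-user-guid>", "<key-vault-reader-guid>"):
    client.role_assignments.create(
        scope=vault_scope,
        role_assignment_name=str(uuid.uuid4()),  # assignment names are GUIDs
        parameters={
            "role_definition_id": (f"/subscriptions/{subscription_id}/providers"
                                   f"/Microsoft.Authorization/roleDefinitions/{role_guid}"),
            "principal_id": principal_id,
            "principal_type": "ServicePrincipal",
        },
    )
```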
Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
lab-services How To Prepare Windows Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md
Title: Prepare Windows lab template
description: Prepare a Windows-based lab template in Azure Lab Services. Configure commonly used software and OS settings, such as Windows Update, OneDrive, and Microsoft 365. +
Install other apps commonly used for teaching through the Windows Store app. Sug
## Next steps -- Learn how to manage cost by [controlling Windows shutdown behavior](how-to-windows-shutdown.md)
+- Learn how to manage cost by [controlling Windows shutdown behavior](how-to-windows-shutdown.md)
load-balancer Load Balancer Test Frontend Reachability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-test-frontend-reachability.md
Based on the current health probe state of your backend instances, you receive d
## Usage considerations - ICMP pings can't be disabled and are allowed by default on Standard Public Load Balancers.
+- ICMP pings with packet sizes larger than 64 bytes are dropped, leading to timeouts.
> [!NOTE] > ICMP ping requests are not sent to the backend instances; they are handled by the Load Balancer.
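To stay under that limit when you test reachability from a client machine, keep the ICMP payload small. The following small sketch shells out to the system `ping` (Linux syntax) and assumes the limit applies to the ICMP packet, that is, the 8-byte header plus payload; the frontend IP address is a placeholder.

```python
# A sketch: probe a Standard Load Balancer frontend with a small ICMP payload.
import subprocess

frontend_ip = "203.0.113.10"  # placeholder frontend IP address

# -c 4: send four echo requests. -s 56: 56 payload bytes, which with the
# 8-byte ICMP header yields a 64-byte ICMP packet.
result = subprocess.run(["ping", "-c", "4", "-s", "56", frontend_ip],
                        capture_output=True, text=True)
print(result.stdout)
```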
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
Previously updated : 07/18/2022 Last updated : 08/17/2023 -+ #Customer intent: I want to create a internal load balancer so that I can load balance internal traffic to VMs. # Quickstart: Create an internal load balancer to load balance VMs using the Azure portal
-Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer for a backend pool with two virtual machines. Additional resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
+Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer for a backend pool with two virtual machines. Other resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
:::image type="content" source="media/quickstart-load-balancer-standard-internal-portal/internal-load-balancer-resources.png" alt-text="Diagram of resources deployed for internal load balancer.":::
+> [!NOTE]
+> In this example, you create a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
+> For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Get started with Azure Load Balancer by using the Azure portal to create an inte
Sign in to the [Azure portal](https://portal.azure.com).
+## Create NAT gateway
+
+All outbound internet traffic traverses the NAT gateway. Use the following steps to create a NAT gateway for the virtual network used in this quickstart. A scripted alternative follows these steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+1. Select **+ Create**.
+
+1. In the **Basics** tab of **Create network address translation (NAT) gateway**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **CreateIntLBQS-rg** in Name. </br> Select **OK**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **East US**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
+
+1. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
+
+1. Select **Create a new public IP address** under **Public IP addresses**.
+
+1. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**.
+
+1. Select **OK**.
+
+1. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
+
+1. Select **Create**.
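If you'd rather script this step than use the portal, the following hedged sketch with the `azure-mgmt-network` Python package creates the same public IP address and NAT gateway. The subscription ID is a placeholder; the resource names and settings reuse the values from the steps above.

```python
# A sketch: create the public IP and NAT gateway from the portal steps in code.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
resource_group, location = "CreateIntLBQS-rg", "eastus"

# NAT gateways require a Standard SKU, statically allocated public IP address.
public_ip = client.public_ip_addresses.begin_create_or_update(
    resource_group, "myNATgatewayIP",
    {"location": location, "sku": {"name": "Standard"},
     "public_ip_allocation_method": "Static"},
).result()

client.nat_gateways.begin_create_or_update(
    resource_group, "myNATgateway",
    {"location": location, "sku": {"name": "Standard"},
     "idle_timeout_in_minutes": 15,
     "public_ip_addresses": [{"id": public_ip.id}]},
).result()
```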
+ ## Create the virtual network When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
A private IP address in the virtual network is configured as the frontend for th
An Azure Bastion host is created to securely manage the virtual machines and install IIS.
-In this section, you'll create a virtual network, subnet, and Azure Bastion host.
+In this section, you create a virtual network, subnet, and Azure Bastion host.
1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
-2. In **Virtual networks**, select **+ Create**.
+1. In **Virtual networks**, select **+ Create**.
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+1. In **Create virtual network**, enter or select this information in the **Basics** tab:
| **Setting** | **Value** | ||--| | **Project Details** | | | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. </br> In **Name** enter **CreateIntLBQS-rg**. </br> Select **OK**. |
+ | Resource Group | Select **CreateIntLBQS-rg**. |
| **Instance details** | | | Name | Enter **myVNet** |
- | Region | Select **West US 3** |
+ | Region | Select **East US** |
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+1. Select the **IP Addresses** tab or select the **Next** button at the bottom of the page.
-5. In the **IP Addresses** tab, enter this information:
+1. In the **IP Addresses** tab, enter this information:
| Setting | Value | |--|-| | IPv4 address space | Enter **10.1.0.0/16** |
-6. Under **Subnet name**, select the word **default**.
+1. Under **Subnets**, select the word **default**.
-7. In **Edit subnet**, enter this information:
+1. In **Edit subnet**, enter this information:
| Setting | Value | |--|-| | Subnet name | Enter **myBackendSubnet** | | Subnet address range | Enter **10.1.0.0/24** |
+ | **Security** | |
+ | NAT Gateway | Select **myNATgateway**. |
-8. Select **Save**.
+1. Select **Add**.
-9. Select the **Security** tab.
+1. Select the **Security** tab.
-10. Under **BastionHost**, select **Enable**. Enter this information:
+1. Under **BastionHost**, select **Enable**. Enter this information:
| Setting | Value | |--|-| | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
+ | Public IP Address | Select **Create new**. </br> Enter **myBastionIP** in Name. </br> Select **OK**. |
> [!IMPORTANT]- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] >
-11. Select the **Review + create** tab or select the **Review + create** button.
+1. Select the **Review + create** tab or select the **Review + create** button.
-12. Select **Create**.
+1. Select **Create**.
> [!NOTE]
In this section, you'll create a virtual network, subnet, and Azure Bastion host
In this section, you create a load balancer that load balances virtual machines.
-During the creation of the load balancer, you'll configure:
+During the creation of the load balancer, you configure:
-* Frontend IP address
-* Backend pool
-* Inbound load-balancing rules
+- Frontend IP address
+- Backend pool
+- Inbound load-balancing rules
1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-2. In the **Load balancer** page, select **Create**.
+1. In the **Load balancer** page, select **Create**.
-3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+1. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
- | Setting | Value |
- | | |
+ | Setting | Value |
+ | | |
| **Project details** | | | Subscription | Select your subscription. | | Resource group | Select **CreateIntLBQS-rg**. | | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **West US 3**. |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **East US**. |
| SKU | Leave the default **Standard**. |
- | Type | Select **Internal**. |
+ | Type | Select **Internal**. |
| Tier | Leave the default of **Regional**. | - :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/create-standard-internal-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true":::
-4. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
-
-6. Enter **myFrontend** in **Name**.
-
-7. Select **myBackendSubnet** in **Subnet**.
-
-8. Select **Dynamic** for **Assignment**.
-
-9. Select **Zone-redundant** in **Availability zone**.
-
-10. Select **Add**.
-
-11. Select **Next: Backend pools** at the bottom of the page.
-
-12. In the **Backend pools** tab, select **+ Add a backend pool**.
-
-13. Enter **myBackendPool** for **Name** in **Add backend pool**.
+1. Select **Next: Frontend IP configuration** at the bottom of the page.
-14. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
+1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**, then enter or select the following information:
-15. Select **IPv4** or **IPv6** for **IP version**.
-
-16. Select **Add**.
-
-17. Select the **Next: Inbound rules** button at the bottom of the page.
-
-18. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-
-19. In **Add load balancing rule**, enter or select the following information:
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myFrontend** |
+ | Private IP address version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Virtual network | Select **myVNet** |
+ | Subnet | Select **myBackendSubnet** |
+ | Assignment | Select **Dynamic** |
+ | Availability zone | Select **Zone-redundant** |
+
+1. Select **Add**.
+1. Select **Next: Backend pools** at the bottom of the page.
+1. In the **Backend pools** tab, select **+ Add a backend pool**.
+1. Enter **myBackendPool** for **Name** in **Add backend pool**.
+1. Select **IP Address** for **Backend Pool Configuration**.
+1. Select **Save**.
+1. Select the **Next: Inbound rules** button at the bottom of the page.
+1. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+1. In **Add load balancing rule**, enter or select the following information:
+
+ | **Setting** | **Value** |
+ | -- | |
| Name | Enter **myHTTPRule** | | IP Version | Select **IPv4** or **IPv6** depending on your requirements. | | Frontend IP address | Select **myFrontend**. |
During the creation of the load balancer, you'll configure:
| Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. |
- | TCP reset | Select **Enabled**. |
- | Floating IP | Select **Disabled**. |
-
-20. Select **Add**.
-
-21. Select the blue **Review + create** button at the bottom of the page.
-
-22. Select **Create**.
-
- > [!NOTE]
- > In this example you'll create a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
- > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
-
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-
-1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
-
-2. In **NAT gateways**, select **+ Create**.
+ | Enable TCP reset | Select the checkbox. |
+ | Enable Floating IP | Leave the default of unselected. |
-3. In **Create network address translation (NAT) gateway**, enter or select the following information:
+1. Select **Save**.
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreateIntLBQS-rg**. |
- | **Instance details** | |
- | NAT gateway name | Enter **myNATgateway**. |
- | Region | Select **West US 3**. |
- | Availability zone | Select **None**. |
- | Idle timeout (minutes) | Enter **15**. |
-
-4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
+1. Select the blue **Review + create** button at the bottom of the page.
-5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
-
-6. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**.
-
-7. Select **OK**.
-
-8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
-
-9. In **Virtual network**, select **myVNet**.
-
-10. Select **myBackendSubnet** under **Subnet name**.
-
-11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
-
-12. Select **Create**.
+1. Select **Create**.
## Create virtual machines
-In this section, you'll create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
+In this section, you create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
These VMs are added to the backend pool of the load balancer that was created earlier.
These VMs are added to the backend pool of the load balancer that was created ea
| Resource Group | Select **CreateIntLBQS-rg** | | **Instance details** | | | Virtual machine name | Enter **myVM1** |
- | Region | Select **(US) West US 3** |
+ | Region | Select **(US) East US** |
| Availability Options | Select **Availability zones** | | Availability zone | Select **1** | | Security type | Select **Standard**. |
These VMs are added to the backend pool of the load balancer that was created ea
## Create test virtual machine
-In this section, you'll create a VM named **myTestVM**. This VM will be used to test the load balancer configuration.
+In this section, you create a VM named **myTestVM**. This VM is used to test the load balancer configuration.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
In this section, you'll create a VM named **myTestVM**. This VM will be used to
| Resource Group | Select **CreateIntLBQS-rg** | | **Instance details** | | | Virtual machine name | Enter **myTestVM** |
- | Region | Select **(US) West US 3** |
+ | Region | Select **(US) East US** |
| Availability Options | Select **No infrastructure redundancy required** | | Security type | Select **Standard**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen2** |
+ | Image | Select **Windows Server 2022 Datacenter - x64 Gen2** |
| Azure Spot instance | Leave the default of unselected. | | Size | Choose VM size or take default setting | | **Administrator account** | |
In this section, you'll create a VM named **myTestVM**. This VM will be used to
## Test the load balancer
-In this section, you'll test the load balancer by connecting to the **myTestVM** and verifying the webpage.
+In this section, you test the load balancer by connecting to the **myTestVM** and verifying the webpage.
1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
In this section, you'll test the load balancer by connecting to the **myTestVM**
7. Enter the username and password entered during VM creation.
-8. Open **Internet Explorer** on **myTestVM**.
+8. Open **Microsoft Edge** on **myTestVM**.
9. Enter the IP address from the previous step into the address bar of the browser. The custom page displaying one of the backend server names is displayed on the browser. In this example, it's **10.1.0.4**. :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Screenshot shows a browser window displaying the customized page, as expected." border="true":::
-To see the load balancer distribute traffic across both VMs, you can force-refresh your web browser from the client machine.
+1. To see the load balancer distribute traffic across both VMs, navigate to the VM shown on the custom page, and stop the VM.
+1. Refresh the browser window. The customized page should still display. The load balancer is now sending traffic only to the remaining VM.
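Alternatively, before stopping a VM, you can poll the frontend from **myTestVM** and watch responses arrive from both backends over successive connections. A small sketch follows; the frontend IP matches this example, and the `requests` package must be installed first.

```python
# A sketch: poll the internal frontend and print which backend VM responds.
import requests

for _ in range(10):
    # Each call opens a new connection, so responses may come from either VM.
    response = requests.get("http://10.1.0.4", timeout=5)
    print(response.text.strip())
```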
## Clean up resources
When no longer needed, delete the resource group, load balancer, and all related
In this quickstart, you:
-* Created an internal Azure Load Balancer
+- Created an internal Azure Load Balancer
-* Attached 2 VMs to the load balancer
+- Attached 2 VMs to the load balancer
-* Configured the load balancer traffic rule, health probe, and then tested the load balancer
+- Configured the load balancer traffic rule, health probe, and then tested the load balancer
To learn more about Azure Load Balancer, continue to: > [!div class="nextstepaction"]
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
Follow these steps to [update the subnet settings](/azure/virtual-network/virtua
### Starting the load test fails with `Management Lock is enabled on Resource Group of VNET (ALTVNET015)` If there is a lock on the resource group that contains the virtual network, the service can't inject the test engine virtual machines in your virtual network. Remove the management lock before running the load test. Learn how to [configure locks in the Azure portal](/azure/azure-resource-manager/management/lock-resources?tabs=json#configure-locks).
-
+
+### Starting the load test fails with `Insufficient public IP address quota in VNET subscription (ALTVNET016)`
+
+When you start the load test, Azure Load Testing injects the following Azure resources into the virtual network that contains the application endpoint:
+
+- The test engine virtual machines. These VMs invoke your application endpoint during the load test.
+- A public IP address.
+- A network security group (NSG).
+- An Azure Load Balancer.
+
+Ensure that you have quota for at least one public IP address available in your subscription to use in the load test.
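You can check your remaining public IP address quota programmatically before starting a test run. The following is a minimal sketch with the `azure-mgmt-network` Python package; the subscription ID and region are placeholders.

```python
# A sketch: compare public IP address usage against the subscription quota.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

for usage in client.usages.list("eastus"):  # the region where the test runs
    if usage.name.value == "PublicIPAddresses":
        print(f"Public IP addresses in use: {usage.current_value} of {usage.limit}")
```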
+
+### Starting the load test fails with `Subnet with name "AzureFirewallSubnet" cannot be used for load testing (ALTVNET017)`
+
+The subnet *AzureFirewallSubnet* is reserved and you can't use it for Azure Load Testing. Select another subnet for your load test.
+ ## Next steps - Learn more about the [scenarios for deploying Azure Load Testing in a virtual network](./concept-azure-load-testing-vnet-injection.md).
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
The following settings work only for workflows that start with a recurrence-base
| Setting | Default value | Description | |||-|
-| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. <br><br>**Important**: Make sure this value is less than or equal to the value for the app setting named `Workflows.RuntimeConfiguration.RetentionInDays`. Otherwise, run histories can get deleted before the associated jobs are complete. The minimum value for this setting is 7 days. |
+| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. The minimum value for this setting is 7 days. <br><br>**Important**: Make sure this value is less than or equal to the value for the app setting named `Workflows.RuntimeConfiguration.RetentionInDays`. Otherwise, run histories can get deleted before the associated jobs are complete. |
| `Runtime.FlowMaintenanceJob.RetentionCooldownInterval` | `7.00:00:00` <br>(7 days) | Sets the amount of time in days as the interval between when to check for and delete run history that you no longer want to keep. | <a name="run-actions"></a>
logic-apps Logic Apps Control Flow Conditional Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-conditional-statement.md
Title: Add conditions to workflows
-description: Create conditions that control actions in workflows in Azure Logic Apps.
+description: Create conditions that control workflow actions in Azure Logic Apps.
ms.suite: integration
Last updated 08/08/2023
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-To specify a condition that returns either true or false and have your workflow run either one action path or another based on the result, add the **Control** action named **Condition** to your workflow. You can nest conditions inside each other.
+When you want to set up a condition that returns true or false and have the result determine whether your workflow runs one path of actions or another, add the **Control** action named **Condition** to your workflow. You can also nest conditions inside each other.
For example, suppose you have a workflow that sends too many emails when new items appear on a website's RSS feed. You can add the **Condition** action to send email only when the new item includes a specific word. > [!NOTE] >
-> To specify more than two paths from which your workflow can choose or if the condition criteria isn't restricted
-> to only true or false, use a [*switch statement* instead](logic-apps-control-flow-switch-statement.md).
+> If you want to specify more than two paths from which your workflow can choose
+> or condition criteria that's not restricted to only true or false, use a
+> [*switch action* instead](logic-apps-control-flow-switch-statement.md).
-This how-to guide shows how to add a condition to your workflow and use the result to help your workflow choose from two action paths.
+This guide shows how to add a condition to your workflow and use the result to help your workflow choose between two action paths.
## Prerequisites
This how-to guide shows how to add a condition to your workflow and use the resu
### [Consumption](#tab/consumption)
-1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
1. [Follow these general steps to add the **Condition** action to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. In the **Condition** action, follow these steps build your condition:
+1. In the **Condition** action, follow these steps to build your condition:
- 1. In the left **Choose a value** box, specify the first value or field that you want to compare.
+ 1. In the left-side box named **Choose a value**, enter the first value or field that you want to compare.
- When you select inside the left box, the dynamic content list opens so that you can select outputs from previous steps in your workflow.
+ When you select inside the **Choose a value** box, the dynamic content list opens automatically. From this list, you can select outputs from previous steps in your workflow.
This example selects the RSS trigger output named **Feed summary**. ![Screenshot shows Azure portal, Consumption workflow designer. RSS trigger, and Condition action with criteria construction.](./media/logic-apps-control-flow-conditional-statement/edit-condition-consumption.png)
- 1. From the middle list, select the operation to perform.
+ 1. Open the middle list, and then select the operation to perform.
This example selects **contains**.
- 1. In the right **Choose a value** box, specify the value or field that you want to compare with the first.
+ 1. In the right-side box named **Choose a value**, enter the value or field that you want to compare with the first.
This example specifies the following string: **Microsoft**
- The following example shows the complete condition:
+ The complete condition now looks like the following example:
![Screenshot shows the Consumption workflow and the complete condition criteria.](./media/logic-apps-control-flow-conditional-statement/complete-condition-consumption.png)
- - To add another row to your condition, open the **Add** menu, and select **Add row**.
+ - To add another row to your condition, from the **Add** menu, select **Add row**.
- - To add a group with subconditions, open the **Add** menu, and select **Add group**.
+ - To add a group with subconditions, from the **Add** menu, select **Add group**.
- To group existing rows, select the checkboxes for those rows, select the ellipses (...) button for any row, and then select **Make group**.
-1. In the **True** and **False** action paths, add the actions to run based on whether the condition is true or false, respectively, for example:
+1. In the **True** and **False** action paths, add the actions that you want to run, based on whether the condition is true or false respectively, for example:
![Screenshot shows the Consumption workflow and the condition with true and false paths.](./media/logic-apps-control-flow-conditional-statement/condition-true-false-path-consumption.png)
This how-to guide shows how to add a condition to your workflow and use the resu
### [Standard](#tab/standard)
-1. In the [Azure portal](https://portal.azure.com), open your Standard logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
1. [Follow these general steps to add the **Condition** action to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. On the designer, select the **Condition** action to open the information pane and follow these steps build your condition:
+1. On the designer, select the **Condition** action to open the information pane. Follow these steps to build your condition:
+
+ 1. In the left-side box named **Choose a value**, enter the first value or field that you want to compare.
+
+ After you select inside the **Choose a value** box, the options to open the dynamic content list (lightning icon) or expression editor (formula icon) appear.
+
+ 1. Select the lightning icon to open the dynamic content list.
- 1. In the left **Choose a value** box, specify the first value or field that you want to compare. When you select inside the left box, select the lightning button that appears to open the dynamic content list so that you can select outputs from previous steps in your workflow.
+ From this list, you can select outputs from previous steps in your workflow.
![Screenshot shows Azure portal, Standard workflow designer, RSS trigger, and Condition action with information pane open, and dynamic content button selected.](./media/logic-apps-control-flow-conditional-statement/open-dynamic-content-standard.png)
This how-to guide shows how to add a condition to your workflow and use the resu
This example selects **contains**.
- 1. In the right **Choose a value** box, specify the value or field that you want to compare with the first.
+ 1. In the right-side box named **Choose a value**, enter the value or field that you want to compare with the first.
This example specifies the following string: **Microsoft**
This how-to guide shows how to add a condition to your workflow and use the resu
![Screenshot shows the Standard workflow and the complete condition criteria.](./media/logic-apps-control-flow-conditional-statement/complete-condition-standard.png)
- - To add another row to your condition, open the **New item** menu, and select **Add Row**.
+ - To add another row to your condition, from the **New item** menu, select **Add Row**.
- - To add a group with subconditions, open the **New item** menu, and select **Add Group**.
+ - To add a group with subconditions, from the **New item** menu, select **Add Group**.
- To group existing rows, select the checkboxes for those rows, select the ellipses (...) button for any row, and then select **Make Group**.
-1. In the **True** and **False** action paths, add the actions to run based on whether the condition is true or false, respectively, for example:
+1. In the **True** and **False** action paths, add the actions to run, based on whether the condition is true or false respectively, for example:
![Screenshot shows the Standard workflow and the condition with true and false paths.](./media/logic-apps-control-flow-conditional-statement/condition-true-false-path-standard.png)
This workflow now sends mail only when the new items in the RSS feed meet your c
## JSON definition
-The following shows the high-level code definition behind the **Condition** action, but for the full definition, see [If action - Schema reference guide for trigger and action types in Azure Logic Apps](logic-apps-workflow-actions-triggers.md#if-action).
+The following code shows the high-level JSON definition for the **Condition** action. For the full definition, see [If action - Schema reference guide for trigger and action types in Azure Logic Apps](logic-apps-workflow-actions-triggers.md#if-action).
``` json "actions": {
The following shows the high-level code definition behind the **Condition** acti
* [Run steps based on different values (switch actions)](logic-apps-control-flow-switch-statement.md) * [Run and repeat steps (loops)](logic-apps-control-flow-loops.md) * [Run or merge parallel steps (branches)](logic-apps-control-flow-branches.md)
-* [Run steps based on grouped action status (scopes)](logic-apps-control-flow-run-steps-group-scopes.md)
+* [Run steps based on grouped action status (scopes)](logic-apps-control-flow-run-steps-group-scopes.md)
logic-apps Logic Apps Enterprise Integration As2 Mdn Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-mdn-acknowledgment.md
Title: AS2 MDN acknowledgments
description: Learn about Message Disposition Notification (MDN) acknowledgments for AS2 messages in Azure Logic Apps. ms.suite: integration-- Previously updated : 08/23/2022 Last updated : 08/15/2023 # MDN acknowledgments for AS2 messages in Azure Logic Apps
logic-apps Logic Apps Enterprise Integration As2 Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-message-settings.md
Previously updated : 08/23/2022 Last updated : 08/15/2023 # Reference for AS2 message settings in agreements for Azure Logic Apps
Last updated 08/23/2022
This reference describes the properties that you can set in an AS2 agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you.
-<a name="AS2-incoming-messages"></a>
+<a name="as2-inbound-messages"></a>
## AS2 Receive settings
-![Select "Receive Settings"](./media/logic-apps-enterprise-integration-as2-message-settings/receive-settings.png)
+![Screenshot shows Azure portal and AS2 agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-as2-message-settings/receive-settings.png)
| Property | Required | Description | |-|-|-| | **Override message properties** | No | Overrides the properties on incoming messages with your property settings. |
-| **Message should be signed** | No | Specifies whether all incoming messages must be digitally signed. If you require signing, from the **Certificate** list, select an existing guest partner public certificate for validating the signature on the messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
-| **Message should be encrypted** | No | Specifies whether all incoming messages must be encrypted. Non-encrypted messages are rejected. If you require encryption, from the **Certificate** list, select an existing host partner private certificate for decrypting incoming messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
+| **Message should be signed** | No | Specifies whether all incoming messages must be digitally signed. If you require signing, from the **Certificate** list, select an existing guest partner public certificate for validating the signature on the messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
+| **Message should be encrypted** | No | Specifies whether all incoming messages must be encrypted. Non-encrypted messages are rejected. If you require encryption, from the **Certificate** list, select an existing host partner private certificate for decrypting incoming messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
| **Message should be compressed** | No | Specifies whether all incoming messages must be compressed. Non-compressed messages are rejected. | | **Disallow Message ID duplicates** | No | Specifies whether to allow messages with duplicate IDs. If you disallow duplicate IDs, select the number of days between checks. You can also choose whether to suspend duplicates. | | **MDN Text** | No | Specifies the default message disposition notification (MDN) that you want sent to the message sender. |
-| **Send MDN** | No | Specifies whether to send synchronous MDNs for received messages. |
+| **Send MDN** | No | Specifies whether to send synchronous MDNs for received messages. |
| **Send signed MDN** | No | Specifies whether to send signed MDNs for received messages. If you require signing, from the **MIC Algorithm** list, select the algorithm to use for signing messages. | | **Send asynchronous MDN** | No | Specifies whether to send MDNs asynchronously. If you select asynchronous MDNs, in the **URL** box, specify the URL for where to send the MDNs. |
-||||
-<a name="AS2-outgoing-messages"></a>
+<a name="as2-outbound-messages"></a>
## AS2 Send settings
-![Select "Send Settings"](./media/logic-apps-enterprise-integration-as2-message-settings/send-settings.png)
+![Screenshot shows Azure portal and AS2 agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-as2-message-settings/send-settings.png)
| Property | Required | Description | |-|-|-|
-| **Enable message signing** | No | Specifies whether all outgoing messages must be digitally signed. If you require signing, select these values: <p>- From the **Signing Algorithm** list, select the algorithm to use for signing messages. <br>- From the **Certificate** list, select an existing host partner private certificate for signing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
-| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <p>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner public certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
+| **Enable message signing** | No | Specifies whether all outgoing messages must be digitally signed. If you require signing, select these values: <br><br>- From the **Signing Algorithm** list, select the algorithm to use for signing messages. <br>- From the **Certificate** list, select an existing host partner private certificate for signing messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
+| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <br><br>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner public certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
| **Enable message compression** | No | Specifies whether all outgoing messages must be compressed. | | **Unfold HTTP headers** | No | Puts the HTTP `content-type` header onto a single line. | | **Transmit file name in MIME header** | No | Specifies whether to include the file name in the MIME header. |
This reference describes the properties that you can set in an AS2 agreement for
| **Request asynchronous MDN** | No | Specifies whether to receive MDNs asynchronously. If you select asynchronous MDNs, in the **URL** box, specify the URL for where to send the MDNs. | | **Enable NRR** | No | Specifies whether to require non-repudiation receipt (NRR). This communication attribute provides evidence that the data was received as addressed. | | **SHA2 Algorithm format** | No | Specifies the MIC algorithm format to use for signing in the headers for the outgoing AS2 messages or MDN |
-||||
## Next steps
-[Exchange AS2 messages](../logic-apps/logic-apps-enterprise-integration-as2.md)
+[Exchange AS2 messages](logic-apps-enterprise-integration-as2.md)
logic-apps Logic Apps Enterprise Integration As2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2.md
Previously updated : 10/20/2022 Last updated : 08/15/2023 # Exchange AS2 messages using workflows in Azure Logic Apps
To send and receive AS2 messages in workflows that you create using Azure Logic
Except for tracking capabilities, the **AS2 (v2)** connector provides the same capabilities as the original **AS2** connector, runs natively with the Azure Logic Apps runtime, and offers significant performance improvements in message size, throughput, and latency. Unlike the original **AS2** connector, the **AS2 (v2)** connector doesn't require that you create a connection to your integration account. Instead, as described in the prerequisites, make sure that you link your integration account to the logic app resource where you plan to use the connector.
-This article shows how to add the AS2 encoding and decoding actions to an existing logic app workflow. The **AS2 (v2)** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this article use the [Request](../connectors/connectors-native-reqres.md) trigger.
+This how-to guide shows how to add the AS2 encoding and decoding actions to an existing logic app workflow. The **AS2 (v2)** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
## Connector technical reference
The **AS2 (v2)** connector has no triggers. The following table describes the ac
* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) to define and store artifacts for use in enterprise integration and B2B workflows.
- > [!IMPORTANT]
- >
- > To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
-* At least two [trading partners](logic-apps-enterprise-integration-partners.md) in your integration account. The definitions for both partners must use the same *business identity* qualifier, which is **AS2Identity** for this scenario.
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the AS2 operation used in your workflow. The definitions for both partners must use the same *business identity* qualifier, which is **AS2Identity** for this scenario.
-* An [AS2 agreement](logic-apps-enterprise-integration-agreements.md) in your integration account between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type.
+ * Defines an [AS2 agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [AS2 message settings](logic-apps-enterprise-integration-as2-message-settings.md).
* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **New step**.
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Encode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2**.
-
-1. From the actions list, select the action named **AS2 Encode**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "AS2 Encode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-encode-consumption.png)
-
-1. In the action information box, provide the following information.
+1. In the action information box, provide the following information:
| Property | Required | Description | |-|-|-|
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2** action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2 encode**.
-
-1. From the actions list, select the action named **Encode to AS2 message**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "Encode to AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-encode-as2-consumption.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Encode to AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **as2 encode**.
-
-1. From the actions list, select the action named **AS2 Encode**.
-
- ![Screenshot showing the Azure portal, designer for Standard workflow, and "AS2 Encode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-encode-built-in-standard.png)
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Encode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. In the action information pane, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2** action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **as2 encode**.
-
-1. From the actions list, select the action named **Encode to AS2 message**.
-
- ![Screenshot showing the Azure portal, workflow designer for Standard, and "Encode to AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-encode-as2-message-managed-standard.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Encode to AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2**.
-
-1. From the actions list, select the action named **AS2 Decode**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "AS2 Decode" action selected.](media/logic-apps-enterprise-integration-as2/select-as2-v2-decode-consumption.png)
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Decode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. In the action information box, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2** action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2 decode**.
-
-1. From the actions list, select the action named **Decode AS2 message**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "Decode AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-decode-as2-consumption.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Decode AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the AS2 action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **as2 decode**.
-
-1. From the actions list, select the action named **AS2 Decode**.
-
- ![Screenshot showing the Azure portal, designer for Standard workflow, and "AS2 Decode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-decode-built-in-standard.png)
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Decode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. In the action information pane, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the AS2 action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **as2 decode**.
-
-1. From the actions list, select the action named **Decode AS2 message**.
-
- ![Screenshot showing the Azure portal, designer for Standard workflow, and "Decode AS2 message" operation selected.](./media/logic-apps-enterprise-integration-as2/select-decode-as2-message-managed-standard.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Decode AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
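After you create the connection, the designer saves the action into your underlying workflow definition. The following minimal sketch shows roughly how an AS2 encode action can appear in a Consumption workflow's code view; the action name, `path` value, and body parameter names are illustrative assumptions, not the connector's documented contract:

```json
"Encode_to_AS2_message": {
  "type": "ApiConnection",
  "inputs": {
    "host": {
      "connection": {
        "name": "@parameters('$connections')['as2']['connectionId']"
      }
    },
    "method": "post",
    "path": "/encode",
    "body": {
      "as2From": "Partner1",
      "as2To": "Partner2",
      "content": "@triggerBody()"
    }
  },
  "runAfter": {}
}
```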
logic-apps Logic Apps Enterprise Integration Edifact Contrl Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-contrl-acknowledgment.md
Previously updated : 08/20/2022 Last updated : 08/15/2023 # CONTRL acknowledgments and error codes for EDIFACT messages in Azure Logic Apps
logic-apps Logic Apps Enterprise Integration Edifact Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-message-settings.md
Previously updated : 08/20/2022 Last updated : 08/15/2023 # Reference for EDIFACT message settings in agreements for Azure Logic Apps
Last updated 08/20/2022
This reference describes the properties that you can set in an EDIFACT agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you.
-<a name="EDIFACT-inbound-messages"></a>
+<a name="edifact-inbound-messages"></a>
-## EDIFACT Receive Settings
+## EDIFACT Receive settings
-![Screenshot showing Azure portal, EDIFACT agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-receive-settings.png)
+![Screenshot showing Azure portal and EDIFACT agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-receive-settings.png)
### Identifiers
This reference describes the properties that you can set in an EDIFACT agreement
|-|-| | **UNB6.1 (Recipient Reference Password)** | An alphanumeric value that is 1-14 characters. | | **UNB6.2 (Recipient Reference Qualifier)** | An alphanumeric value that is 1-2 characters. |
-|||
### Acknowledgments | Property | Description | |-|-| | **Receipt of Message (CONTRL)** | Return a technical (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send Settings. |
-| **Acknowledgement (CONTRL)** | Return a functional (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send settings. |
-|||
+| **Acknowledgment (CONTRL)** | Return a functional (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send settings. |
<a name="receive-settings-schemas"></a>
This reference describes the properties that you can set in an EDIFACT agreement
| **UNH2.5 (Associated Assigned Code)** | The assigned code that is alphanumeric and is 1-6 characters. | | **UNG2.1 (App Sender ID)** |Enter an alphanumeric value with a minimum of one character and a maximum of 35 characters. | | **UNG2.2 (App Sender Code Qualifier)** |Enter an alphanumeric value, with a maximum of four characters. |
-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
-|||
+| **Schema** | The previously uploaded schema that you want to use, from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
### Control Numbers
This reference describes the properties that you can set in an EDIFACT agreement
| **Check for duplicate UNB5 every (days)** | If you chose to disallow duplicate interchange control numbers, you can specify the number of days between running the check. | | **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers (UNG5). | | **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers (UNH1). |
-| **EDIFACT Acknowledgement Control Number** | Assign the transaction set reference numbers to use in an acknowledgment by entering a value for the prefix, a range of reference numbers, and a suffix. |
-|||
+| **EDIFACT Acknowledgment Control Number** | Assign the transaction set reference numbers to use in an acknowledgment by entering a value for the prefix, a range of reference numbers, and a suffix. |
### Validation
After you finish setting up a validation row, the next row automatically appears
| **Extended Validation** | If the data type isn't EDI, validation runs on the data element requirement and allowed repetition, enumerations, and data element length validation (min and max). | | **Allow Leading/Trailing Zeroes** | Keep any extra leading or trailing zero and space characters. Don't remove these characters. | | **Trim Leading/Trailing Zeroes** | Remove the leading or trailing zero and space characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p> - **Not Allowed**: Prohibit trailing delimiters and separators in the received interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The received interchange must have trailing delimiters and separators. |
-|||
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the received interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The received interchange must have trailing delimiters and separators. |
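To make the policy concrete, the following hypothetical segment omits its last optional data elements, which leaves trailing data element separators before the segment terminator. **Not Allowed** rejects an interchange that contains such a segment, while **Optional** and **Mandatory** accept or require it, respectively:

```text
NAD+BY+5412345000176::9++++++'
```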
### Internal Settings
After you finish setting up a validation row, the next row automatically appears
| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. | | **Preserve Interchange - suspend transaction sets on error** | Keep the interchange intact, create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, while continuing to process all other transaction sets. | | **Preserve Interchange - suspend interchange on error** | Keep the interchange intact, create an XML document for the entire batched interchange. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
-|||
-<a name="EDIFACT-outbound-messages"></a>
+<a name="edifact-outbound-messages"></a>
-## EDIFACT Send Settings
+## EDIFACT Send settings
-![Screenshot showing Azure portal, EDIFACT agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-send-settings.png)
+![Screenshot showing Azure portal and EDIFACT agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-send-settings.png)
### Identifiers
After you finish setting up a validation row, the next row automatically appears
| **UNB6.1 (Recipient Reference Password)** | An alphanumeric value that is 1-14 characters. | | **UNB6.2 (Recipient Reference Qualifier)** | An alphanumeric value that is 1-2 characters. | | **UNB7 (Application Reference ID)** | An alphanumeric value that is 1-14 characters. |
-|||
### Acknowledgment | Property | Description | |-|-| | **Receipt of Message (CONTRL)** | The host partner that sends the message requests a technical (CONTRL) acknowledgment from the guest partner. |
-| **Acknowledgement (CONTRL)** | The host partner that sends the message expects requests a functional (CONTRL) acknowledgment from the guest partner. |
+| **Acknowledgment (CONTRL)** | The host partner that sends the message requests a functional (CONTRL) acknowledgment from the guest partner. |
| **Generate SG1/SG4 loop for accepted transaction sets** | If you chose to request a functional acknowledgment, this setting forces the generation of SG1/SG4 loops in the functional acknowledgments for accepted transaction sets. |
-|||
### Schemas
After you finish setting up a validation row, the next row automatically appears
| **UNH2.1 (Type)** | The transaction set type. | | **UNH2.2 (Version)** | The message version number. | | **UNH2.3 (Release)** | The message release number. |
-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
-|||
+| **Schema** | The previously uploaded schema that you want to use, from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
### Envelopes
After you finish setting up an envelope row, the next row automatically appears.
| **UNB10 (Communication Agreement)** | An alphanumeric value that is 1-40 characters. | | **UNB11 (Test Indicator)** | Indicate that the generated interchange is test data. | | **Apply UNA Segment (Service String Advice)** | Generate a UNA segment for the interchange to send. |
-| **Apply UNG Segments (Function Group Header)** | Create grouping segments in the functional group header for messages sent to the guest partner. The following values are used to create the UNG segments: <p>- **Schema**: The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <p>- Standard: Your logic app resource <p>- **UNG1**: An alphanumeric value that is 1-6 characters. <p>- **UNG2.1**: An alphanumeric value that is 1-35 characters. <p>- **UNG2.2**: An alphanumeric value that is 1-4 characters. <p>- **UNG3.1**: An alphanumeric value that is 1-35 characters. <p>- **UNG3.2**: An alphanumeric value that is 1-4 characters. <p>- **UNG6**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.1**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.2**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.3**: An alphanumeric value that is 1-6 characters. <p>- **UNG8**: An alphanumeric value that is 1-14 characters. |
-|||
+| **Apply UNG Segments (Function Group Header)** | Create grouping segments in the functional group header for messages sent to the guest partner. The following values are used to create the UNG segments: <br><br>- **Schema**: The previously uploaded schema that you want to use, from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br><br>- Standard: Your logic app resource <br><br>- **UNG1**: An alphanumeric value that is 1-6 characters. <br><br>- **UNG2.1**: An alphanumeric value that is 1-35 characters. <br><br>- **UNG2.2**: An alphanumeric value that is 1-4 characters. <br><br>- **UNG3.1**: An alphanumeric value that is 1-35 characters. <br><br>- **UNG3.2**: An alphanumeric value that is 1-4 characters. <br><br>- **UNG6**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.1**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.2**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.3**: An alphanumeric value that is 1-6 characters. <br><br>- **UNG8**: An alphanumeric value that is 1-14 characters. |
### Character Sets and Separators
Other than the character set, you can specify a different set of delimiters to u
| Property | Description | |-|-| | **UNB1.1 (System Identifier)** | The EDIFACT character set to apply to the outbound interchange. |
-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <p>- Standard: Your logic app resource <p>For the selected schema, select the separators set that you want to use, based on the following separator descriptions. After you finish setting up a schema row, the next row automatically appears. |
+| **Schema** | The previously uploaded schema that you want to use, from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br><br>- Standard: Your logic app resource <br><br>For the selected schema, select the separators set that you want to use, based on the following separator descriptions. After you finish setting up a schema row, the next row automatically appears. |
| **Input Type** | The input type for the message. | | **Component Separator** | A single character to use for separating composite data elements. | | **Data Element Separator** | A single character to use for separating simple data elements within composite data elements. |
Other than the character set, you can specify a different set of delimiters to u
| **UNA5 (Repetition Separator)** | A value to use for the repetition separator that separates segments that repeat within a transaction set. | | **Segment Terminator** | A single character that indicates the end in an EDI segment. | | **Suffix** | The character to use with the segment identifier. If you designate a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you have to designate a suffix. |
-|||
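For reference, the conventional default UNA service string advice declares six service characters in fixed positions: component separator, data element separator, decimal mark, release indicator, a reserved position (used as the repetition separator in later syntax versions, shown here as a space), and segment terminator:

```text
UNA:+.? '
```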
### Control Numbers
Other than the character set, you can specify a different set of delimiters to u
| **UNB5 (Interchange Control Number)** | A prefix, a range of values to use as the interchange control number, and a suffix. These values are used to generate an outbound interchange. The control number is required, but the prefix and suffix are optional. The control number is incremented for each new message, while the prefix and suffix stay the same. | | **UNG5 (Group Control Number)** | A prefix, a range of values to use as the interchange control number, and a suffix. These values are used to generate the group control number. The control number is required, but the prefix and suffix are optional. The control number is incremented for each new message until the maximum value is reached, while the prefix and suffix stay the same. | | **UNH1 (Message Header Reference Number)** | A prefix, a range of values for the interchange control number, and a suffix. These values are used to generate the message header reference number. The reference number is required, but the prefix and suffix are optional. The reference number is incremented for each new message, while the prefix and suffix stay the same. |
-|||
### Validation
After you finish setting up a validation row, the next row automatically appears
| **Extended Validation** | If the data type isn't EDI, run validation on the data element requirement and allowed repetition, enumerations, and data element length validation (min/max). | | **Allow Leading/Trailing Zeroes** | Keep any extra leading or trailing zero and space characters. Don't remove these characters. | | **Trim Leading/Trailing Zeroes** | Remove leading or trailing zero characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the sent interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Send interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The sent interchange must have trailing delimiters and separators. |
-|||
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the sent interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Send interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The sent interchange must have trailing delimiters and separators. |
## Next steps
-[Exchange EDIFACT messages](../logic-apps/logic-apps-enterprise-integration-edifact.md)
+[Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md)
logic-apps Logic Apps Enterprise Integration Edifact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact.md
Previously updated : 09/29/2021 Last updated : 08/15/2023 # Exchange EDIFACT messages using workflows in Azure Logic Apps
-To send and receive EDIFACT messages in workflows that you create using Azure Logic Apps, use the **EDIFACT** connector, which provides triggers and actions that support and manage EDIFACT communication.
+To send and receive EDIFACT messages in workflows that you create using Azure Logic Apps, use the **EDIFACT** connector, which provides operations that support and manage EDIFACT communication.
-This article shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. Although you can use any trigger to start your workflow, the examples use the [Request](../connectors/connectors-native-reqres.md) trigger. For more information about the **EDIFACT** connector's triggers, actions, and limits version, review the [connector's reference page](/connectors/edifact/) as documented by the connector's Swagger file.
+This how-to guide shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. The **EDIFACT** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
-![Overview screenshot showing the "Decode EDIFACT message" operation with the message decoding properties.](./media/logic-apps-enterprise-integration-edifact/overview-edifact-message-consumption.png)
+## Connector technical reference
-## EDIFACT encoding and decoding
+The **EDIFACT** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For technical information about the **EDIFACT** connector, see the following documentation:
-The following sections describe the tasks that you can complete using the EDIFACT encoding and decoding actions.
+* [Connector reference page](/connectors/edifact/), which describes the triggers, actions, and limits as documented by the connector's Swagger file
+
+* [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits)
+
+ For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
+
+The following sections provide more information about the tasks that you can complete using the EDIFACT encoding and decoding actions.
### Encode to EDIFACT message action
+This action performs the following tasks:
+ * Resolve the agreement by matching the sender qualifier & identifier and receiver qualifier and identifier. * Serialize the Electronic Data Interchange (EDI), which converts XML-encoded messages into EDI transaction sets in the interchange.
The following sections describe the tasks that you can complete using the EDIFAC
### Decode EDIFACT message action
+This action performs the following tasks:
+ * Validate the envelope against the trading partner agreement. * Resolve the agreement by matching the sender qualifier and identifier along with the receiver qualifier and identifier.
The following sections describe the tasks that you can complete using the EDIFAC
* A functional acknowledgment that acknowledges the acceptance or rejection for the received interchange or group.
-## Connector reference
-
-For technical information about the **EDIFACT** connector, review the [connector's reference page](/connectors/edifact/), which describes the triggers, actions, and limits as documented by the connector's Swagger file. Also, review the [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) for workflows running in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, or the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
- ## Prerequisites * An Azure account and subscription. If you don't have a subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
- * Is associated with the same Azure subscription as your logic app resource.
-
- * Exists in the same location or Azure region as your logic app resource.
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
- * When you use the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences) and the **EDIFACT** operations, your logic app resource doesn't need a link to your integration account. However, you still need this account to store artifacts, such as partners, agreements, and certificates, along with using the EDIFACT, [X12](logic-apps-enterprise-integration-x12.md), or [AS2](logic-apps-enterprise-integration-as2.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the **EDIFACT** operation used in your workflow. The definitions for both partners must use the same *business identity* qualifier, which is **ZZZ - Mutually Defined** for this scenario.
- * When you use the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences) and the **EDIFACT** operations, your workflow requires a connection to your integration account that you create directly from your workflow when you add the AS2 operation.
+ * Defines an [EDIFACT agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md).
-* At least two [trading partners](logic-apps-enterprise-integration-partners.md) in your integration account. The definitions for both partners must use the same *business identity* qualifier, which is **ZZZ - Mutually Defined** for this scenario.
+ > [!IMPORTANT]
+ >
+ > The EDIFACT connector supports only UTF-8 characters. If your output contains
+ > unexpected characters, check that your EDIFACT messages use the UTF-8 character set.
-* An [EDIFACT agreement](logic-apps-enterprise-integration-agreements.md) in your integration account between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type.
+* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account:
- > [!IMPORTANT]
- > The EDIFACT connector supports only UTF-8 characters. If your output contains
- > unexpected characters, check that your EDIFACT messages use the UTF-8 character set.
+ | Logic app workflow | Link required? |
+ |--|-|
+ | Consumption | Connection to integration account required, but no link required. You can create the connection when you add the **EDIFACT** operation to your workflow. |
+ | Standard | Connection to integration account required, but no link required. You can create the connection when you add the **EDIFACT** operation to your workflow. |
* The logic app resource and workflow where you want to use the EDIFACT operations.
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter `edifact encode`. For this example, select the action named **Encode to EDIFACT message by agreement name**.
-
- ![Screenshot showing the Azure portal, workflow designer, and "Encode to EDIFACT message by agreement name" action selected.](./media/logic-apps-enterprise-integration-edifact/select-encode-edifact-message-consumption.png)
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Encode to EDIFACT message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
> [!NOTE]
- > You can choose to select the **Encode to EDIFACT message by identities** action instead, but you later have to
- > provide different values, such as the **Sender identifier** and **Receiver identifier** that's specified by
- > your EDIFACT agreement. You also have to specify the **XML message to encode**, which can be the output from
- > the trigger or a preceding action.
+ >
+ > If you want to use the **Encode to EDIFACT message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier**, that are specified by your EDIFACT agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description | |-|-|-| | **Connection name** | Yes | A name for the connection | | **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT operation appears on the designer, provide information for the following properties specific to this operation:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description | |-|-|-| | **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. | | **XML message to encode** | Yes | The XML message to encode, which can be the output from the trigger or a preceding action | | Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the XML message payload can be the **Body** content output from the Request trigger:
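In code view, the resulting action can look roughly like the following sketch, where the agreement name and `path` value are illustrative assumptions and `@triggerBody()` supplies the XML payload from the Request trigger:

```json
"Encode_to_EDIFACT_message_by_agreement_name": {
  "type": "ApiConnection",
  "inputs": {
    "host": {
      "connection": {
        "name": "@parameters('$connections')['edifact']['connectionId']"
      }
    },
    "method": "post",
    "path": "/encode",
    "body": {
      "name": "My-EDIFACT-Agreement",
      "content": "@triggerBody()"
    }
  },
  "runAfter": {}
}
```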
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter `edifact encode`. Select the action named **Encode to EDIFACT message by agreement name**.
-
- ![Screenshot showing the Azure portal, workflow designer, and "Encode to EDIFACT message by agreement name" operation selected.](./media/logic-apps-enterprise-integration-edifact/select-encode-edifact-message-standard.png)
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Encode to EDIFACT message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
> [!NOTE]
- > You can choose to select the **Encode to EDIFACT message by identities** action instead, but you later have to
- > provide different values, such as the **Sender identifier** and **Receiver identifier** that's specified by
- > your EDIFACT agreement. You also have to specify the **XML message to encode**, which can be the output from
- > the trigger or a preceding action.
+ >
+ > If you want to use the **Encode to EDIFACT message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier**, that are specified by your EDIFACT agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description | |-|-|-| | **Connection name** | Yes | A name for the connection | | **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT details pane appears on the designer, provide information for the following properties:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description | |-|-|-| | **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. | | **XML message to encode** | Yes | The XML message to encode, which can be the output from the trigger or a preceding action | | Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the message payload is the **Body** content output from the Request trigger:
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **New step**.
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter `edifact encode`. Select the action named **Decode EDIFACT message**.
-
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description | |-|-|-| | **Connection name** | Yes | A name for the connection | | **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT operation appears on the designer, provide information for the following properties specific to this operation:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description | |-|-|-| | **EDIFACT flat file message to decode** | Yes | The XML flat file message to decode. | | Other parameters | No | This operation includes the following other parameters: <p>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br>- **Payload character set** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the XML message payload to decode can be the **Body** content output from the Request trigger:
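As with encoding, you can supply `@triggerBody()` as the message to decode. One caution, assuming the EDIFACT output follows the same pattern as the similar X12 decode action: the payloads in the decoded message arrays are base64-encoded, so a downstream action needs an expression such as the following sketch to read a payload as XML:

```json
"content": "@xml(base64ToBinary(item()?['Payload']))"
```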
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter `edifact encode`. Select the action named **Decode EDIFACT message**.
-
- ![Screenshot showing the Azure portal, workflow designer, and "Decode EDIFACT message" operation selected.](./media/logic-apps-enterprise-integration-edifact/select-decode-edifact-message-standard.png)
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description | |-|-|-| | **Connection name** | Yes | A name for the connection | | **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT details pane appears on the designer, provide information for the following properties:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description | |-|-|-| | **EDIFACT flat file message to decode** | Yes | The XML flat file message to decode, which can be the output from the trigger or a preceding action | | Other parameters | No | This operation includes the following other parameters: <p>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the message payload is the **Body** content output from the Request trigger:
For technical information about the **EDIFACT** connector, review the [connector
## Handle UNH2.5 segments in EDIFACT documents
-In an EDIFACT document, the [UNH2.5 segment](logic-apps-enterprise-integration-edifact-message-settings.md#receive-settings-schemas) is used for used for schema lookup. For example, in this sample EDIFACT message, the UNH field is `EAN008`:
+In an EDIFACT document, the [UNH2.5 segment](logic-apps-enterprise-integration-edifact-message-settings.md#receive-settings-schemas) is used for schema lookup. For example, in this sample EDIFACT message, the UNH field is `EAN008`:
`UNH+SSDD1+ORDERS:D:03B:UN:EAN008`
To handle an EDIFACT document or process an EDIFACT message that has a UN2.5 seg
For example, suppose the schema root name for the sample UNH field is `EFACT_D03B_ORDERS_EAN008`. For each `D03B_ORDERS` that has a different UNH2.5 segment, you have to deploy an individual schema.
-1. In the [Azure portal](https://portal.azure.com), add the schema to your integration account resource or logic app resource, which is based on whether you're working with the **Logic App (Consumption)** or **Logic App (Standard)** resource type respectively.
+1. In the [Azure portal](https://portal.azure.com), add the schema to your integration account resource or logic app resource, based on whether you have a Consumption or Standard logic app workflow, respectively.
1. Whether you're using the EDIFACT decoding or encoding action, upload your schema and set up the schema settings in your EDIFACT agreement's **Receive Settings** or **Send Settings** sections respectively.
logic-apps Logic Apps Enterprise Integration X12 997 Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-997-acknowledgment.md
Previously updated : 08/20/2022 Last updated : 08/15/2023 # 997 functional acknowledgments and error codes for X12 messages in Azure Logic Apps
The optional AK3 segment reports errors in a data segment and identifies the loc
||-| | AK301 | Mandatory, identifies the segment in error with the X12 segment ID, for example, NM1. | | AK302 | Mandatory, identifies the segment count of the segment in error. The ST segment is `1`, and each segment increments the segment count by one. |
-| AK303 | Mandatory, identifies a bounded loop, which is a loop surrounded by an Loop Start (LS) segment and a Loop End (LE) segment. AK303 contains the values of the LS and LE segments that bound the segment in error. |
+| AK303 | Mandatory, identifies a bounded loop, which is a loop surrounded by a Loop Start (LS) segment and a Loop End (LE) segment. AK303 contains the values of the LS and LE segments that bound the segment in error. |
| AK304 | Optional, specifies the code for the error in the data segment. Although AK304 is optional, the element is required when an error exists for the identified segment. For AK304 error codes, review [997 ACK error codes - Data Segment Note](#997-ack-error-codes). | |||
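For example, a hypothetical AK3 segment that reports an error in an NM1 segment at segment count 3, with no bounded loop (empty AK303) and an illustrative AK304 error code, might look like the following:

```text
AK3*NM1*3**8~
```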
logic-apps Logic Apps Enterprise Integration X12 Decode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-decode.md
- Title: Decode X12 messages
-description: Validate EDI and generate acknowledgements with X12 message decoder in Azure Logic Apps with Enterprise Integration Pack.
----- Previously updated : 01/27/2017--
-# Decode X12 messages in Azure Logic Apps with Enterprise Integration Pack
-
-With the Decode X12 message connector, you can validate the envelope against a trading partner agreement, validate EDI and partner-specific properties, split interchanges into transactions sets or preserve entire interchanges, and generate acknowledgments for processed transactions.
-To use this connector, you must add the connector to an existing trigger in your logic app.
-
-## Before you start
-
-Here's the items you need:
-
-* An Azure account; you can create a [free account](https://azure.microsoft.com/free)
-* An [integration account](logic-apps-enterprise-integration-create-integration-account.md)
-that's already defined and associated with your Azure subscription.
-You must have an integration account to use the Decode X12 message connector.
-* At least two [partners](logic-apps-enterprise-integration-partners.md)
-that are already defined in your integration account
-* An [X12 agreement](logic-apps-enterprise-integration-x12.md)
-that's already defined in your integration account
-
-## Decode X12 messages
-
-1. Create a logic app workflow. For more information, see the following documentation:
-
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-
- * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-
-2. The Decode X12 message connector doesn't have triggers,
-so you must add a trigger for starting your logic app, like a Request trigger.
-In the Logic App Designer, add a trigger, and then add an action to your logic app.
-
-3. In the search box, enter "x12" for your filter.
-Select **X12 - Decode X12 message**.
-
- ![Search for "x12"](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage1.png)
-
-3. If you didn't previously create any connections to your integration account,
-you're prompted to create that connection now. Name your connection,
-and select the integration account that you want to connect.
-
- ![Provide integration account connection details](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage4.png)
-
- Properties with an asterisk are required.
-
- | Property | Details |
- | | |
- | Connection Name * |Enter any name for your connection. |
- | Integration Account * |Enter a name for your integration account. Make sure that your integration account and logic app are in the same Azure location. |
-
-5. When you're done, your connection details should look similar to this example.
-To finish creating your connection, choose **Create**.
-
- ![integration account connection details](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage5.png)
-
-6. After your connection is created, as shown in this example,
-select the X12 flat file message to decode.
-
- ![integration account connection created](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage6.png)
-
- For example:
-
- ![Select X12 flat file message for decoding](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage7.png)
-
- > [!NOTE]
- > The actual message content or payload for the message array, good or bad,
- > is base64 encoded. So, you must specify an expression that processes this content.
- > Here is an example that processes the content as XML that you can
- > enter in code view
- > or by using expression builder in the designer.
- > ``` json
- > "content": "@xml(base64ToBinary(item()?['Payload']))"
- > ```
- > ![Content example](media/logic-apps-enterprise-integration-x12-decode/content-example.png)
- >
--
-## X12 Decode details
-
-The X12 Decode connector performs these tasks:
-
-* Validates the envelope against trading partner agreement
-* Validates EDI and partner-specific properties
- * EDI structural validation, and extended schema validation
- * Validation of the structure of the interchange envelope.
- * Schema validation of the envelope against the control schema.
- * Schema validation of the transaction-set data elements against the message schema.
- * EDI validation performed on transaction-set data elements
-* Verifies that the interchange, group, and transaction set control numbers are not duplicates
- * Checks the interchange control number against previously received interchanges.
- * Checks the group control number against other group control numbers in the interchange.
- * Checks the transaction set control number against other transaction set control numbers in that group.
-* Splits the interchange into transaction sets, or preserves the entire interchange:
- * Split Interchange as transaction sets - suspend transaction sets on error:
- Splits interchange into transaction sets and parses each transaction set.
- The X12 Decode action outputs only those transaction sets
- that fail validation to `badMessages`, and outputs the remaining transactions sets to `goodMessages`.
- * Split Interchange as transaction sets - suspend interchange on error:
- Splits interchange into transaction sets and parses each transaction set.
- If one or more transaction sets in the interchange fail validation,
- the X12 Decode action outputs all the transaction sets in that interchange to `badMessages`.
- * Preserve Interchange - suspend transaction sets on error:
- Preserve the interchange and process the entire batched interchange.
- The X12 Decode action outputs only those transaction sets that fail validation to `badMessages`,
- and outputs the remaining transactions sets to `goodMessages`.
- * Preserve Interchange - suspend interchange on error:
- Preserve the interchange and process the entire batched interchange.
- If one or more transaction sets in the interchange fail validation,
- the X12 Decode action outputs all the transaction sets in that interchange to `badMessages`.
-* Generates a Technical and/or Functional acknowledgment (if configured).
- * A Technical Acknowledgment generates as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver.
- * A Functional Acknowledgment generates as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document
-
-## View the swagger
-See the [swagger details](/connectors/x12/).
-
-## Next steps
-[Learn more about the Enterprise Integration Pack](../logic-apps/logic-apps-enterprise-integration-overview.md "Learn about Enterprise Integration Pack")
-
logic-apps Logic Apps Enterprise Integration X12 Encode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-encode.md
- Title: Encode X12 messages
-description: Validate EDI and convert XML-encoded messages with X12 message encoder in Azure Logic Apps with Enterprise Integration Pack.
----- Previously updated : 01/27/2017--
-# Encode X12 messages in Azure Logic Apps with Enterprise Integration Pack
-
-With the Encode X12 message connector, you can validate EDI and partner-specific properties,
-convert XML-encoded messages into EDI transaction sets in the interchange,
-and request a Technical Acknowledgement, Functional Acknowledgment, or both.
-To use this connector, you must add the connector to an existing trigger in your logic app.
-
-## Before you start
-
-Here's the items you need:
-
-* An Azure account; you can create a [free account](https://azure.microsoft.com/free)
-* An [integration account](logic-apps-enterprise-integration-create-integration-account.md)
-that's already defined and associated with your Azure subscription.
-You must have an integration account to use the Encode X12 message connector.
-* At least two [partners](logic-apps-enterprise-integration-partners.md)
-that are already defined in your integration account
-* An [X12 agreement](logic-apps-enterprise-integration-x12.md)
-that's already defined in your integration account
-
-## Encode X12 messages
-
-1. Create a logic app workflow. For more information, see the following documentation:
-
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-
- * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-
-2. The Encode X12 message connector doesn't have triggers,
-so you must add a trigger for starting your logic app, like a Request trigger.
-In the Logic App Designer, add a trigger, and then add an action to your logic app.
-
-3. In the search box, enter "x12" for your filter.
-Select either **X12 - Encode to X12 message by agreement name**
-or **X12 - Encode to X12 message by identities**.
-
- ![Search for "x12"](./media/logic-apps-enterprise-integration-x12-encode/x12decodeimage1.png)
-
-3. If you didn't previously create any connections to your integration account,
-you're prompted to create that connection now. Name your connection,
-and select the integration account that you want to connect.
-
- ![integration account connection](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage1.png)
-
- Properties with an asterisk are required.
-
- | Property | Details |
- | | |
- | Connection Name * |Enter any name for your connection. |
- | Integration Account * |Enter a name for your integration account. Make sure that your integration account and logic app are in the same Azure location. |
-
-5. When you're done, your connection details should look similar to this example.
-To finish creating your connection, choose **Create**.
-
- ![integration account connection created](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage2.png)
-
- Your connection is now created.
-
- ![integration account connection details](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage3.png)
-
-#### Encode X12 messages by agreement name
-
-If you chose to encode X12 messages by agreement name,
-open the **Name of X12 agreement** list,
-enter or select your existing X12 agreement. Enter the XML message to encode.
-
-![Enter X12 agreement name and XML message to encode](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage4.png)
-
-#### Encode X12 messages by identities
-
-If you choose to encode X12 messages by identities, enter the sender identifier,
-sender qualifier, receiver identifier, and receiver qualifier as
-configured in your X12 agreement. Select the XML message to encode.
-
-![Provide identities for sender and receiver, select XML message to encode](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage5.png)
-
-## X12 Encode details
-
-The X12 Encode connector performs these tasks:
-
-* Agreement resolution by matching sender and receiver context properties.
-* Serializes the EDI interchange, converting XML-encoded messages into EDI transaction sets in the interchange.
-* Applies transaction set header and trailer segments
-* Generates an interchange control number, a group control number, and a transaction set control number for each outgoing interchange
-* Replaces separators in the payload data
-* Validates EDI and partner-specific properties
- * Schema validation of the transaction-set data elements against the message Schema
- * EDI validation performed on transaction-set data elements.
- * Extended validation performed on transaction-set data elements
-* Requests a Technical and/or Functional acknowledgment (if configured).
- * A Technical Acknowledgment generates as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver
- * A Functional Acknowledgment generates as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document
-
-## View the swagger
-See the [swagger details](/connectors/x12/).
-
-## Next steps
-[Learn more about the Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md "Learn about Enterprise Integration Pack")
-
logic-apps Logic Apps Enterprise Integration X12 Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-message-settings.md
+
+ Title: X12 message settings
+description: Reference guide for X12 message settings in agreements for Azure Logic Apps with Enterprise Integration Pack.
+
+ms.suite: integration
++++ Last updated : 08/15/2023++
+# Reference for X12 message settings in agreements for Azure Logic Apps
++
+This reference describes the properties that you can set in an X12 agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you.
+
+<a name="x12-inbound-messages"></a>
+
+## X12 Receive Settings
+
+![Screenshot showing Azure portal and X12 agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-receive-settings.png)
+
+<a name="inbound-identifiers"></a>
+
+### Identifiers
+
+| Property | Description |
+|-|-|
+| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA2** property. |
+| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA4** property. |
+| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+
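+The ISA1 through ISA4 values travel as the first four data elements in the interchange's ISA header segment. The following interchange header is an illustrative sketch, not output from any specific agreement; the sender and receiver IDs, date, and time are hypothetical:
+
+```text
+ISA*00*          *00*          *ZZ*SENDERID       *ZZ*RECEIVERID     *230815*1126*^*00501*000000001*0*P*:~
+```
+
+Here, the first `00` and the padded 10-character element that follows are ISA1 and ISA2 (authorization qualifier and information), while the second pair are ISA3 and ISA4 (security qualifier and information). Later elements in the same segment carry the [ISA11](#inbound-envelopes) repetition separator (`^` in this sketch) and the interchange control number (ISA13, `000000001` here).
+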
+<a name="inbound-acknowledgment"></a>
+
+### Acknowledgment
+
+| Property | Description |
+|-|-|
+| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. |
+| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. <br><br>For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <br><br>To enable generation of AK2 loops in functional acknowledgments for accepted transaction sets, select **Include AK2 / IK2 Loop**. |
+
+<a name="inbound-schemas"></a>
+
+### Schemas
+
+For this section, select a [schema](logic-apps-enterprise-integration-schemas.md) from your [integration account](logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01) and Sender Application (GS02). The EDI Receive Pipeline disassembles the incoming message by matching the values and schema that you set in this section with the values for ST01 and GS02 in the incoming message and with the schema of the incoming message. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Version** | The X12 version for the schema |
+| **Transaction Type (ST01)** | The transaction type |
+| **Sender Application (GS02)** | The sender application |
+| **Schema** | The schema file that you want to use |
+
+<a name="inbound-envelopes"></a>
+
+### Envelopes
+
+| Property | Description |
+|-|-|
+| **ISA11 Usage** | The separator to use in a transaction set: <br><br>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the incoming document in the EDI Receive Pipeline. <br><br>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, the caret (^) is usually used as the repetition separator. For HIPAA schemas, you can use only the caret. |
+
+<a name="inbound-control-numbers"></a>
+
+### Control Numbers
+
+| Property | Description |
+|-|-|
+| **Disallow Interchange control number duplicates** | Block duplicate interchanges. Check the interchange control number (ISA13) of the received interchange against previously received interchange control numbers. If a match is detected, the EDI Receive Pipeline doesn't process the interchange. <br><br>To specify the number of days to perform the check, enter a value for the **Check for duplicate ISA13 every (days)** property. |
+| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers. |
+| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers. |
+
+<a name="inbound-validations"></a>
+
+### Validations
+
+The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule to be set to **true**. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Message Type** | The EDI message type |
+| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
+| **Extended Validation** | If the data type isn't EDI, validation runs on the data element requirement, allowed repetition, enumerations, and data element length (minimum and maximum). |
+| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
+| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the inbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The inbound interchange must have trailing delimiters and separators. |
+
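+If you edit the agreement as JSON, these selections map to a `validationSettings` object under the receive side's `protocolSettings`, along with the duplicate checks from the [Control Numbers](#inbound-control-numbers) section. The following sketch is illustrative only; the property names are based on the Microsoft.Logic integration account agreement schema, so confirm them against your own agreement's JSON:
+
+```json
+"validationSettings": {
+  "validateCharacterSet": true,
+  "checkDuplicateInterchangeControlNumber": true,
+  "interchangeControlNumberValidityDays": 30,
+  "checkDuplicateGroupControlNumber": false,
+  "checkDuplicateTransactionSetControlNumber": false,
+  "validateEdiTypes": true,
+  "validateXsdTypes": false,
+  "allowLeadingAndTrailingSpacesAndZeroes": false,
+  "trimLeadingAndTrailingSpacesAndZeroes": false,
+  "trailingSeparatorPolicy": "NotAllowed"
+}
+```
+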
+<a name="inbound-internal-settings"></a>
+
+### Internal Settings
+
+| Property | Description |
+|-|-|
+| **Convert implied decimal format Nn to a base 10 numeric value** | Convert an EDI number that is specified with the format "Nn" into a base-10 numeric value. |
+| **Create empty XML tags if trailing separators are allowed** | Have the interchange sender include empty XML tags for trailing separators. |
+| **Split Interchange as transaction sets - suspend transaction sets on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope to the transaction set. Suspend only the transactions where the validation fails. |
+| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
+| **Preserve Interchange - suspend transaction sets on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, but continue to process all other transaction sets. |
+| **Preserve Interchange - suspend interchange on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
+
+<a name="x12-outbound-settings"></a>
+
+## X12 Send settings
+
+![Screenshot showing Azure portal and X12 agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-send-settings.png)
+
+<a name="outbound-identifiers"></a>
+
+### Identifiers
+
+| Property | Description |
+|-|-|
+| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA2** property. |
+| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA4** property. |
+| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+
+<a name="outbound-acknowledgment"></a>
+
+### Acknowledgment
+
+| Property | Description |
+|-|-|
+| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. <br><br>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
+| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <br><br>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
+
+<a name="outbound-schemas"></a>
+
+### Schemas
+
+For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01). After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Version** | The X12 version for the schema |
+| **Transaction Type (ST01)** | The transaction type for the schema |
+| **Schema** | The schema file that you want to use. If you select the schema first, the version and transaction type are automatically set. |
+
+<a name="outbound-envelopes"></a>
+
+### Envelopes
+
+| Property | Description |
+|-|-|
+| **ISA11 Usage** | The separator to use in a transaction set: <br><br>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the outbound document in the EDI Send Pipeline. <br><br>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, the caret (^) is usually used as the repetition separator. For HIPAA schemas, you can use only the caret. |
+
+<a name="outbound-control-version-number"></a>
+
+### Control Version Number
+
+For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each interchange. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Control Version Number (ISA12)** | The version of the X12 standard |
+| **Usage Indicator (ISA15)** | The context of an interchange, which is **Test** data, **Information** data, or **Production** data |
+| **Schema** | The schema to use for generating the GS and ST segments for an X12-encoded interchange that's sent to the EDI Send Pipeline. |
+| **GS1** | Optional, select the functional code. |
+| **GS2** | Optional, specify the application sender. |
+| **GS3** | Optional, specify the application receiver. |
+| **GS4** | Optional, select **CCYYMMDD** or **YYMMDD**. |
+| **GS5** | Optional, select **HHMM**, **HHMMSS**, or **HHMMSSdd**. |
+| **GS7** | Optional, select a value for the responsible agency. |
+| **GS8** | Optional, specify the schema document version. |
+
+<a name="outbound-control-numbers"></a>
+
+### Control Numbers
+
+| Property | Description |
+|-|-|
+| **Interchange Control Number (ISA13)** | The range of values for the interchange control number, which can have a minimum value of 1 and a maximum value of 999999999 |
+| **Group Control Number (GS06)** | The range of values for the group control number, which can have a minimum value of 1 and a maximum value of 999999999 |
+| **Transaction Set Control Number (ST02)** | The range of values for the transaction set control number, which can have a minimum value of 1 and a maximum value of 999999999 <br><br>- **Prefix**: Optional, an alphanumeric value <br>- **Suffix**: Optional, an alphanumeric value |
+
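+If you edit the agreement as JSON, these ranges correspond to an `envelopeSettings` object under the send side's `protocolSettings`. The following sketch uses illustrative bounds; the property names are based on the Microsoft.Logic integration account agreement schema, so verify them against your own agreement's JSON:
+
+```json
+"envelopeSettings": {
+  "interchangeControlNumberLowerBound": 1,
+  "interchangeControlNumberUpperBound": 999999999,
+  "rolloverInterchangeControlNumber": true,
+  "groupControlNumberLowerBound": 1,
+  "groupControlNumberUpperBound": 999999999,
+  "rolloverGroupControlNumber": true,
+  "transactionSetControlNumberLowerBound": 1,
+  "transactionSetControlNumberUpperBound": 999999999,
+  "rolloverTransactionSetControlNumber": true,
+  "transactionSetControlNumberPrefix": "",
+  "transactionSetControlNumberSuffix": ""
+}
+```
+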
+<a name="outbound-character-sets-separators"></a>
+
+### Character Sets and Separators
+
+The **Default** row shows the character set that's used as delimiters for a message schema. If you don't want to use the **Default** character set, you can enter a different set of delimiters for each message type. After you complete each row, a new empty row automatically appears.
+
+> [!TIP]
+>
+> To provide special character values, edit the agreement as JSON and provide the ASCII value for the special character.
+
+| Property | Description |
+|-|-|
+| **Character Set to be used** | The X12 character set, which is **Basic**, **Extended**, or **UTF8**. |
+| **Schema** | The schema that you want to use. After you select the schema, select the character set that you want to use, based on the separator descriptions below. |
+| **Input Type** | The input type for the character set |
+| **Component Separator** | A single character that separates composite data elements |
+| **Data Element Separator** | A single character that separates simple data elements within composite data |
+| **Replacement Character Separator** | A replacement character that replaces all separator characters in the payload data when generating the outbound X12 message |
+| **Segment Terminator** | A single character that indicates the end of an EDI segment |
+| **Suffix** | The character to use with the segment identifier. If you specify a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you must designate a suffix. |
+
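+As the preceding tip notes, you can set separators that the portal can't accept by editing the agreement as JSON and supplying the ASCII value for each special character. These separators correspond to a `framingSettings` object in the agreement's `protocolSettings`. The following sketch uses common illustrative values, where 42 is `*`, 58 is `:`, 126 is `~`, and 32 is the space character; the property names are based on the Microsoft.Logic integration account agreement schema, so confirm them against your own agreement's JSON:
+
+```json
+"framingSettings": {
+  "characterSet": "Basic",
+  "dataElementSeparator": 42,
+  "componentSeparator": 58,
+  "segmentTerminator": 126,
+  "segmentTerminatorSuffix": "None",
+  "replaceCharacter": 32,
+  "replaceSeparatorsInPayload": false
+}
+```
+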
+<a name="outbound-validation"></a>
+
+### Validation
+
+The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule to be set to **true**. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Message Type** | The EDI message type |
+| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
+| **Extended Validation** | If the data type isn't EDI, validation runs on the data element requirement, allowed repetition, enumerations, and data element length (minimum and maximum). |
+| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
+| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the outbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Send interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The outbound interchange must have trailing delimiters and separators. |
+
+<a name="hipaa-schemas"></a>
+
+## HIPAA schemas and message types
+
+When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than nine characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. If you don't reference the correct message type in your schema and in your agreement, you get this error message:
+
+`"The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement."`
+
+This table lists the affected message types, any variants, and the document version numbers that map to those message types:
+
+| Message type or variant | Description | Document version number (GS8) |
+|-|--|-|
+| 277 | Health Care Information Status Notification | 005010X212 |
+| 837_I | Health Care Claim Institutional | 004010X096A1 <br>005010X223A1 <br>005010X223A2 |
+| 837_D | Health Care Claim Dental | 004010X097A1 <br>005010X224A1 <br>005010X224A2 |
+| 837_P | Health Care Claim Professional | 004010X098A1 <br>005010X222 <br>005010X222A1 |
+
+You also need to disable EDI validation when you use these document version numbers because they result in an error that the character length is invalid.
+
+To specify these document version numbers and message types, follow these steps:
+
+1. In your HIPAA schema, replace the current message type with the variant message type for the document version number that you want to use.
+
+ For example, suppose you want to use document version number `005010X222A1` with the `837` message type. In your schema, replace each `"X12_00501_837"` value with the `"X12_00501_837_P"` value instead.
+
+ To update your schema, follow these steps:
+
+ 1. In the Azure portal, go to your integration account. Find and download your schema. Replace the message type and rename the schema file, and upload your revised schema to your integration account. For more information, see [Edit a schema](logic-apps-enterprise-integration-schemas.md#edit-schema).
+
+ 1. In your agreement's message settings, select the revised schema.
+
+1. In your agreement's `schemaReferences` object, add another entry that specifies the variant message type that matches your document version number.
+
+ For example, suppose you want to use document version number `005010X222A1` for the `837` message type. Your agreement has a `schemaReferences` section with these properties and values:
+
+ ```json
+ "schemaReferences": [
+ {
+ "messageId": "837",
+ "schemaVersion": "00501",
+ "schemaName": "X12_00501_837"
+ }
+ ]
+ ```
+
+ In this `schemaReferences` section, add another entry that has these values:
+
+ * `"messageId": "837_P"`
+ * `"schemaVersion": "00501"`
+ * `"schemaName": "X12_00501_837_P"`
+
+ When you're done, your `schemaReferences` section looks like this:
+
+ ```json
+ "schemaReferences": [
+ {
+ "messageId": "837",
+ "schemaVersion": "00501",
+ "schemaName": "X12_00501_837"
+ },
+ {
+ "messageId": "837_P",
+ "schemaVersion": "00501",
+ "schemaName": "X12_00501_837_P"
+ }
+ ]
+ ```
+
+1. In your agreement's message settings, disable EDI validation by clearing the **EDI Validation** checkbox either for each message type or for all message types if you're using the **Default** values.
+
+ ![Screenshot shows X12 agreement settings to disable validation for all message types or each message type.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-disable-validation.png)
+
+## Next steps
+
+[Exchange X12 messages](logic-apps-enterprise-integration-x12.md)
logic-apps Logic Apps Enterprise Integration X12 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12.md
Title: Exchange X12 messages for B2B integration
-description: Send, receive, and process X12 messages when building B2B enterprise integration solutions with Azure Logic Apps and the Enterprise Integration Pack.
+ Title: Exchange X12 messages in B2B workflows
+description: Exchange X12 messages between partners by creating workflows with Azure Logic Apps and Enterprise Integration Pack.
ms.suite: integration -+ Previously updated : 08/20/2022 Last updated : 08/15/2023
-# Exchange X12 messages for B2B enterprise integration using Azure Logic Apps and Enterprise Integration Pack
+# Exchange X12 messages using workflows in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-In Azure Logic Apps, you can create workflows that work with X12 messages by using **X12** operations. These operations include triggers and actions that you can use in your workflow to handle X12 communication. You can add X12 triggers and actions in the same way as any other trigger and action in a workflow, but you need to meet extra prerequisites before you can use X12 operations.
+To send and receive X12 messages in workflows that you create using Azure Logic Apps, use the **X12** connector, which provides operations that support and manage X12 communication.
-This article describes the requirements and settings for using X12 triggers and actions in your workflow. If you're looking for EDIFACT messages instead, review [Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md). If you're new to logic apps, see [What is Azure Logic Apps](logic-apps-overview.md) and the following documentation:
+This how-to guide shows how to add the X12 encoding and decoding actions to an existing logic app workflow. The **X12** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
-* [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
+## Connector technical reference
-* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
+The **X12** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For technical information about the **X12** connector, see the following documentation:
+
+* [Connector reference page](/connectors/x12/), which describes the triggers, actions, and limits as documented by the connector's Swagger file
+
+* [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits)
+
+ For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* A logic app resource and workflow where you want to use an X12 trigger or action. To use an X12 trigger, you need a blank workflow. To use an X12 action, you need a workflow that has an existing trigger.
+* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
-* An [integration account](logic-apps-enterprise-integration-create-integration-account.md) that's linked to your logic app resource. Both your logic app and integration account have to use the same Azure subscription and exist in the same Azure region or location.
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
- Your integration account also need to include the following B2B artifacts:
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the **X12** operation used in your workflow. The definitions for both partners must use the same X12 business identity qualifier.
- * At least two [trading partners](logic-apps-enterprise-integration-partners.md) that use the X12 identity qualifier.
-
- * An X12 [agreement](logic-apps-enterprise-integration-agreements.md) defined between your trading partners. For information about settings to use when receiving and sending messages, review [Receive Settings](#receive-settings) and [Send Settings](#send-settings).
+ * Defines an [X12 agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md).
> [!IMPORTANT]
+ >
> If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, you have to add a
- > `schemaReferences` section to your agreement. For more information, review [HIPAA schemas and message types](#hipaa-schemas).
+ > `schemaReferences` section to your agreement. For more information, see [HIPAA schemas and message types](logic-apps-enterprise-integration-x12-message-settings.md#hipaa-schemas).
- * The [schemas](logic-apps-enterprise-integration-schemas.md) to use for XML validation.
+ * Defines the [schemas](logic-apps-enterprise-integration-schemas.md) to use for XML validation.
> [!IMPORTANT]
- > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, make sure to review [HIPAA schemas and message types](#hipaa-schemas).
-
-## Connector reference
-
-For more technical information about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/x12/).
-
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version uses the [B2B message limits for ISE](../logic-apps/logic-apps-limits-and-config.md#b2b-protocol-limits).
-
-<a name="receive-settings"></a>
-
-## Receive Settings
-
-After you set the properties in your trading partner agreement, you can configure how this agreement identifies and handles inbound messages that you receive from your partner through this agreement.
-
-1. Under **Add**, select **Receive Settings**.
-
-1. Based on the agreement with the partner that exchanges messages with you, set the properties in the **Receive Settings** pane, which is organized into the following sections:
-
- * [Identifiers](#inbound-identifiers)
- * [Acknowledgement](#inbound-acknowledgement)
- * [Schemas](#inbound-schemas)
- * [Envelopes](#inbound-envelopes)
- * [Control Numbers](#inbound-control-numbers)
- * [Validations](#inbound-validations)
- * [Internal Settings](#inbound-internal-settings)
-
-1. When you're done, make sure to save your settings by selecting **OK**.
-
-<a name="inbound-identifiers"></a>
-
-### Receive Settings - Identifiers
-
-![Identifier properties for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-identifiers.png)
-
-| Property | Description |
-|-|-|
-| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA2** property. |
-| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA4** property. |
-| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-|||
+ >
+ > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, make sure to review [HIPAA schemas and message types](logic-apps-enterprise-integration-x12-message-settings.md#hipaa-schemas).
-<a name="inbound-acknowledgement"></a>
+* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account:
-### Receive Settings - Acknowledgement
+ | Logic app workflow | Link required? |
+ |--|-|
+ | Consumption | Connection to integration account required, but no link required. You can create the connection when you add the **X12** operation to your workflow. |
+ | Standard | Connection to integration account required, but no link required. You can create the connection when you add the **X12** operation to your workflow. |
-![Acknowledgement for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-acknowledgement.png)
+* The logic app resource and workflow where you want to use the X12 operations.
-| Property | Description |
-|-|-|
-| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. |
-| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. <p>For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <p>To enable generation of AK2 loops in functional acknowledgments for accepted transaction sets, select **Include AK2 / IK2 Loop**. |
+ For more information, see the following documentation:
-<a name="inbound-schemas"></a>
+ * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-### Receive Settings - Schemas
+ * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-![Schemas for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-schemas.png)
+<a name="encode"></a>
-For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01) and Sender Application (GS02). The EDI Receive Pipeline disassembles the incoming message by matching the values and schema that you set in this section with the values for ST01 and GS02 in the incoming message and with the schema of the incoming message. After you complete each row, a new empty row automatically appears.
+## Encode X12 messages
-| Property | Description |
-|-|-|
-| **Version** | The X12 version for the schema |
-| **Transaction Type (ST01)** | The transaction type |
-| **Sender Application (GS02)** | The sender application |
-| **Schema** | The schema file that you want to use |
-|||
+The **Encode to X12 message** operation performs the following tasks:
-<a name="inbound-envelopes"></a>
+* Resolves the agreement by matching sender and receiver context properties.
+* Serializes the EDI interchange and converts XML-encoded messages into EDI transaction sets in the interchange.
+* Applies transaction set header and trailer segments.
+* Generates an interchange control number, a group control number, and a transaction set control number for each outgoing interchange.
+* Replaces separators in the payload data.
+* Validates EDI and partner-specific properties.
+ * Schema validation of transaction-set data elements against the message schema.
+ * EDI validation on transaction-set data elements.
+ * Extended validation on transaction-set data elements.
+* Requests a Technical and Functional Acknowledgment, if configured.
+ * Generates a Technical Acknowledgment as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver.
+ * Generates a Functional Acknowledgment as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document.
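+
+The XML message that you provide to the encode action must conform to a schema that your agreement references. As a purely hypothetical sketch, a payload for a schema named `X12_00501_837_P` might begin as follows; the root element, namespace, and segment layout depend entirely on the schema that you deployed:
+
+```xml
+<ns0:X12_00501_837_P xmlns:ns0="http://schemas.microsoft.com/BizTalk/EDI/X12/2006">
+  <ST>
+    <ST01>837</ST01>
+    <ST02>0001</ST02>
+  </ST>
+  <!-- Remaining transaction set segments as defined by your schema -->
+</ns0:X12_00501_837_P>
+```
+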
-### Receive Settings - Envelopes
+### [Consumption](#tab/consumption)
-![Separators to use in transaction sets for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-envelopes.png)
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-| Property | Description |
-|-|-|
-| **ISA11 Usage** | The separator to use in a transaction set: <p>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the incoming document in the EDI Receive Pipeline. <p>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the carat (^) is used as the repetition separator. For HIPAA schemas, you can only use the carat. |
-|||
+1. In the designer, [follow these general steps to add the **X12** action named **Encode to X12 message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-<a name="inbound-control-numbers"></a>
+ > [!NOTE]
+ >
+ > If you want to use the **Encode to X12 message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier** that are specified by your X12 agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-### Receive Settings - Control Numbers
+1. When prompted, provide the following connection information for your integration account:
-![Handling control number duplicates for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-control-numbers.png)
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for the connection |
+ | **Integration Account** | Yes | From the list of available integration accounts, select the account to use. |
-| Property | Description |
-|-|-|
-| **Disallow Interchange control number duplicates** | Block duplicate interchanges. Check the interchange control number (ISA13) for the received interchange control number. If a match is detected, the EDI Receive Pipeline doesn't process the interchange. <p><p>To specify the number of days to perform the check, enter a value for the **Check for duplicate ISA13 every (days)** property. |
-| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers. |
-| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers. |
-|||
+ For example:
-<a name="inbound-validations"></a>
+ ![Screenshot showing Consumption workflow and connection information for action named Encode to X12 message by agreement name.](./media/logic-apps-enterprise-integration-x12/create-x12-encode-connection-consumption.png)
-### Receive Settings - Validations
+1. When you're done, select **Create**.
-![Validations for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-validations.png)
+1. In the X12 action information box, provide the following property values:
-The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Name of X12 agreement** | Yes | The X12 agreement to use. |
+ | **XML message to encode** | Yes | The XML message to encode |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Component separator** <br>- **Replacement character** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Control Version Number** <br>- **Application Sender Identifier/Code GS02** <br>- **Application Receiver Identifier/Code GS03** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
-| Property | Description |
-|-|-|
-| **Message Type** | The EDI message type |
-| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
-| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). |
-| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
-| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the inbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The inbound interchange must have trailing delimiters and separators. |
-|||
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload:
-<a name="inbound-internal-settings"></a>
+ ![Screenshot showing Consumption workflow, action named Encode to X12 message by agreement name, and action properties.](./media/logic-apps-enterprise-integration-x12/encode-x12-message-agreement-consumption.png)
-### Receive Settings - Internal Settings
+### [Standard](#tab/standard)
-![Internal settings for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-internal-settings.png)
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-| Property | Description |
-|-|-|
-| **Convert implied decimal format Nn to a base 10 numeric value** | Convert an EDI number that is specified with the format "Nn" into a base-10 numeric value. |
-| **Create empty XML tags if trailing separators are allowed** | Have the interchange sender include empty XML tags for trailing separators. |
-| **Split Interchange as transaction sets - suspend transaction sets on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope to the transaction set. Suspend only the transactions where the validation fails. |
-| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
-| **Preserve Interchange - suspend transaction sets on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, but continue to process all other transaction sets. |
-| **Preserve Interchange - suspend interchange on error** |Leaves the interchange intact, creates an XML document for the entire batched interchange. Suspends the entire interchange when one or more transaction sets in the interchange fail validation. |
-|||
+1. In the designer, [follow these general steps to add the **X12** action named **Encode to X12 message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-<a name="send-settings"></a>
+ > [!NOTE]
+ >
+ > If you want to use the **Encode to X12 message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier** that are specified by your X12 agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-## Send Settings
+1. When prompted, provide the following connection information for your integration account:
-After you set the agreement properties, you can configure how this agreement identifies and handles outbound messages that you send to your partner through this agreement.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection Name** | Yes | A name for the connection |
+ | **Integration Account ID** | Yes | The resource ID for your integration account, which has the following format: <br><br>**`/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>`** <br><br>For example: <br>`/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/integrationAccount-RG/providers/Microsoft.Logic/integrationAccounts/myIntegrationAccount` <br><br>To find this resource ID, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, select **Overview**. <br>3. On the **Overview** page, select **JSON View**. <br>4. From the **Resource ID** property, copy the value. |
+ | **Integration Account SAS URL** | Yes | The request endpoint URL that uses shared access signature (SAS) authentication to provide access to your integration account. This callback URL has the following format: <br><br>**`https://<request-endpoint-URI>?sp=<permissions>&sv=<SAS-version>&sig=<signature>`** <br><br>For example: <br>`https://prod-04.west-us.logic-azure.com:443/integrationAccounts/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX?api-version=2015-08-1-preview&sp=XXXXXXXXX&sv=1.0&sig=ZZZZZZZZZZZZZZZZZZZZZZZZZZZ` <br><br>To find this URL, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, under **Settings**, select **Callback URL**. <br>3. From the **Generated Callback URL** property, copy the value. |
+ | **Size of Control Number Block** | No | The block size of control numbers to reserve from an agreement for high throughput scenarios |
-1. Under **Add**, select **Send Settings**.
+ For example:
-1. Configure these properties based on your agreement with the partner that exchanges messages with you. For property descriptions, see the tables in this section.
+ ![Screenshot showing Standard workflow and connection information for action named Encode to X12 message by agreement name.](./media/logic-apps-enterprise-integration-x12/create-x12-encode-connection-standard.png)
- The **Send Settings** are organized into these sections:
+1. When you're done, select **Create**.
- * [Identifiers](#outbound-identifiers)
- * [Acknowledgement](#outbound-acknowledgement)
- * [Schemas](#outbound-schemas)
- * [Envelopes](#outbound-envelopes)
- * [Control Version Number](#outbound-control-version-number)
- * [Control Numbers](#outbound-control-numbers)
- * [Character Sets and Separators](#outbound-character-sets-separators)
- * [Validation](#outbound-validation)
+1. In the X12 action information box, provide the following property values:
-1. When you're done, make sure to save your settings by selecting **OK**.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Name Of X12 Agreement** | Yes | The X12 agreement to use. |
+ | **XML Message To Encode** | Yes | The XML message to encode |
+ | **Advanced parameters** | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Component separator** <br>- **Replacement character** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Control Version Number** <br>- **Application Sender Identifier/Code GS02** <br>- **Application Receiver Identifier/Code GS03** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
-<a name="outbound-identifiers"></a>
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload:
-### Send Settings - Identifiers
+ ![Screenshot showing Standard workflow, action named Encode to X12 message by agreement name, and action properties.](./media/logic-apps-enterprise-integration-x12/encode-x12-message-agreement-standard.png)
-![Identifier properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-identifiers.png)
-
-| Property | Description |
-|-|-|
-| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA2** property. |
-| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA4** property. |
-| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-|||
-
-<a name="outbound-acknowledgement"></a>
-
-### Send Settings - Acknowledgement
-
-![Acknowledgement properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-acknowledgement.png)
-
-| Property | Description |
-|-|-|
-| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
-| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgements. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgement from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
-|||
-
-<a name="outbound-schemas"></a>
-
-### Send Settings - Schemas
-
-![Schemas for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-schemas.png)
-
-For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01). After you complete each row, a new empty row automatically appears.
-
-| Property | Description |
-|-|-|
-| **Version** | The X12 version for the schema |
-| **Transaction Type (ST01)** | The transaction type for the schema |
-| **Schema** | The schema file that you want to use. If you select the schema first, the version and transaction type are automatically set. |
-|||
-
-<a name="outbound-envelopes"></a>
-
-### Send Settings - Envelopes
-
-![Separators in a transaction set to use for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-envelopes.png)
-
-| Property | Description |
-|-|-|
-| **ISA11 Usage** | The separator to use in a transaction set: <p>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the outbound document in the EDI Send Pipeline. <p>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the carat (^) is used as the repetition separator. For HIPAA schemas, you can only use the carat. |
-|||
-
-<a name="outbound-control-version-number"></a>
-
-### Send Settings - Control Version Number
+
-![Control version number for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-control-version-number.png)
+<a name="decode"></a>
-For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each interchange. After you complete each row, a new empty row automatically appears.
+## Decode X12 messages
-| Property | Description |
-|-|-|
-| **Control Version Number (ISA12)** | The version of the X12 standard |
-| **Usage Indicator (ISA15)** | The context of an interchange, which is either **Test** data, **Information** data, or **Production** data |
-| **Schema** | The schema to use for generating the GS and ST segments for an X12-encoded interchange that's sent to the EDI Send Pipeline. |
-| **GS1** | Optional, select the functional code. |
-| **GS2** | Optional, specify the application sender. |
-| **GS3** | Optional, specify the application receiver. |
-| **GS4** | Optional, select **CCYYMMDD** or **YYMMDD**. |
-| **GS5** | Optional, select **HHMM**, **HHMMSS**, or **HHMMSSdd**. |
-| **GS7** | Optional, select a value for the responsible agency. |
-| **GS8** | Optional, specify the schema document version. |
-|||
+The **Decode X12 message** operation performs the following tasks:
-<a name="outbound-control-numbers"></a>
+* Validates the envelope against the trading partner agreement.
-### Send Settings - Control Numbers
+* Validates EDI and partner-specific properties.
-![Control numbers for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-control-numbers.png)
+ * EDI structural validation and extended schema validation
+ * Interchange envelope structural validation
+ * Schema validation of the envelope against the control schema
+ * Schema validation of the transaction set data elements against the message schema
+ * EDI validation on transaction-set data elements
-| Property | Description |
-|-|-|
-| **Interchange Control Number (ISA13)** | The range of values for the interchange control number, which can have a minimum of value 1 and a maximum value of 999999999 |
-| **Group Control Number (GS06)** | The range of values for the group control number, which can have a minimum value of 1 and a maximum value of 999999999 |
-| **Transaction Set Control Number (ST02)** | The range of values for the transaction set control number, which can have a minimum value of 1 and a maximum value of 999999999 <p>- **Prefix**: Optional, an alphanumeric value <br>- **Suffix**: Optional, an alphanumeric value |
-|||
+* Verifies that the interchange, group, and transaction set control numbers aren't duplicates.
-<a name="outbound-character-sets-separators"></a>
+ * Checks the interchange control number against previously received interchanges.
+ * Checks the group control number against other group control numbers in the interchange.
+ * Checks the transaction set control number against other transaction set control numbers in that group.
-### Send Settings - Character Sets and Separators
+* Splits an interchange into transaction sets, or preserves the entire interchange:
-![Delimiters for message types in outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-character-sets-separators.png)
+ * Split the interchange into transaction sets or suspend transaction sets on error: Parse each transaction set. The X12 decode action outputs only those transaction sets failing validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`.
-The **Default** row shows the character set that's used as delimiters for a message schema. If you don't want to use the **Default** character set, you can enter a different set of delimiters for each message type. After you complete each row, a new empty row automatically appears.
+ * Split the interchange into transaction sets or suspend interchange on error: Parse each transaction set. If one or more transaction sets in the interchange fail validation, the X12 decode action outputs all the transaction sets in that interchange to `badMessages`.
-> [!TIP]
-> To provide special character values, edit the agreement as JSON and provide the ASCII value for the special character.
+ * Preserve the interchange or suspend transaction sets on error: Preserve the interchange and process the entire batched interchange. The X12 decode action outputs only those transaction sets failing validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`.
-| Property | Description |
-|-|-|
-| **Character Set to be used** | The X12 character set, which is either **Basic**, **Extended**, or **UTF8**. |
-| **Schema** | The schema that you want to use. After you select the schema, select the character set that you want to use, based on the separator descriptions below. |
-| **Input Type** | The input type for the character set |
-| **Component Separator** | A single character that separates composite data elements |
-| **Data Element Separator** | A single character that separates simple data elements within composite data |
-| **replacement Character Separator** | A replacement character that replaces all separator characters in the payload data when generating the outbound X12 message |
-| **Segment Terminator** | A single character that indicates the end of an EDI segment |
-| **Suffix** | The character to use with the segment identifier. If you specify a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you must designate a suffix. |
-|||
+ * Preserve the interchange or suspend interchange on error: Preserve the interchange and process the entire batched interchange. If one or more transaction sets in the interchange fail validation, the X12 decode action outputs all the transaction sets in that interchange to `badMessages`.
-<a name="outbound-validation"></a>
+* Generates a Technical and Functional Acknowledgment, if configured.
-### Send Settings - Validation
+ * Generates a Technical Acknowledgment as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver.
+ * Generates a Functional Acknowledgment as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document.
-![Validation properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-validation.png)
+### [Consumption](#tab/consumption)
-The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-| Property | Description |
-|-|-|
-| **Message Type** | The EDI message type |
-| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
-| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). |
-| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
-| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the outbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Send interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The outbound interchange must have trailing delimiters and separators. |
-|||
+1. In the designer, [follow these general steps to add the **X12** action named **Decode X12 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-<a name="hipaa-schemas"></a>
+1. When prompted, provide the following connection information for your integration account:
-## HIPAA schemas and message types
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for the connection |
+ | **Integration Account** | Yes | From the list of available integration accounts, select the account to use. |
-When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than nine characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. If you don't reference the correct message type in your schema and in your agreement, you get this error message:
+ For example:
-`"The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement."`
+ ![Screenshot showing Consumption workflow and connection information for action named Decode X12 message.](./media/logic-apps-enterprise-integration-x12/create-x12-decode-connection-consumption.png)
-This table lists the affected message types, any variants, and the document version numbers that map to those message types:
+1. When you're done, select **Create**.
-| Message type or variant | Description | Document version number (GS8) |
-|-|--|-|
-| 277 | Health Care Information Status Notification | 005010X212 |
-| 837_I | Health Care Claim Institutional | 004010X096A1 <br>005010X223A1 <br>005010X223A2 |
-| 837_D | Health Care Claim Dental | 004010X097A1 <br>005010X224A1 <br>005010X224A2 |
-| 837_P | Health Care Claim Professional | 004010X098A1 <br>005010X222 <br>005010X222A1 |
-|||
+1. In the X12 action information box, provide the following property values:
-You also need to disable EDI validation when you use these document version numbers because they result in an error that the character length is invalid.
+ | Property | Required | Description |
+ |-|-|-|
+ | **X12 flat file message to decode** | Yes | The X12 message in flat file format to decode <br><br>**Note**: The XML message payload or content for the message array, good or bad, is base64 encoded. So, you must use an expression that processes this content. For example, the following expression processes the message content as XML: <br><br>**`xml(base64ToBinary(item()?['Body']))`** |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Preserve Interchange** <br>- **Suspend Interchange on Error** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
-To specify these document version numbers and message types, follow these steps:
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload, but you must first preprocess this content using an expression:
-1. In your HIPAA schema, replace the current message type with the variant message type for the document version number that you want to use.
+ ![Screenshot showing Consumption workflow, action named Decode X12 message, and action properties.](./media/logic-apps-enterprise-integration-x12/decode-x12-message-consumption.png)
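   The same preprocessing applies if you edit the workflow in code view. The following sketch shows how the expression might appear in the action's inputs; the **Body** property name and the surrounding action shape are assumptions for illustration, while `xml()`, `base64ToBinary()`, and `triggerBody()` are standard workflow expression functions:

   ```json
   "inputs": {
       "body": "@xml(base64ToBinary(triggerBody()?['Body']))"
   }
   ```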
- For example, suppose you want to use document version number `005010X222A1` with the `837` message type. In your schema, replace each `"X12_00501_837"` value with the `"X12_00501_837_P"` value instead.
+### [Standard](#tab/standard)
- To update your schema, follow these steps:
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
- 1. In the Azure portal, go to your integration account. Find and download your schema. Replace the message type and rename the schema file, and upload your revised schema to your integration account. For more information, see [Edit a schema](logic-apps-enterprise-integration-schemas.md#edit-schema).
+1. In the designer, [follow these general steps to add the **X12** action named **Decode X12 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- 1. In your agreement's message settings, select the revised schema.
+1. When prompted, provide the following connection information for your integration account:
-1. In your agreement's `schemaReferences` object, add another entry that specifies the variant message type that matches your document version number.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection Name** | Yes | A name for the connection |
+ | **Integration Account ID** | Yes | The resource ID for your integration account, which has the following format: <br><br>**`/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>`** <br><br>For example: <br>`/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/integrationAccount-RG/providers/Microsoft.Logic/integrationAccounts/myIntegrationAccount` <br><br>To find this resource ID, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, select **Overview**. <br>3. On the **Overview** page, select **JSON View**. <br>4. From the **Resource ID** property, copy the value. |
+ | **Integration Account SAS URL** | Yes | The request endpoint URL that uses shared access signature (SAS) authentication to provide access to your integration account. This callback URL has the following format: <br><br>**`https://<request-endpoint-URI>?api-version=<API-version>&sp=<permissions>&sv=<SAS-version>&sig=<signature>`** <br><br>For example: <br>`https://prod-04.west-us.logic-azure.com:443/integrationAccounts/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX?api-version=2015-08-1-preview&sp=XXXXXXXXX&sv=1.0&sig=ZZZZZZZZZZZZZZZZZZZZZZZZZZZ` <br><br>To find this URL, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, under **Settings**, select **Callback URL**. <br>3. From the **Generated Callback URL** property, copy the value. |
+ | **Size of Control Number Block** | No | The block size of control numbers to reserve from an agreement for high throughput scenarios |
- For example, suppose you want to use document version number `005010X222A1` for the `837` message type. Your agreement has a `schemaReferences` section with these properties and values:
+ For example:
- ```json
- "schemaReferences": [
- {
- "messageId": "837",
- "schemaVersion": "00501",
- "schemaName": "X12_00501_837"
- }
- ]
- ```
+ ![Screenshot showing Standard workflow and connection information for action named Decode X12 message.](./media/logic-apps-enterprise-integration-x12/create-x12-decode-connection-standard.png)
- In this `schemaReferences` section, add another entry that has these values:
+1. When you're done, select **Create**.
- * `"messageId": "837_P"`
- * `"schemaVersion": "00501"`
- * `"schemaName": "X12_00501_837_P"`
+1. In the X12 action information box, provide the following property values:
- When you're done, your `schemaReferences` section looks like this:
+ | Property | Required | Description |
+ |-|-|-|
+ | **X12 Flat File Message To Decode** | Yes | The X12 message in flat file format to decode <br><br>**Note**: The XML message payload or content for the message array, good or bad, is base64 encoded. So, you must use an expression that processes this content. For example, the following expression processes the message content as XML: <br><br>**`xml(base64ToBinary(item()?['Body']))`** |
+ | **Advanced parameters** | No | This operation includes the following other parameters: <br><br>- **Preserve Interchange** <br>- **Suspend Interchange on Error** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
- ```json
- "schemaReferences": [
- {
- "messageId": "837",
- "schemaVersion": "00501",
- "schemaName": "X12_00501_837"
- },
- {
- "messageId": "837_P",
- "schemaVersion": "00501",
- "schemaName": "X12_00501_837_P"
- }
- ]
- ```
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload, but you must first preprocess this content using an expression:
-1. In your agreement's message settings, disable EDI validation by clearing the **EDI Validation** checkbox either for each message type or for all message types if you're using the **Default** values.
+ ![Screenshot showing Standard workflow, action named Decode X12 message, and action properties.](./media/logic-apps-enterprise-integration-x12/decode-x12-message-standard.png)
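   As an alternative to the portal steps in the connection table earlier, you can look up the integration account's resource ID with the Azure CLI. This is a sketch that assumes the Azure CLI is installed and that you're signed in:

   ```bash
   # Return only the resource ID for the integration account.
   az resource show \
       --resource-group <resource-group-name> \
       --resource-type "Microsoft.Logic/integrationAccounts" \
       --name <integration-account-name> \
       --query id --output tsv
   ```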
- ![Disable validation for all message types or each message type](./media/logic-apps-enterprise-integration-x12/x12-disable-validation.png)
+ ## Next steps * [X12 TA1 technical acknowledgments and error codes](logic-apps-enterprise-integration-x12-ta1-acknowledgment.md) * [X12 997 functional acknowledgments and error codes](logic-apps-enterprise-integration-x12-997-acknowledgment.md)
-* [Managed connectors for Azure Logic Apps](../connectors/managed.md)
-* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
+* [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md)
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
In a Standard logic app workflow that starts with the Request trigger (but not a
* An inbound call to the request endpoint can use only one authorization scheme, either Azure AD OAuth or [Shared Access Signature (SAS)](#sas). Although using one scheme doesn't disable the other scheme, using both schemes at the same time causes an error because Azure Logic Apps doesn't know which scheme to choose.
- To enable Azure AD OAuth so that this option is the only way to call the request endpoint, use the following steps:
-
- 1. To enable the capability to check the OAuth access token, [follow the steps to include 'Authorization' header in the Request or HTTP webhook trigger outputs](#include-auth-header).
-
- > [!NOTE]
- >
- > This step makes the `Authorization` header visible in the workflow's run history
- > and in the trigger's outputs.
-
- 1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
-
- 1. On the trigger, in the upper right corner, select the ellipses (**...**) button, and then select **Settings**.
-
- 1. Under **Trigger Conditions**, select **Add**. In the trigger condition box, enter the following expression, and select **Done**.
-
- `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')`
-
- > [!NOTE]
- > If you call the trigger endpoint without the correct authorization,
- > the run history just shows the trigger as `Skipped` without any
- > message that the trigger condition has failed.
-
-* Only [Bearer-type](../active-directory/develop/active-directory-v2-protocols.md#tokens) authorization schemes are supported for Azure AD OAuth access tokens, which means that the `Authorization` header for the access token must specify the `Bearer` type.
+* Azure Logic Apps supports either the [bearer type](../active-directory/develop/active-directory-v2-protocols.md#tokens) or [proof-of-possession type (Consumption logic app only)](/entra/msal/dotnet/advanced/proof-of-possession-tokens) authorization schemes for Azure AD OAuth access tokens. However, the `Authorization` header for the access token must specify either the `Bearer` type or `PoP` type. For more information about how to get and use a PoP token, see [Get a Proof of Possession (PoP) token](#get-pop).
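  For example, an inbound call that uses the bearer scheme passes the access token in the `Authorization` header. The following sketch uses placeholder values for the endpoint URL, token, and payload:

  ```bash
  curl -X POST "https://<request-endpoint-URL>" \
      -H "Authorization: Bearer <access-token>" \
      -H "Content-Type: application/json" \
      -d '{ "message": "hello" }'
  ```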
* Your logic app resource is limited to a maximum number of authorization policies. Each authorization policy also has a maximum number of [claims](../active-directory/develop/developer-glossary.md#claim). For more information, review [Limits and configuration for Azure Logic Apps](../logic-apps/logic-apps-limits-and-config.md#authentication-limits).
In a Standard logic app workflow that starts with the Request trigger (but not a
} ```
+#### Enable Azure AD OAuth as the only option to call a request endpoint
+
+1. Set up your Request or HTTP webhook trigger with the capability to check the OAuth access token by [following the steps to include the 'Authorization' header in the Request or HTTP webhook trigger outputs](#include-auth-header).
+
+ > [!NOTE]
+ >
+ > This step makes the `Authorization` header visible in the
+ > workflow's run history and in the trigger's outputs.
+
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
+
+1. On the trigger, in the upper right corner, select the ellipses (**...**) button, and then select **Settings**.
+
+1. Under **Trigger Conditions**, select **Add**. In the trigger condition box, enter either of the following expressions, based on the token type you want to use, and select **Done**.
+
+ `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')`
+
+ -or-
+
+ `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'PoP')`
+
+If you call the trigger endpoint without the correct authorization, the run history just shows the trigger as `Skipped` without any message that the trigger condition has failed.
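In the underlying workflow definition, a trigger condition that you add this way appears in the trigger's `conditions` array. The following sketch shows the bearer-type check on a Request trigger:

```json
"triggers": {
    "manual": {
        "type": "Request",
        "kind": "Http",
        "conditions": [
            {
                "expression": "@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')"
            }
        ]
    }
}
```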
+
+<a name="get-pop"></a>
+
+#### Get a Proof-of-Possession (PoP) token
+
+The Microsoft Authentication Library (MSAL) libraries provide PoP tokens for you to use. If the logic app workflow that you want to call requires a PoP token, you can get this token using the MSAL libraries. The following samples show how to acquire PoP tokens:
+
+* [A .NET Core daemon console application calling a protected Web API with its own identity](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi)
+
+* [SignedHttpRequest aka PoP (Proof of Possession)](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/wiki/SignedHttpRequest-aka-PoP-(Proof-of-Possession))
+
+To use the PoP token with your Consumption logic app, follow the next section to [set up Azure AD OAuth](#enable-azure-ad-inbound).
+ <a name="enable-azure-ad-inbound"></a> #### Enable Azure AD OAuth for your Consumption logic app resource
In the [Azure portal](https://portal.azure.com), add one or more authorization p
1. On the logic app menu, under **Settings**, select **Authorization**. After the Authorization pane opens, select **Add policy**.
- ![Select "Authorization" > "Add policy"](./media/logic-apps-securing-a-logic-app/add-azure-active-directory-authorization-policies.png)
+ ![Screenshot that shows Azure portal, Consumption logic app menu, Authorization page, and selected button to add policy.](./media/logic-apps-securing-a-logic-app/add-azure-active-directory-authorization-policies.png)
1. Provide information about the authorization policy by specifying the [claim types](../active-directory/develop/developer-glossary.md#claim) and values that your logic app expects in the access token presented by each inbound call to the Request trigger:
- ![Provide information for authorization policy](./media/logic-apps-securing-a-logic-app/set-up-authorization-policy.png)
+ ![Screenshot that shows Azure portal, Consumption logic app Authorization page, and information for authorization policy.](./media/logic-apps-securing-a-logic-app/set-up-authorization-policy.png)
| Property | Required | Type | Description | |-|-||-| | **Policy name** | Yes | String | The name that you want to use for the authorization policy |
- | **Claims** | Yes | String | The claim types and values that your workflow accepts from inbound calls. Here are the available claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. <br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). You can also specify your own claim type and value. |
+ | **Policy type** | Yes | String | Either **AAD** for bearer type tokens or **AADPOP** for Proof-of-Possession type tokens. |
+ | **Claims** | Yes | String | A key-value pair that specifies the claim type and value that the workflow's Request trigger expects in the access token presented by each inbound call to the trigger. You can add any standard claim you want by selecting **Add standard claim**. To add a claim that's specific to a PoP token, select **Add custom claim**. <br><br>Available standard claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. <br><br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br><br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). You can also specify your own claim type and value. |
+
+ The following example shows the information for a PoP token:
+
+ ![Screenshot that shows Azure portal, Consumption logic app Authorization page, and information for a proof-of-possession policy.](./media/logic-apps-securing-a-logic-app/pop-policy-example.png)
1. To add another claim, select from these options:
In the [Azure portal](https://portal.azure.com), add one or more authorization p
1. To include the `Authorization` header from the access token in the request-based trigger outputs, review [Include 'Authorization' header in request and HTTP webhook trigger outputs](#include-auth-header).
-Workflow properties such as policies don't appear in your logic app's code view in the Azure portal. To access your policies programmatically, call the following API through Azure Resource
+Workflow properties such as policies don't appear in your workflow's code view in the Azure portal. To access your policies programmatically, call the following API through Azure Resource
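In the resource definition that Azure Resource Manager returns, authorization policies appear under the `accessControl` section. The following sketch shows a bearer-type (**AAD**) policy with placeholder tenant and audience values; a Proof-of-Possession policy would use the **AADPOP** type from the earlier table instead:

```json
"properties": {
    "accessControl": {
        "triggers": {
            "openAuthenticationPolicies": {
                "policies": {
                    "myAuthPolicy": {
                        "type": "AAD",
                        "claims": [
                            { "name": "iss", "value": "https://sts.windows.net/<tenant-ID>/" },
                            { "name": "aud", "value": "<audience>" }
                        ]
                    }
                }
            }
        }
    }
}
```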
<a name="define-authorization-policy-template"></a>
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
ms.suite: integration Previously updated : 08/07/2023 Last updated : 08/18/2023 tags: connectors
tags: connectors
This multipart how-to guide shows how to access your SAP server from a workflow in Azure Logic Apps using the SAP connector. You can use the SAP connector's operations to create automated workflows that run when triggered by events in your SAP server or in other systems and run actions to manage resources on your SAP server.
-Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps, but this connector is currently in preview and subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](#connector-technical-reference).
+Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps. If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](#connector-technical-reference).
## SAP compatibility
The SAP connector has different versions, based on [logic app type and host envi
|--|-|-| | **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | | **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label, and the ISE-native version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. <br><br>**Note**: Make sure to use the ISE-native version, not the managed version. <br><br>For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector (preview), which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
## Connector differences
The SAP built-in connector significantly differs from the SAP managed connector
The SAP built-in connector doesn't use the shared or global connector infrastructure, which means timeouts are longer at 5 minutes compared to the SAP managed connector (two minutes) and the SAP ISE-versioned connector (four minutes). Long-running requests work without you having to implement the [long-running webhook-based request action pattern](logic-apps-scenario-function-sb-trigger.md).
-* By default, the preview SAP built-in connector operations are *stateless*. However, you can [enable stateful mode (affinity) for these operations](../connectors/enable-stateful-affinity-built-in-connectors.md).
+* By default, the SAP built-in connector operations are *stateless*. However, you can [enable stateful mode (affinity) for these operations](../connectors/enable-stateful-affinity-built-in-connectors.md).
In stateful mode, the SAP built-in connector supports high availability and horizontal scale-out configurations. By comparison, the SAP managed connector has restrictions regarding the on-premises data gateway limited to a single instance for triggers and to clusters only in failover mode for actions. For more information, see [SAP managed connector - Known issues and limitations](#known-issues-limitations).
Along with simple string and number inputs, the SAP connector accepts the follow
1. In the action named **\[BAPI] Call method in SAP**, disable the auto-commit feature. 1. Call the action named **\[BAPI] Commit transaction** instead.
-### SAP built-in connector
-
-The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code:
-
- ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
The preview SAP built-in connector trigger named **Register SAP RFC server for t
> When you use a Premium-level ISE, use the ISE-native SAP connector, not the SAP managed connector, > which doesn't natively run in an ISE. For more information, review the [ISE prerequisites](#ise-prerequisites).
-* By default, the preview SAP built-in connector operations are *stateless*. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md).
+* By default, the SAP built-in connector operations are *stateless*. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md).
* To use either the SAP managed connector trigger named **When a message is received from SAP** or the SAP built-in trigger named **Register SAP RFC server for trigger**, complete the following tasks:
The preview SAP built-in connector trigger named **Register SAP RFC server for t
> In Standard workflows, the SAP built-in trigger named **Register SAP RFC server for trigger** uses the Azure > Functions trigger instead, and shows only the actual callbacks from SAP.
+ * For the SAP built-in connector trigger named **Register SAP RFC server for trigger**, you have to enable virtual network integration and private ports by following the article at [Enabling Service Bus and SAP built-in connectors for stateful Logic Apps in Standard](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/enabling-service-bus-and-sap-built-in-connectors-for-stateful/ba-p/3820381). You can also run the workflow in Visual Studio Code to fire the trigger locally. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code:
+
+ - **WEBSITE_PRIVATE_IP**: Set this environment variable value to **127.0.0.1** as the localhost address.
+ - **WEBSITE_PRIVATE_PORTS**: Set this environment variable value to two free and usable ports on your local computer, separating the values with a comma (**,**), for example, **8080,8088**. For one way to set these variables locally, see the sketch after this list.
+ * The message content to send to your SAP server, such as a sample IDoc file. This content must be in XML format and include the namespace of the [SAP action](/connectors/sap/#actions) that you want to use. You can [send IDocs with a flat file schema by wrapping them in an XML envelope](sap-create-example-scenario-workflows.md#send-flat-file-idocs). <a name="network-prerequisites"></a>
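For the Visual Studio Code requirement earlier, one way to set those environment variables for local runs is through the Standard logic app project's **local.settings.json** file. This sketch assumes local development storage; adjust the values for your environment:

```json
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "WEBSITE_PRIVATE_IP": "127.0.0.1",
        "WEBSITE_PRIVATE_PORTS": "8080,8088"
    }
}
```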
For a Consumption workflow in multi-tenant Azure Logic Apps, the SAP managed con
<a name="single-tenant-prerequisites"></a>
-For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP *built-in* connector to directly access resources that are protected by an Azure virtual network. You can also use other built-in connectors that let workflows directly access on-premises resources without having to use the on-premises data gateway.
+For a Standard workflow in single-tenant Azure Logic Apps, use the SAP *built-in* connector to directly access resources that are protected by an Azure virtual network. You can also use other built-in connectors that let workflows directly access on-premises resources without having to use the on-premises data gateway. For additional requirements regarding the SAP built-in connector trigger named **Register SAP RFC server for trigger**, see [Prerequisites](#prerequisites).
1. To use the SAP connector, you need to download the following files and have them ready to upload to your Standard logic app resource. For more information, see [SAP NCo client library prerequisites](#sap-client-library-prerequisites):
For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP *
1. In the **net472** folder, upload the assembly files larger than 4 MB.
-#### SAP trigger requirements
-
-The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code:
-
- ### [ISE](#tab/ise) <a name="ise-prerequisites"></a>
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
See the [AutoML package](/python/api/azure-ai-ml/azure.ai.ml.automl) for changin
With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](concept-onnx.md).
-See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-algorithms).
+See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-algorithms).
The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](./how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html).
machine-learning Concept Automl Forecasting Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md
Each Series in Own Group (1:1) | All Series in Single Group (N:1)
-| -- Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, TCNForecaster
-More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb).
+More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb).
## Next steps
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md) for more information.
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
[!INCLUDE [automl-sdk-version](includes/machine-learning-automl-sdk-version.md)]
machine-learning How To Automl Forecasting Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md
You can start by reading the [Set up AutoML to train a time-series forecasting m
- [Bike share example](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb) - [Forecasting using deep learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb)-- [Many Models solution](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb)-- [Forecasting recipes](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb)-- [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
+- [Many Models solution](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb)
+- [Forecasting recipes](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb)
+- [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
## Why is AutoML slow on my data?
To choose between them, note that NRMSE penalizes outliers in the training data
## How can I improve the accuracy of my model? - Ensure that you're configuring AutoML the best way for your data. For more information, see the [What modeling configuration should I use?](#what-modeling-configuration-should-i-use) answer.-- Check out the [forecasting recipes notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) for step-by-step guides on how to build and improve forecast models. -- Evaluate the model by using back tests over several forecasting cycles. This procedure gives a more robust estimate of forecasting error and gives you a baseline to measure improvements against. For an example, see the [back-testing notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb).
+- Check out the [forecasting recipes notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) for step-by-step guides on how to build and improve forecast models.
+- Evaluate the model by using back tests over several forecasting cycles. This procedure gives a more robust estimate of forecasting error and gives you a baseline to measure improvements against. For an example, see the [back-testing notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb).
- If the data is noisy, consider aggregating it to a coarser frequency to increase the signal-to-noise ratio. For more information, see [Frequency and target data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation). - Add new features that can help predict the target. Subject matter expertise can help greatly when you're selecting training data. - Compare validation and test metric values, and determine if the selected model is underfitting or overfitting the data. This knowledge can guide you to a better training configuration. For example, you might determine that you need to use more cross-validation folds in response to overfitting.
AutoML supports the following advanced prediction scenarios:
- Forecasting beyond the forecast horizon - Forecasting when there's a gap in time between training and forecasting periods
-For examples and details, see the [notebook for advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+For examples and details, see the [notebook for advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
## How do I view metrics from forecasting training jobs?
machine-learning How To Manage Kubernetes Instance Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md
Title: Create and manage instance types for efficient compute resource utilization
-description: Learn about what is instance types, and how to create and manage them, and what are benefits of using instance types
+ Title: Create and manage instance types for efficient utilization of compute resources
+description: Learn about what instance types are, how to create and manage them, and what the benefits of using them are.
-# Create and manage instance types for efficient compute resource utilization
+# Create and manage instance types for efficient utilization of compute resources
-## What are instance types?
+Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure virtual machine, an example of an instance type is `STANDARD_D2_V3`.
-Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure VM, an example for an instance type is `STANDARD_D2_V3`.
+In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that's installed with the Azure Machine Learning extension. Two elements in the Azure Machine Learning extension represent the instance types:
-In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the Azure Machine Learning extension. Two elements in Azure Machine Learning extension represent the instance types:
-[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
-and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+- Use [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) to specify which node a pod should run on. The node must have a corresponding label.
+- In the [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) section, you can set the compute resources (CPU, memory, and NVIDIA GPU) for the pod.
-In short, a `nodeSelector` lets you specify which node a pod should run on. The node must have a corresponding label. In the `resources` section, you can set the compute resources (CPU, memory and NVIDIA GPU) for the pod.
+If you [specify a nodeSelector field when deploying the Azure Machine Learning extension](./how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings), the `nodeSelector` field will be applied to all instance types. This means that:
->[!IMPORTANT]
->
-> If you have [specified a nodeSelector when deploying the Azure Machine Learning extension](./how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings), the nodeSelector will be applied to all instance types. This means that:
-> - For each instance type creating, the specified nodeSelector should be a subset of the extension-specified nodeSelector.
-> - If you use an instance type **with nodeSelector**, the workload will run on any node matching both the extension-specified nodeSelector and the instance type-specified nodeSelector.
-> - If you use an instance type **without a nodeSelector**, the workload will run on any node mathcing the extension-specified nodeSelector.
+- For each instance type that you create, the specified `nodeSelector` field should be a subset of the extension-specified `nodeSelector` field.
+- If you use an instance type with `nodeSelector`, the workload will run on any node that matches both the extension-specified `nodeSelector` field and the instance-type-specified `nodeSelector` field.
+- If you use an instance type without a `nodeSelector` field, the workload will run on any node that matches the extension-specified `nodeSelector` field.
+## Create a default instance type
-## Default instance type
-
-By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an Azure Machine Learning workspace:
-- If you don't apply a `nodeSelector`, it means the pod can get scheduled on any node.-- The workload's pods are assigned default resources with 0.1 cpu cores, 2-GB memory and 0 GPU for request.-- The resources used by the workload's pods are limited to 2 cpu cores and 8-GB memory:
+By default, an instance type called `defaultinstancetype` is created when you attach a Kubernetes cluster to an Azure Machine Learning workspace. Here's the definition:
```yaml resources:
resources:
nvidia.com/gpu: null ```
-> [!NOTE]
-> - The default instance type purposefully uses little resources. To ensure all ML workloads run with appropriate resources, for example GPU resource, it is highly recommended to create custom instance types.
-> - `defaultinstancetype` will not appear as an InstanceType custom resource in the cluster when running the command ```kubectl get instancetype```, but it will appear in all clients (UI, CLI, SDK).
-> - `defaultinstancetype` can be overridden with a custom instance type definition having the same name as `defaultinstancetype` (see [Create custom instance types](#create-custom-instance-types) section)
+If you don't apply a `nodeSelector` field, the pod can be scheduled on any node. The workload's pods are assigned default resources with 0.1 CPU cores, 2 GB of memory, and 0 GPUs for the request. The resources that the workload's pods use are limited to 2 CPU cores and 8 GB of memory.
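Reassembled from the fragment above and these values, the complete default definition looks similar to this sketch:

```yaml
resources:
  requests:
    cpu: "100m"
    memory: "2Gi"
  limits:
    cpu: "2"
    memory: "8Gi"
    nvidia.com/gpu: null
```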
+
+The default instance type purposefully uses few resources. To ensure that all machine learning workloads run with appropriate resources (for example, GPU resource), we highly recommend that you [create custom instance types](#create-a-custom-instance-type).
+
+Keep in mind the following points about the default instance type:
+
+- `defaultinstancetype` doesn't appear as an `InstanceType` custom resource in the cluster when you're running the command ```kubectl get instancetype```, but it does appear in all clients (UI, Azure CLI, SDK).
+- `defaultinstancetype` can be overridden with the definition of a custom instance type that has the same name.
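For example, after you create custom instance types as described in the next section, you can list them from the cluster; per the note above, `defaultinstancetype` doesn't appear in this output:

```bash
kubectl get instancetype
```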
-### Create custom instance types
+## Create a custom instance type
-To create a new instance type, create a new custom resource for the instance type CRD. For example:
+To create a new instance type, create a new custom resource for the instance type CRD. For example:
```bash kubectl apply -f my_instance_type.yaml ```
-With `my_instance_type.yaml`:
+Here are the contents of *my_instance_type.yaml*:
+ ```yaml apiVersion: amlarc.azureml.com/v1alpha1 kind: InstanceType
spec:
memory: "1500Mi" ```
-The following steps create an instance type with the labeled behavior:
-- Pods are scheduled only on nodes with label `mylabel: mylabelvalue`.-- Pods are assigned resource requests of `700m` CPU and `1500Mi` memory.-- Pods are assigned resource limits of `1` CPU, `2Gi` memory and `1` NVIDIA GPU.
+The preceding code creates an instance type with the following behavior:
-Creation of custom instance types must meet the following parameters and definition rules, otherwise the instance type creation fails:
+- Pods are scheduled only on nodes that have the label `mylabel: mylabelvalue`.
+- Pods are assigned resource requests of `700m` for CPU and `1500Mi` for memory.
+- Pods are assigned resource limits of `1` for CPU, `2Gi` for memory, and `1` for NVIDIA GPU.
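Pieced together from the fragments and behavior described above, the complete custom resource would look similar to the following sketch; the metadata name is a placeholder:

```yaml
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  name: myinstancetypename
spec:
  nodeSelector:
    mylabel: mylabelvalue
  resources:
    limits:
      cpu: "1"
      nvidia.com/gpu: 1
      memory: "2Gi"
    requests:
      cpu: "700m"
      memory: "1500Mi"
```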
-| Parameter | Required | Description |
-| | | |
-| name | required | String values, which must be unique in cluster.|
-| CPU request | required | String values, which cannot be 0 or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers; for example, `"1"` is equivalent to `1000m`.|
-| Memory request | required | String values, which cannot be 0 or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.|
-| CPU limit | required | String values, which cannot be 0 or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers; for example, `"1"` is equivalent to `1000m`.|
-| Memory limit | required | String values, which cannot be 0 or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.|
-| GPU | optional | Integer values, which can only be specified in the `limits` section. <br>For more information, see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). |
-| nodeSelector | optional | Map of string keys and values. |
+A custom instance type definition must meet the following parameter and definition rules, or its creation will fail:
+| Parameter | Required or optional | Description |
+| | | |
+| `name` | Required | String values, which must be unique in a cluster.|
+| `CPU request` | Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers. For example, `"1"` is equivalent to `1000m`.|
+| `Memory request` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 mebibytes (MiB).|
+| `CPU limit` | Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers. For example, `"1"` is equivalent to `1000m`.|
+| `Memory limit` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 MiB.|
+| `GPU` | Optional | Integer values, which can be specified only in the `limits` section. <br>For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). |
+| `nodeSelector` | Optional | Map of string keys and values. |
It's also possible to create multiple instance types at once:
It's also possible to create multiple instance types at once:
kubectl apply -f my_instance_type_list.yaml ```
-With `my_instance_type_list.yaml`:
+Here are the contents of *my_instance_type_list.yaml*:
+ ```yaml apiVersion: amlarc.azureml.com/v1alpha1 kind: InstanceTypeList
items:
memory: "1Gi" ```
-The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition created when Kubernetes cluster was attached to Azure Machine Learning workspace.
+The preceding example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition that was created when you attached the Kubernetes cluster to the Azure Machine Learning workspace.
-If you submit a training or inference workload without an instance type, it uses the `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with name `defaultinstancetype`. It's automatically recognized as the default.
+If you submit a training or inference workload without an instance type, it uses `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with the name `defaultinstancetype`. It's automatically recognized as the default.
+## Select an instance type to submit a training job
-### Select instance type to submit training job
+### [Azure CLI](#tab/select-instancetype-to-trainingjob-with-cli)
-#### [Azure CLI](#tab/select-instancetype-to-trainingjob-with-cli)
-
-To select an instance type for a training job using CLI (V2), specify its name as part of the
-`resources` properties section in job YAML. For example:
+To select an instance type for a training job by using the Azure CLI (v2), specify its name as part of the
+`resources` properties section in the job YAML. For example:
```yaml command: python -c "print('Hello world!')"
environment:
image: library/python:latest compute: azureml:<Kubernetes-compute_target_name> resources:
- instance_type: <instance_type_name>
+ instance_type: <instance type name>
```
-#### [Python SDK](#tab/select-instancetype-to-trainingjob-with-sdk)
+### [Python SDK](#tab/select-instancetype-to-trainingjob-with-sdk)
-To select an instance type for a training job using SDK (V2), specify its name for `instance_type` property in `command` class. For example:
+To select an instance type for a training job by using the SDK (v2), specify its name for the `instance_type` property in the `command` class. For example:
```python from azure.ai.ml import command
command_job = command(
command="python -c "print('Hello world!')"", environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest", compute="<Kubernetes-compute_target_name>",
- instance_type="<instance_type_name>"
+ instance_type="<instance type name>"
) ```+
-In the above example, replace `<Kubernetes-compute_target_name>` with the name of your Kubernetes compute
-target and replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system uses `defaultinstancetype` to submit the job.
+In the preceding example, replace `<Kubernetes-compute_target_name>` with the name of your Kubernetes compute target. Replace `<instance type name>` with the name of the instance type that you want to select. If you don't specify an `instance_type` property, the system uses `defaultinstancetype` to submit the job.
-### Select instance type to deploy model
+## Select an instance type to deploy a model
-#### [Azure CLI](#tab/select-instancetype-to-modeldeployment-with-cli)
+### [Azure CLI](#tab/select-instancetype-to-modeldeployment-with-cli)
-To select an instance type for a model deployment using CLI (V2), specify its name for the `instance_type` property in the deployment YAML. For example:
+To select an instance type for a model deployment by using the Azure CLI (v2), specify its name for the `instance_type` property in the deployment YAML. For example:
```yaml name: blue
environment:
image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest ```
-#### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk)
+### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk)
-To select an instance type for a model deployment using SDK (V2), specify its name for the `instance_type` property in the `KubernetesOnlineDeployment` class. For example:
+To select an instance type for a model deployment by using the SDK (v2), specify its name for the `instance_type` property in the `KubernetesOnlineDeployment` class. For example:
```python from azure.ai.ml import KubernetesOnlineDeployment,Model,Environment,CodeConfiguration
blue_deployment = KubernetesOnlineDeployment(
instance_type="<instance type name>", ) ```+
-In the above example, replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system uses `defaultinstancetype` to deploy the model.
+In the preceding example, replace `<instance type name>` with the name of the instance type that you want to select. If you don't specify an `instance_type` property, the system uses `defaultinstancetype` to deploy the model.
> [!IMPORTANT]
->
-> For MLFlow model deployment, the resource request require at least 2 CPU and 4 GB memory, otherwise the deployment will fail.
+> For MLflow model deployment, the resource request requires at least 2 CPU cores and 4 GB of memory. Otherwise, the deployment will fail.
+
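A `resources` request that meets this MLflow floor would look like the following sketch:

```yaml
resources:
  requests:
    cpu: "2"
    memory: "4Gi"
```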
+### Resource section validation
-#### Resource section validation
-If you're using the `resource section` to define the resource request and limit of your model deployments, for example:
+You can use the `resources` section to define the resource request and limit of your model deployments. For example:
#### [Azure CLI](#tab/define-resource-to-modeldeployment-with-cli)
resources:
memory: "0.5Gi" instance_type: <instance type name> ```+ #### [Python SDK](#tab/define-resource-to-modeldeployment-with-sdk) ```python
blue_deployment = KubernetesOnlineDeployment(
instance_type="<instance type name>", ) ```+
-If you use the `resource section`, the valid resource definition need to meet the following rules, otherwise the model deployment fails due to the invalid resource definition:
+If you use the `resources` section, a valid resource definition needs to meet the following rules. An invalid resource definition will cause the model deployment to fail.
-| Parameter | If necessary | Description |
+| Parameter | Required or optional | Description |
| | | |
-| `requests:`<br>`cpu:`| Required | String values, which can't be 0 or empty. <br>You can specify the CPU in millicores, for example `100m`, or in full numbers, for example `"1"` is equivalent to `1000m`.|
-| `requests:`<br>`memory:` | Required | String values, which can't be 0 or empty. <br>You can specify the memory as a full number + suffix, for example `1024Mi` for 1024 MiB. <br>Memory can't be less than **1 MBytes**.|
-| `limits:`<br>`cpu:` | Optional <br>(only required when need GPU) | String values, which can't be 0 or empty. <br>You can specify the CPU in millicores, for example `100m`, or in full numbers, for example `"1"` is equivalent to `1000m`. |
-| `limits:`<br>`memory:` | Optional <br>(only required when need GPU) | String values, which can't be 0 or empty. <br>You can specify the memory as a full number + suffix, for example `1024Mi` for 1024 MiB.|
-| `limits:`<br>`nvidia.com/gpu:` | Optional <br>(only required when need GPU) | Integer values, which can't be empty and can only be specified in the `limits` section. <br>For more information, see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). <br>If require CPU only, you can omit the entire `limits` section.|
-
-> [!NOTE]
->
->If the resource section definition is invalid, the deployment will fail.
->
-> The `instance type` is **required** for model deployment. If you have defined the resource section, and it will be validated against the instance type, the rules are as follows:
- > * With a valid resource section definition, the resource limits must be less than instance type limits, otherwise deployment will fail.
- > * If the user does not define instance type, the `defaultinstancetype` will be used to be validated with resource section.
- > * If the user does not define resource section, the instance type will be used to create deployment.
+| `requests:`<br>`cpu:`| Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`.|
+| `requests:`<br>`memory:` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 MiB. <br>Memory can't be less than 1 MB.|
+| `limits:`<br>`cpu:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`. |
+| `limits:`<br>`memory:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 MiB.|
+| `limits:`<br>`nvidia.com/gpu:` | Optional <br>(required only when you need GPU) | Integer values, which can't be empty and can be specified only in the `limits` section. <br>For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). <br>If you require CPU only, you can omit the entire `limits` section.|
+
+The instance type is *required* for model deployment. If you define the `resources` section, it's validated against the instance type. The rules are as follows:
+- With a valid `resources` section definition, the resource limits must be less than the instance type limits. Otherwise, deployment will fail.
+- If you don't define an instance type, the system uses `defaultinstancetype` for validation with the `resources` section.
+- If you don't define the `resources` section, the system uses the instance type to create the deployment.
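As a minimal sketch of a resource definition that satisfies these rules in the SDK (v2), assuming the `ResourceRequirementsSettings` and `ResourceSettings` entities from `azure.ai.ml.entities` (the endpoint name and values are placeholders):

```python
from azure.ai.ml.entities import (
    KubernetesOnlineDeployment,
    ResourceRequirementsSettings,
    ResourceSettings,
)

blue_deployment = KubernetesOnlineDeployment(
    name="blue",
    endpoint_name="<endpoint-name>",  # placeholder
    instance_type="<instance type name>",
    resources=ResourceRequirementsSettings(
        # Requests are required: non-empty string values for CPU and memory.
        requests=ResourceSettings(cpu="100m", memory="0.5Gi"),
        # Limits are optional unless you need GPU, and must stay below the
        # instance type's limits, or the deployment fails validation.
        # The gpu value maps to the nvidia.com/gpu limit.
        limits=ResourceSettings(cpu="200m", memory="1Gi", gpu="1"),
    ),
)
```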
## Next steps

- [Azure Machine Learning inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)
-- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
+- [Secure Azure Kubernetes Service inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
The response should provide an access token good for one hour. Make note of the
To create a registry, use the following command. You can edit the JSON to change the inputs as needed. Replace the `<YOUR-ACCESS-TOKEN>` value with the access token retrieved previously:

```bash
-curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2022-12-01-preview -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d '
+curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2023-04-01 -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d '
{ "properties": {
machine-learning How To Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md
You can optionally choose to load the MLTable object into Pandas, using:
```

#### Save the data loading steps
-Next, save all your data loading steps into an MLTable file. If you save your data loading steps, you can reproduce your Pandas data frame at a later point in time, and you don't need to redefine the data loading steps in your code.
+Next, save all your data loading steps into an MLTable file. Saving your data loading steps in an MLTable file allows you to reproduce your Pandas data frame at a later point in time, without needing to redefine the code each time.
+You can save the MLTable YAML file to cloud storage, or you can save it to a local path.
```python
-# serialize the data loading steps into an MLTable file
-tbl.save("./nyc_taxi")
+# save the data loading steps in an MLTable file to a cloud storage
+# NOTE: the tbl object was defined in the previous snippet.
+tbl.save(save_path_dirc="azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/titanic", collocated=True, show_progress=True, allow_copy_errors=False, overwrite=True)
```
-You can optionally view the contents of the MLTable file, to understand how the data loading steps are serialized into a file:
- ```python
-with open("./nyc_taxi/MLTable", "r") as f:
- print(f.read())
+# save the data loading steps in an MLTable file to local
+# NOTE: the tbl object was defined in the previous snippet.
+tbl.save("./titanic")
```
+> [!IMPORTANT]
+> - If `collocated == True`, the data is copied to the same folder as the MLTable YAML file if the two aren't already collocated, and the MLTable YAML uses relative paths.
+> - If `collocated == False`, the data isn't moved; the MLTable YAML uses absolute paths for cloud data and relative paths for local data.
+> - The following parameter combination isn't supported: data stored locally, `collocated == False`, and `save_path_dirc` set to a cloud directory. Upload your local data to the cloud, and use the cloud data paths for MLTable instead.
+> - The parameters `show_progress` (default `True`), `allow_copy_errors` (default `False`), and `overwrite` (default `True`) are optional.
+>
### Reproduce data loading steps

Now that the data loading steps have been serialized into a file, you can reproduce them at any point in time with the `load()` method. This way, you don't need to redefine your data loading steps in code, and you can more easily share the file.
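For example, here's a minimal sketch that reloads the locally saved *./titanic* MLTable from the earlier snippet (assuming the `mltable` package is installed):

```python
import mltable

# Load the serialized data loading steps from the saved MLTable folder.
tbl = mltable.load("./titanic")

# Materialize the table into a Pandas data frame, as defined at save time.
df = tbl.to_pandas_dataframe()
print(df.head())
```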
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
In this article, you learn how to prepare image data for training computer visio
To generate models for computer vision tasks with automated machine learning, you need to bring labeled image data as input for model training in the form of an `MLTable`. You can create an `MLTable` from labeled training data in JSONL format.
-If your labeled training data is in a different format (like, pascal VOC or COCO), you can use a [conversion script](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model.
+If your labeled training data is in a different format (like Pascal VOC or COCO), you can use a [conversion script](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model.
## Prerequisites
my_data = Data(
Next, you will need to get the label annotations in JSONL format. The schema of labeled data depends on the computer vision task at hand. Refer to [schemas for JSONL files for AutoML computer vision experiments](reference-automl-images-schema.md) to learn more about the required JSONL schema for each task type.
-If your training data is in a different format (like, pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs).
+If your training data is in a different format (like Pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs).
Once you have created a JSONL file by following the above steps, you can register it as a data asset by using the UI. Make sure you select the `stream` type in the schema section, as shown below.
machine-learning How To Registry Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-registry-network-isolation.md
To connect to a registry that's secured behind a VNet, use one of the following
* [Azure Bastion](/azure/bastion/bastion-overview) - In this scenario, you create an Azure Virtual Machine (sometimes called a jump box) inside the VNet. You then connect to the VM using Azure Bastion. Bastion allows you to connect to the VM using either an RDP or SSH session from your local web browser. You then use the jump box as your development environment. Since it is inside the VNet, it can directly access the registry.

### Share assets from workspace to registry
+> [!NOTE]
+> Currently, sharing an asset from a secure workspace to an Azure Machine Learning registry isn't supported if the storage account containing the asset has public access disabled.
Create a private endpoint to the registry, storage, and ACR from the VNet of the workspace. If you're trying to connect to multiple registries, create a private endpoint for each registry and its associated storage account and ACR. For more information, see the [How to create a private endpoint](#how-to-create-a-private-endpoint) section.
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
When prompted, enter the password you used when creating the VM.
# [Generic model](#tab/model)
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet.sh" id="set_env_vars":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet-legacy.sh" id="set_env_vars":::
# [MLflow model](#tab/mlflow)
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet-mlflow.sh" id="set_env_vars":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet-mlflow-legacy.sh" id="set_env_vars":::
When prompted, enter the password you used when creating the VM.
To delete the endpoint, use the following command:

To delete the VM, use the following command:

To delete all the resources created in this article, use the following command. Replace `<resource-group-name>` with the name of the resource group used in this example:
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
ONNX is an open-source format for AI models. ONNX supports interoperability betw
- [.NET Core SDK 3.1 or greater](https://dotnet.microsoft.com/download)
- Text Editor or IDE (such as [Visual Studio](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/Download))
-- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
+- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook (SDK v1)](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
- [Netron](https://github.com/lutzroeder/netron) (optional)

## Create a C# console application
machine-learning Migrate To V2 Assets Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-model.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
ml_client.models.create_or_update(run_model)
```
+For more information about models, see [Work with models in Azure Machine Learning](how-to-manage-models.md).
+
## Mapping of key functionality in SDK v1 and SDK v2

|Functionality in SDK v1|Rough mapping in SDK v2|
machine-learning Migrate To V2 Execution Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-automl.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
## Submit AutoML run
-* SDK v1: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb).
+* SDK v1: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb).
```python
# Imports
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
For more information, see the documentation here:
* [steps in SDK v1](/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py&preserve-view=true)
* [Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2](how-to-create-component-pipeline-python.md)
-* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/using-pipelines/image-classification.ipynb)
+* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/using-pipelines/image-classification.ipynb)
* [OutputDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py&preserve-view=true)
* [`mldesigner`](https://pypi.org/project/mldesigner/)
machine-learning How To Develop A Standard Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-develop-a-standard-flow.md
We also support the input types int, bool, double, list, and object.
:::image type="content" source="./media/how-to-develop-a-standard-flow/flow-input-datatype.png" alt-text="Screenshot of inputs showing the type drop-down menu with string selected. " lightbox = "./media/how-to-develop-a-standard-flow/flow-input-datatype.png":::
-You should first set the input schema (name: url; type: string), then set a value manually or by:
+## Develop the flow using different tools
-1. Inputting data manually in the value field.
-2. Selecting a row of existing dataset in **fill value from data**.
--
-The dataset selection supports search and autosuggestion.
--
-After selecting a row, the url is backfilled to the value field.
-
-If the existing datasets don't meet your needs, upload new data from files. We support **.csv** and **.txt** for now.
--
-## Develop tool in your flow
-
-In one flow, you can consume different kinds of tools. We now support LLM, Python, Serp API, Content Safety and Vector Search.
+In one flow, you can consume different kinds of tools. We now support LLM, Python, Serp API, Content Safety, Vector Search, and more.
### Add tool as your need
First define flow output schema, then select in drop-down the node whose output
## Next steps

-- [Develop a customized evaluation flow](how-to-develop-an-evaluation-flow.md)
+- [Bulk test using more data and evaluate the flow performance](how-to-bulk-test-evaluate-flow.md)
- [Tune prompts using variants](how-to-tune-prompts-using-variants.md)
- [Deploy a flow](how-to-deploy-for-real-time-inference.md)
machine-learning How To Integrate With Langchain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-integrate-with-langchain.md
Prompt Flow can also be used together with the [LangChain](https://python.langch
> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This article covers the following sections:
+* [Benefits of LangChain integration](#benefits-of-langchain-integration)
+* [How to convert LangChain code into flow](#how-to-convert-langchain-code-into-flow)
+ * [Prerequisites for environment and runtime](#prerequisites-for-environment-and-runtime)
+ * [Convert credentials to prompt flow connection](#convert-credentials-to-prompt-flow-connection)
+ * [LangChain code conversion to a runnable flow](#langchain-code-conversion-to-a-runnable-flow)
+
## Benefits of LangChain integration

We consider the integration of LangChain and prompt flow a powerful combination that can help you build and test your custom language models with ease, especially when you want to use LangChain modules to initially build your flow, and then use prompt flow to scale the experiments for bulk testing, evaluating, and eventually deploying.
Then you can create a [Prompt flow runtime](./how-to-create-manage-runtime.md) b
:::image type="content" source="./media/how-to-integrate-with-langchain/runtime-custom-env.png" alt-text="Screenshot of flows on the runtime tab with the add compute instance runtime popup. " lightbox = "./media/how-to-integrate-with-langchain/runtime-custom-env.png":::
-### Convert credentials to custom connection
+### Convert credentials to prompt flow connection
+
+When developing your LangChain code, you might have [defined environment variables to store your credentials, such as the AzureOpenAI API KEY](https://python.langchain.com/docs/integrations/llms/azure_openai_example), which is necessary for invoking the AzureOpenAI model.
-Custom connection helps you to securely store and manage secret keys or other sensitive credentials required for interacting with LLM, rather than exposing them in environment variables hard code in your code and running on the cloud, protecting them from potential security breaches.
-#### Create a custom connection
+Instead of coding the credentials directly and exposing them as environment variables when you run LangChain code in the cloud, we recommend that you convert the credentials from environment variables into a connection in prompt flow. The connection allows you to securely store and manage the credentials separately from your code.
-Create a custom connection that stores all your LLM API KEY or other required credentials.
+#### Create a connection
+
+Create a connection that securely stores your credentials, such as your LLM API KEY or other required credentials.
1. Go to Prompt flow in your workspace, and then go to the **connections** tab.
-2. Select **Create** and select **Custom**.
+2. Select **Create**, and then select a connection type to store your credentials. (This example uses a custom connection.)
:::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-1.png" alt-text="Screenshot of flows on the connections tab highlighting the custom button in the create drop-down menu. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-1.png":::
-1. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
+3. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
:::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-2.png" alt-text="Screenshot of add custom connection point to the add key-value pairs button. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-2.png"::: > [!NOTE]
Create a custom connection that stores all your LLM API KEY or other required cr
Then this custom connection is used to replace the key and credential that you explicitly defined in LangChain code. If you already have a LangChain integration prompt flow, you can jump to [Configure connection, input and output](#configure-connection-input-and-output).

### LangChain code conversion to a runnable flow

All LangChain code can directly run in the Python tools in your flow, as long as your runtime environment contains the dependency packages. You can easily convert your LangChain code into a flow by following the steps below.
-#### Create a flow with Prompt tools and Python tools
+#### Convert LangChain code to flow structure
> [!NOTE] > There are two ways to convert your LangChain code into a flow.
All LangChain code can directly run in the Python tools in your flow as long as
- To simplify the conversion process, you can directly initialize the LLM model for invocation in a Python node by utilizing the LangChain integrated LLM library.
- Another approach is converting your LLM consumption from LangChain code to our LLM tools in the flow, for better experiment management.

For quick conversion of LangChain code into a flow, we recommend two types of flow structures, based on the use case:

| | Types | Description | Case |
|--|--|--|--|
-|**Type A**| A flow that includes both **prompt tools** and **python tools**| You can extract your prompt template from your code into a prompt node, then combine the remaining code in a single Python node or multiple Python tools. | This structure is ideal for who want to easily **tune the prompt** by running flow variants and then choose the optimal one based on evaluation results.|
-|**Type B**| A flow that includes **python tools** only| You can create a new flow with python tools only, all code including prompt definition will run in python tools.| This structure is suitable for who don't need to explicit tune the prompt in workspace, but require faster batch testing based on larger scale datasets. |
+|**Type A**| A flow that includes both **prompt nodes** and **python nodes**| You can extract your prompt template from your code into a prompt node, then combine the remaining code in a single Python node or multiple Python tools. | This structure is ideal for those who want to easily **tune the prompt** by running flow variants, and then choose the optimal one based on evaluation results.|
+|**Type B**| A flow that includes **python nodes** only| You can create a new flow with python nodes only; all code, including prompt definition, will run in python nodes.| This structure is suitable for those who don't need to explicitly tune the prompt in the workspace, but require faster batch testing based on larger-scale datasets. |
For example, the type A flow from the chart looks like this:
To create a flow in Azure Machine Learning, you can go to your workspace, then s
#### Configure connection, input and output
-After you have a properly structured flow and are done moving the code to specific tools, you need to configure the input, output, and connection settings in your flow and code to replace your original definitions.
+After you have a properly structured flow and are done moving the code to specific tool nodes, you need to replace the original environment variables with the corresponding key in the connection, and configure the input and output of the flow.
+
+**Configure connection**
+
+To utilize a connection that replaces the environment variables you originally defined in LangChain code, import the prompt flow connections library `promptflow.connections` in the Python node.
-To utilize a [custom connection](#create-a-custom-connection) that stores all the required keys and credentials, follow these steps:
+For example:
-1. In the python tools, import custom connection library `from promptflow.connections import CustomConnection`, and define an input parameter of type `CustomConnection` in the tool function.
+If you have LangChain code that consumes the AzureOpenAI model, you can replace the environment variables with the corresponding key from the Azure OpenAI connection:
+
+Import the library: `from promptflow.connections import AzureOpenAIConnection`
+++
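As an illustrative sketch, not the article's exact sample, a Python tool node might pass the Azure OpenAI connection's credentials to LangChain like this. The deployment name is a placeholder, and the LangChain import reflects 2023-era package layouts:

```python
# Hypothetical sketch: use an Azure OpenAI connection in a Python tool node
# instead of reading credentials from environment variables.
from promptflow import tool
from promptflow.connections import AzureOpenAIConnection
from langchain.llms import AzureOpenAI

@tool
def ask_llm(question: str, connection: AzureOpenAIConnection) -> str:
    # The connection's fields replace the AzureOpenAI environment variables.
    llm = AzureOpenAI(
        deployment_name="<your-deployment-name>",  # placeholder
        openai_api_key=connection.api_key,
        openai_api_base=connection.api_base,
        openai_api_version=connection.api_version,
    )
    return llm(question)
```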
+For a custom connection, follow these steps:
+
+1. Import library `from promptflow.connections import CustomConnection`, and define an input parameter of type `CustomConnection` in the tool function.
:::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-python-node-1.png" alt-text="Screenshot of doc search chain node highlighting the custom connection. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-python-node-1.png"::: 1. Parse the input to the input section, then select your target custom connection in the value dropdown. :::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-python-node-2.png" alt-text="Screenshot of the chain node highlighting the connection. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-python-node-2.png"::: 1. Replace the environment variables that originally defined the key and credential with the corresponding key added in the connection. 1. Save and return to authoring page, and configure the connection parameter in the node input.
+**Configure input and output**
+
Before running the flow, configure the **node input and output**, as well as the overall **flow input and output**. This step is crucial to ensure that all the required data is properly passed through the flow and that the desired results are obtained.

## Next steps
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Title: "Create workspace resources"
+ Title: "Tutorial: Create workspace resources"
description: Create an Azure Machine Learning workspace and cloud resources that can be used to train machine learning models.
Previously updated : 03/15/2023 Last updated : 08/17/2023 adobe-target: true
+content_well_notification:
+ - AI-contribution
#Customer intent: As a data scientist, I want to create a workspace so that I can start to use Azure Machine Learning.
-# Create resources you need to get started
+# Tutorial: Create resources you need to get started
-In this article, you'll create the resources you need to start working with Azure Machine Learning.
+In this tutorial, you will create the resources you need to start working with Azure Machine Learning.
-* A *workspace*. To use Azure Machine Learning, you'll first need a workspace. The workspace is the central place to view and manage all the artifacts and resources you create.
-* A *compute instance*. A compute instance is a pre-configured cloud-computing resource that you can use to train, automate, manage, and track machine learning models. A compute instance is the quickest way to start using the Azure Machine Learning SDKs and CLIs. You'll use it to run Jupyter notebooks and Python scripts in the rest of the tutorials.
+> [!div class="checklist"]
+>* A *workspace*. To use Azure Machine Learning, you'll first need a workspace. The workspace is the central place to view and manage all the artifacts and resources you create.
+>* A *compute instance*. A compute instance is a pre-configured cloud-computing resource that you can use to train, automate, manage, and track machine learning models. A compute instance is the quickest way to start using the Azure Machine Learning SDKs and CLIs. You'll use it to run Jupyter notebooks and Python scripts in the rest of the tutorials.
+
+This video shows you how to create a workspace and compute instance. The steps are also described in the sections below.
+> [!VIDEO https://learn-video.azurefd.net/vod/player?id=a0e901d2-e82a-4e96-9c7f-3b5467859969]
## Prerequisites
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
In instance segmentation, output consists of multiple boxes with their scaled to
> These settings are currently in public preview. They are provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > [!WARNING]
-> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. While generating explanations on online endpoint, if you encounter timeout issues, use [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring) to generate explanations.
+> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. While generating explanations on an online endpoint, if you encounter timeout issues, use the [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring) to generate explanations.
In this section, we document the input data format required to make predictions and generate explanations for the predicted class/classes using a deployed model. There's no separate deployment needed for explainability. The same endpoint for online scoring can be utilized to generate explanations. We just need to pass some extra explainability related parameters in input schema and get either visualizations of explanations and/or attribution score matrices (pixel level explanations).
If `model_explainability`, `visualizations`, `attributions` are set to `True` in
> [!WARNING]
-> While generating explanations on online endpoint, make sure to select only few classes based on confidence score in order to avoid timeout issues on the endpoint or use the endpoint with GPU instance type. To generate explanations for large number of classes in multi-label classification, refer to [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring).
+> While generating explanations on an online endpoint, select only a few classes based on confidence score, to avoid timeout issues on the endpoint, or use an endpoint with a GPU instance type. To generate explanations for a large number of classes in multi-label classification, refer to the [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring).
```json [
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
Also try automated machine learning for these other model types:
* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
-* Download the [bike-no.csv](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv) data file
+* Download the [bike-no.csv](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv) data file
## Sign in to the studio
Before you configure your experiment, upload your data file to your workspace in
1. Select **Upload files** from the **Upload** drop-down.
- 1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv).
+ 1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv).
1. Select **Next**
machine-learning Tutorial Enable Materialization Backfill Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-materialization-backfill-data.md
Title: "Tutorial #2: enable materialization and backfill feature data (preview)"-
-description: Managed Feature Store tutorial part 2.
+ Title: "Tutorial 2: Enable materialization and backfill feature data (preview)"
+
+description: This is part 2 of a tutorial series on managed feature store.
#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
-# Tutorial #2: Enable materialization and backfill feature data (preview)
+# Tutorial 2: Enable materialization and backfill feature data (preview)
-This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization.
-Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. This tutorial describes materialization, which computes the feature values for a given feature window, and then stores those values in a materialization store. All feature queries can then use the values from the materialization store. A feature set query applies the transformations to the source on the fly, to compute the features before it returns the values. This works well for the prototyping phase. However, for training and inference operations in a production environment, it's recommended that you materialize the features, for greater reliability and availability.
+This tutorial is the second part of a four-part series. The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data. This tutorial describes materialization.
-This tutorial is part two of a four part series. In this tutorial, you'll learn how to:
+Materialization computes the feature values for a feature window and then stores those values in a materialization store. All feature queries can then use the values from the materialization store.
+
+Without materialization, a feature set query applies the transformations to the source on the fly, to compute the features before it returns the values. This process works well for the prototyping phase. However, for training and inference operations in a production environment, we recommend that you materialize the features for greater reliability and availability.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Enable offline store on the feature store by creating and attaching an Azure Data Lake Storage Gen2 container and a user assigned managed identity
-> * Enable offline materialization on the feature sets, and backfill the feature data
+> * Enable an offline store on the feature store by creating and attaching an Azure Data Lake Storage Gen2 container and a user-assigned managed identity (UAI).
+> * Enable offline materialization on the feature sets, and backfill the feature data.
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
-Before you proceed with this article, make sure you cover these prerequisites:
-
-* Complete the part 1 tutorial, to create the required feature store, account entity and transaction feature set
-* An Azure Resource group, where you (or the service principal you use) have `User Access Administrator`and `Contributor` roles.
+Before you proceed with this tutorial, be sure to cover these prerequisites:
-To proceed with this article, your user account needs the owner role or contributor role for the resource group that holds the created feature store.
+* Completion of [Tutorial 1: Develop and register a feature set with managed feature store](tutorial-get-started-with-feature-store.md), to create the required feature store, account entity, and `transactions` feature set.
+* An Azure resource group, where you (or the service principal that you use) have User Access Administrator and Contributor roles.
+* On your user account, the Owner or Contributor role for the resource group that holds the created feature store.
## Set up

This list summarizes the required setup steps:
-1. In your project workspace, create an Azure Machine Learning compute resource, to run the training pipeline
-1. In your feature store workspace, create an offline materialization store: create an Azure gen2 storage account and a container inside it, and attach it to the feature store. Optional: you can use an existing storage container
-1. Create and assign a user-assigned managed identity to the feature store. Optionally, you can use an existing managed identity. The system managed materialization jobs - in other words, the recurrent jobs - use the managed identity. Part 3 of the tutorial relies on this
-1. Grant required role-based authentication control (RBAC) permissions to the user-assigned managed identity
-1. Grant required role-based authentication control (RBAC) to your Azure AD identity. Users, including yourself, need read access to the sources and the materialization store
+1. In your project workspace, create an Azure Machine Learning compute resource to run the training pipeline.
+1. In your feature store workspace, create an offline materialization store. Create an Azure Data Lake Storage Gen2 account and a container inside it, and attach it to the feature store. Optionally, you can use an existing storage container.
+1. Create and assign a UAI to the feature store. Optionally, you can use an existing managed identity. The system-managed materialization jobs - in other words, the recurrent jobs - use the managed identity. The third tutorial in the series relies on it.
+1. Grant required role-based access control (RBAC) permissions to the UAI.
+1. Grant required RBAC permissions to your Azure Active Directory (Azure AD) identity. Users, including you, need read access to the sources and the materialization store.
-### Configure the Azure Machine Learning spark notebook
+### Configure the Azure Machine Learning Spark notebook
-1. Running the tutorial:
+You can create a new notebook and execute the instructions in this tutorial step by step. You can also open the existing notebook named *2. Enable materialization and backfill feature data.ipynb* from the *featurestore_sample/notebooks* directory, and then run it. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation.
- You can create a new notebook, and execute the instructions in this document, step by step. You can also open the existing notebook named `2. Enable materialization and backfill feature data.ipynb`, and then run it. You can find the notebooks in the `featurestore_sample/notebooks directory`. You can select from `sdk_only`, or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation.
-
-1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav.
+1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
1. Configure the session:
- * Select "configure session" in the bottom nav
- * Select **upload conda file**
- * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development)
- * Increase the session time-out (idle time) to avoid frequent prerequisite reruns
+ 1. On the toolbar, select **Configure session**.
+ 1. On the **Python packages** tab, select **Upload Conda file**.
+ 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
+ 1. Increase the session time-out (idle time) to avoid frequent prerequisite reruns.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=start-spark-session)]
This list summarizes the required setup steps:
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=root-dir)]
- 1. Set up the CLI
+1. Set up the CLI.
+
+ # [Python SDK](#tab/python)
+
+ Not applicable.
+
+ # [Azure CLI](#tab/cli)
+
+ 1. Install the Azure Machine Learning extension.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)]
+
+ 1. Authenticate.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=auth-cli)]
+
+ 1. Set the default subscription.
+
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)]
- # [Python SDK](#tab/python)
-
- Not applicable
-
- # [Azure CLI](#tab/cli)
-
- 1. Install the Azure Machine Learning extension
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)]
-
- 1. Authentication
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=auth-cli)]
-
- 1. Set the default subscription
-
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)]
-
-1. Initialize the project workspace properties
+1. Initialize the project workspace properties.
    This is the current workspace. You'll run the tutorial notebook from this workspace.

    [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-ws-crud-client)]
-1. Initialize the feature store properties
+1. Initialize the feature store properties.
- Make sure that you update the `featurestore_name` and `featurestore_location` values shown, to reflect what you created in part 1 of this tutorial.
+ Be sure to update the `featurestore_name` and `featurestore_location` values to reflect what you created in the first tutorial.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-crud-client)]
-1. Initialize the feature store core SDK client
+1. Initialize the feature store core SDK client.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-core-sdk)]
-1. Set up the offline materialization store
+1. Set up the offline materialization store.
- You can create a new gen2 storage account and a container. You can also reuse an existing gen2 storage account and container as the offline materialization store for the feature store.
+ You can create a new storage account and a container. You can also reuse an existing storage account and container as the offline materialization store for the feature store.
# [Python SDK](#tab/python)
This list summarizes the required setup steps:
# [Azure CLI](#tab/cli)
- Not applicable
+ Not applicable.
-## Set values for the Azure Data Lake Storage Gen2 storage
+## Set values for Azure Data Lake Storage Gen2 storage
- The materialization store uses these values. You can optionally override the default settings.
+The materialization store uses these values. You can optionally override the default settings.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)]
-1. Storage containers
+1. Create storage containers.
- Option 1: create new storage and container resources
+ The first option is to create new storage and container resources.
# [Python SDK](#tab/python)
This list summarizes the required setup steps:
- Option 2: reuse an existing storage container
+ The second option is to reuse an existing storage container.
# [Python SDK](#tab/python)
-
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
-
+ # [Azure CLI](#tab/cli)
-
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]
-
+
-1. Set up user assigned managed identity (UAI)
+1. Set up a UAI.
- The system-managed materialization jobs will use the UAI. For example, the recurrent job in part 3 of this tutorial uses this UAI.
+ The system-managed materialization jobs will use the UAI. For example, the recurrent job in the third tutorial uses this UAI.
### Set the UAI values
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-uai-params)]
-
-### User assigned managed identity (option 1)
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-uai-params)]
- Create a new one
+### Set up a UAI
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-uai)]
+The first option is to create a new managed identity.
-### User assigned managed identity (option 2)
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-uai)]
- Reuse an existing managed identity
+The second option is to reuse an existing managed identity.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-uai)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-uai)]
### Retrieve UAI properties
- Run this code sample in the SDK to retrieve the UAI properties:
+Run this code sample in the SDK to retrieve the UAI properties.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)]
-
+
-## Grant RBAC permission to the user assigned managed identity (UAI)
+## Grant RBAC permission to the UAI
- This UAI is assigned to the feature store shortly. It requires these permissions:
+This UAI is assigned to the feature store shortly. It requires these permissions:
- | **Scope** | **Action/Role** |
- ||--|
- | Feature Store | Azure Machine Learning Data Scientist role |
- | Storage account of feature store offline store | Blob storage data contributor role |
- | Storage accounts of source data | Blob storage data reader role |
+| Scope | Role |
+||--|
+| Feature store | Azure Machine Learning Data Scientist role |
+| Storage account of the offline store on the feature store | Storage Blob Data Contributor role |
+| Storage accounts of the source data | Storage Blob Data Reader role |
- The next CLI commands will assign the first two roles to the UAI. In this example, "Storage accounts of source data" doesn't apply because we read the sample data from a public access blob storage. To use your own data sources, you must assign the required roles to the UAI. To learn more about access control, see the [access control document]() in the documentation resources.
+The next CLI commands assign the first two roles to the UAI. In this example, the "storage accounts of the source data" scope doesn't apply because you read the sample data from a public access blob storage. To use your own data sources, you must assign the required roles to the UAI. To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
- # [Python SDK](#tab/python)
+# [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)]
- # [Azure CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)]
-
+
-### Grant the blob data reader role access to your user account in the offline store
+### Grant the Storage Blob Data Reader role access to your user account in the offline store
- If the feature data is materialized, you need this role to read feature data from the offline materialization store.
+If the feature data is materialized, you need the Storage Blob Data Reader role to read feature data from the offline materialization store.
- Obtain your Azure AD object ID value from the Azure portal as described [here](/partner-center/find-ids-and-domain-names#find-the-user-object-id).
+Obtain your Azure AD object ID value from the Azure portal, as described in [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id).
- To learn more about access control, see the [access control document](./how-to-setup-access-control-feature-store.md).
+To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)]
- The following steps grant the blob data reader role access to your user account.
+The following steps grant the Storage Blob Data Reader role access to your user account:
- 1. Attach the offline materialization store and UAI, to enable the offline store on the feature store
+1. Attach the offline materialization store and UAI, to enable the offline store on the feature store.
# [Python SDK](#tab/python)
This list summarizes the required setup steps:
# [Azure CLI](#tab/cli)
- Action: inspect file `xxxx`. This command attaches the offline store and the UAI, to update the feature store.
+ Inspect file `xxxx`. This command attaches the offline store and the UAI, to update the feature store.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=dump_featurestore_yaml)]
This list summarizes the required setup steps:
- 2. Enable offline materialization on the transactions feature set
+2. Enable offline materialization on the `transactions` feature set.
- Once materialization is enabled on a feature set, you can perform a backfill, as explained in this tutorial. You can also schedule recurrent materialization jobs. See [part 3](./tutorial-experiment-train-models-using-features.md) of this tutorial series for more information.
+ After you enable materialization on a feature set, you can perform a backfill, as explained in this tutorial. You can also schedule recurrent materialization jobs. For more information, see [the third tutorial in the series](./tutorial-experiment-train-models-using-features.md).
- # [Python SDK](#tab/python)
+ # [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)]
-
+
- Optional: you can save the feature set asset as a YAML resource
+ Optionally, you can save the feature set asset as a YAML resource.
- # [Python SDK](#tab/python)
+ # [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)]
- # [Azure CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- Not applicable
+ Not applicable.
-
+
- 3. Backfill data for the transactions feature set
+3. Backfill data for the `transactions` feature set.
- As explained earlier in this tutorial, materialization computes the feature values for a given feature window, and stores these computed values in a materialization store. Feature materialization increases the reliability and availability of the computed values. All feature queries now use the values from the materialization store. This step performs a one-time backfill, for a feature window of three months.
+ As explained earlier in this tutorial, materialization computes the feature values for a feature window, and it stores these computed values in a materialization store. Feature materialization increases the reliability and availability of the computed values. All feature queries now use the values from the materialization store. This step performs a one-time backfill for a feature window of three months.
> [!NOTE]
- > You might need to determine a backfill data window. The window must match the window of your training data. For example, to use two years of data for training, you need to retrieve features for the same window. This means you should backfill for a two year window.
+ > You might need to determine a backfill data window. The window must match the window of your training data. For example, to use two years of data for training, you need to retrieve features for the same window. This means you should backfill for a two-year window.
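
For orientation, the backfill request in the next cell has roughly this shape. This is a minimal sketch, assuming the `fs_client` MLClient from the setup steps and an illustrative three-month window:

```python
from datetime import datetime

# One-time backfill over a fixed feature window; adjust the dates to your data.
window_start = datetime(2023, 1, 1)
window_end = datetime(2023, 4, 1)

poller = fs_client.feature_sets.begin_backfill(
    name="transactions",
    version="1",
    feature_window_start_time=window_start,
    feature_window_end_time=window_end,
)
# The poller resolves when the backfill job has been submitted.
print(poller.result())
```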
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=backfill-txns-fset)]
- We'll print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. The `get_offline_features()` method retrieved the training and inference data, and it also uses the materialization store by default.
+ Next, print sample data from the feature set. The output shows that the data was retrieved from the materialization store. The `get_offline_features()` method, which retrieves the training and inference data, also uses the materialization store by default.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=sample-txns-fset-data)]
-## Cleanup
+## Clean up
-The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources
+The [fourth tutorial in the series](./tutorial-enable-recurrent-materialization-run-batch-inference.md#clean-up) describes how to delete the resources.
## Next steps
-* [Part 3: tutorial features and the machine learning lifecycle](./tutorial-experiment-train-models-using-features.md)
-* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md)
-* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Go to the next tutorial in the series: [Experiment and train models by using features](./tutorial-experiment-train-models-using-features.md).
+* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md).
+* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md).
+* View the [YAML reference](./reference-yaml-overview.md).
machine-learning Tutorial Enable Recurrent Materialization Run Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md
Title: "Tutorial #4: enable recurrent materialization and run batch inference (preview)"-
-description: Managed Feature Store tutorial part 4
+ Title: "Tutorial 4: Enable recurrent materialization and run batch inference (preview)"
+
+description: This is part 4 of a tutorial series on managed feature store.
#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
-# Tutorial #4: Enable recurrent materialization and run batch inference (preview)
+# Tutorial 4: Enable recurrent materialization and run batch inference (preview)
-This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization.
-Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. Part 2 of the tutorial showed how to enable materialization and perform a backfill. Part 3 of this tutorial showed how to experiment with features, as a way to improve model performance. Part 3 also showed how a feature store increases agility in the experimentation and training flows. Tutorial 4 explains how to
+The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data. The second tutorial showed how to enable materialization and perform a backfill. The third tutorial showed how to experiment with features as a way to improve model performance. It also showed how a feature store increases agility in the experimentation and training flows.
+
+This tutorial explains how to:
> [!div class="checklist"]
-> * Run batch inference for the registered model
-> * Enable recurrent materialization for the `transactions` feature set
-> * Run a batch inference pipeline on the registered model
+> * Run batch inference for the registered model.
+> * Enable recurrent materialization for the `transactions` feature set.
+> * Run a batch inference pipeline on the registered model.
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
-Before you proceed with this article, make sure you complete parts 1, 2, and 3 of this tutorial series.
+Before you proceed with the following procedures, be sure to complete the first, second, and third tutorials in the series.
## Set up
-### Configure the Azure Machine Learning spark notebook
-
- 1. In the "Compute" dropdown in the top nav, select "Configure session"
-
- To run this tutorial, you can create a new notebook, and execute the instructions in this document, step by step. You can also open and run the existing notebook named `4. Enable recurrent materialization and run batch inference`. You can find that notebook, and all the notebooks in this series, at the `featurestore_sample/notebooks directory`. You can select from `sdk_only`, or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation.
+1. Configure the Azure Machine Learning Spark notebook.
- 1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav.
+ To run this tutorial, you can create a new notebook and execute the instructions step by step. You can also open and run the existing notebook named *4. Enable recurrent materialization and run batch inference*. You can find that notebook, and all the notebooks in this series, in the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation.
- 1. Configure session:
+ 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
- * Select "configure session" in the bottom nav
- * Select **upload conda file**
- * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development)
- * (Optional) Increase the session time-out (idle time) to avoid frequent prerequisite reruns
+ 1. Configure the session:
+
+ 1. When the toolbar displays **Configure session**, select it.
+ 1. On the **Python packages** tab, select **Upload conda file**.
+ 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
+ 1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
-### Start the spark session
+ 1. Start the Spark session.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=start-spark-session)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=start-spark-session)]
-### Set up the root directory for the samples
+ 1. Set up the root directory for the samples.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=root-dir)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=root-dir)]
- ### [Python SDK](#tab/python)
+ ### [Python SDK](#tab/python)
- Not applicable
+ Not applicable.
- ### [Azure CLI](#tab/cli)
+ ### [Azure CLI](#tab/cli)
- **Set up the CLI**
+ Set up the CLI:
- 1. Install the Azure Machine Learning extension
+ 1. Install the Azure Machine Learning extension.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)]
- 1. Authentication
+ 1. Authenticate.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)]
- 1. Set the default subscription
+ 1. Set the default subscription.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)]
-
+
-1. Initialize the project workspace CRUD client
+1. Initialize the project workspace CRUD (create, read, update, and delete) client.
- The tutorial notebook runs from this current workspace
+ The tutorial notebook runs from this current workspace.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-ws-crud-client)]
-1. Initialize the feature store variables
+1. Initialize the feature store variables.
- Make sure that you update the `featurestore_name` value, to reflect what you created in part 1 of this tutorial.
+ Be sure to update the `featurestore_name` value to reflect what you created in the first tutorial.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-crud-client)]
-1. Initialize the feature store SDK client
+1. Initialize the feature store SDK client.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-core-sdk)]
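
For orientation, initializing these clients typically looks like the following minimal sketch. The subscription, resource group, and feature store names are placeholders:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.identity import AzureMLOnBehalfOfCredential
from azureml.featurestore import FeatureStoreClient

featurestore_subscription_id = "<SUBSCRIPTION_ID>"     # placeholder
featurestore_resource_group_name = "<RESOURCE_GROUP>"  # placeholder
featurestore_name = "<FEATURE_STORE_NAME>"             # placeholder

# CRUD client scoped to the feature store (a special kind of workspace).
fs_client = MLClient(
    AzureMLOnBehalfOfCredential(),
    subscription_id=featurestore_subscription_id,
    resource_group_name=featurestore_resource_group_name,
    workspace_name=featurestore_name,
)

# SDK client used for feature set development and feature retrieval.
featurestore = FeatureStoreClient(
    credential=AzureMLOnBehalfOfCredential(),
    subscription_id=featurestore_subscription_id,
    resource_group_name=featurestore_resource_group_name,
    name=featurestore_name,
)
```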
-## Enable recurrent materialization on the `transactions` feature set
+## Enable recurrent materialization on the transactions feature set
-We enabled materialization in tutorial part 2, and we also performed backfill on the transactions feature set. Backfill is an on-demand, one-time operation that computes and places feature values in the materialization store. However, to handle inference of the model in production, you might want to set up recurrent materialization jobs to keep the materialization store up-to-date. These jobs run on user-defined schedules. The recurrent job schedule works this way:
+In the second tutorial, you enabled materialization and performed backfill on the `transactions` feature set. Backfill is an on-demand, one-time operation that computes and places feature values in the materialization store.
-* Interval and frequency values define a window. For example, values of
+To handle inference of the model in production, you might want to set up recurrent materialization jobs to keep the materialization store up to date. These jobs run on user-defined schedules. The recurrent job schedule works this way:
- * interval = 3
- * frequency = Hour
+* Interval and frequency values define a window. For example, the following values define a three-hour window:
- define a three-hour window.
+ * `interval` = `3`
+ * `frequency` = `Hour`
-* The first window starts at the start_time defined in the RecurrenceTrigger, and so on.
+* The first window starts at the `start_time` value defined in `RecurrenceTrigger`, and so on.
* The first recurrent job is submitted at the start of the next window after the update time.
-* Later recurrent jobs will be submitted at every window after the first job.
+* Later recurrent jobs are submitted at every window after the first job.
-As explained in earlier parts of this tutorial, once data is materialized (backfill / recurrent materialization), feature retrieval uses the materialized data by default.
+As explained in earlier tutorials, after data is materialized (backfill or recurrent materialization), feature retrieval uses the materialized data by default.
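
The notebook cell that follows configures this schedule. In outline, and assuming the `fs_client` MLClient from the setup steps, the three-hour example above looks like this sketch:

```python
from datetime import datetime

from azure.ai.ml.entities import RecurrenceTrigger

transactions_fset = fs_client.feature_sets.get(name="transactions", version="1")

# A three-hour window: interval = 3, frequency = "Hour".
transactions_fset.materialization_settings.schedule = RecurrenceTrigger(
    interval=3,
    frequency="Hour",
    start_time=datetime(2023, 4, 15, 0, 0, 0),  # the first window starts here
)

poller = fs_client.feature_sets.begin_create_or_update(transactions_fset)
print(poller.result())
```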
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=enable-recurrent-mat-txns-fset)]
-## (Optional) Save the feature set asset yaml file
+## (Optional) Save the YAML file for the feature set asset
- We use the updated settings to save the yaml file
+You use the updated settings to save the YAML file.
- ### [Python SDK](#tab/python)
+### [Python SDK](#tab/python)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=dump-txn-fset-with-mat-yaml)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=dump-txn-fset-with-mat-yaml)]
- ### [Azure CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
- Not applicable
+Not applicable.
-
++
+## Run the batch inference pipeline
-## Run the batch-inference pipeline
+The batch inference has these steps:
- The batch-inference has these steps:
+1. You use the same built-in feature retrieval component that you used in the training pipeline (covered in the third tutorial). For pipeline training, you provided a feature retrieval specification as a component input. For batch inference, you pass the registered model as the input. The component looks for the feature retrieval specification in the model artifact.
- 1. Feature retrieval: this uses the same built-in feature retrieval component used in the training pipeline, covered in tutorial part 3. For pipeline training, we provided a feature retrieval spec as a component input. However, for batch inference, we pass the registered model as the input, and the component looks for the feature retrieval spec in the model artifact.
-
- Additionally, for training, the observation data had the target variable. However, the batch inference observation data doesn't have the target variable. The feature retrieval step joins the observation data with the features, and outputs the data for batch inference.
+ Additionally, for training, the observation data had the target variable. However, the batch inference observation data doesn't have the target variable. The feature retrieval step joins the observation data with the features and outputs the data for batch inference.
- 1. Batch inference: This step uses the batch inference input data from previous step, runs inference on the model, and appends the predicted value as output.
+1. The pipeline uses the batch inference input data from the previous step, runs inference on the model, and appends the predicted value as output.
> [!NOTE]
- > We use a job for batch inference in this example. You can also use Azure ML's batch endpoints.
+ > You use a job for batch inference in this example. You can also use batch endpoints in Azure Machine Learning.
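
Submitting the pipeline from its YAML definition follows the standard Azure Machine Learning job pattern. A minimal sketch, assuming the `ws_client` MLClient for the project workspace and an illustrative path to the pipeline file:

```python
from azure.ai.ml import load_job

# Load the batch inference pipeline definition; the path is illustrative.
batch_inference_job = load_job(
    source="project/fraud_model/pipelines/batch_inference_pipeline.yaml"
)

# Submit the job to the project workspace and stream its logs until it finishes.
submitted_job = ws_client.jobs.create_or_update(batch_inference_job)
ws_client.jobs.stream(submitted_job.name)
```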
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=run-batch-inf-pipeline)]
- ### Inspect the batch inference output data
+### Inspect the output data for batch inference
+
+In the pipeline view:
- In the pipeline view
- 1. Select `inference_step` in the `outputs` card
- 1. Copy the Data field value. It looks something like `azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction:1`
- 1. Paste the Data field value in the following cell, with separate name and version values (note that the last character is the version, preceded by a `:`).
- 1. Note the `predict_is_fraud` column that the batch inference pipeline generated
+1. Select `inference_step` in the `outputs` card.
+1. Copy the `Data` field value. It looks something like `azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction:1`.
+1. Paste the `Data` field value in the following cell, with separate name and version values. The last character is the version, preceded by a colon (`:`).
+1. Note the `predict_is_fraud` column that the batch inference pipeline generated.
- Explanation: In the batch inference pipeline (`/project/fraud_mode/pipelines/batch_inference_pipeline.yaml`) outputs, since we didn't provide `name` or `version` values in the `outputs` of the `inference_step`, the system created an untracked data asset with a guid as the name value, and 1 as the version value. In this cell, we derive and then display the data path from the asset:
+ In the batch inference pipeline (*/project/fraud_model/pipelines/batch_inference_pipeline.yaml*) outputs, because you didn't provide `name` or `version` values for `outputs` of `inference_step`, the system created an untracked data asset with a GUID as the name value and `1` as the version value. In this cell, you derive and then display the data path from the asset.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=inspect-batch-inf-output-data)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=inspect-batch-inf-output-data)]
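
That derivation amounts to resolving the data asset by name and version, and then reading the output with Spark. A minimal sketch, assuming the `ws_client` MLClient, the notebook's Spark session, and the name and version values pasted earlier; the `data/*.parquet` subpath is an assumption about the output layout:

```python
# Resolve the untracked output data asset that the pipeline created.
inf_data_output = ws_client.data.get(
    name="azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction",
    version="1",
)

# Read the output and check the generated prediction column.
inf_df = spark.read.parquet(inf_data_output.path + "data/*.parquet")
inf_df.select("predict_is_fraud").show(5)
```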
-## Cleanup
+## Clean up
-If you created a resource group for the tutorial, you can delete the resource group, to delete all the resources associated with this tutorial. Otherwise, you can delete the resources individually:
+If you created a resource group for the tutorial, you can delete the resource group to delete all the resources associated with this tutorial. Otherwise, you can delete the resources individually:
-1. To delete the feature store, go to the resource group in the Azure portal, select the feature store, and delete it
-1. Follow [these instructions](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to delete the user-assigned managed identity
-1. To delete the offline store (storage account), go to the resource group in the Azure portal, select the storage you created, and delete it
+- To delete the feature store, go to the resource group in the Azure portal, select the feature store, and delete it.
+- To delete the user-assigned managed identity, follow [these instructions](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+- To delete the offline store (storage account), go to the resource group in the Azure portal, select the storage that you created, and delete it.
## Next steps
-* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md)
-* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md)
-* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md).
+* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md).
+* View the [YAML reference](./reference-yaml-overview.md).
machine-learning Tutorial Experiment Train Models Using Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-experiment-train-models-using-features.md
Title: "Tutorial #3: experiment and train models using features (preview)"-
-description: Managed Feature Store tutorial part 3.
+ Title: "Tutorial 3: Experiment and train models by using features (preview)"
+
+description: This is part 3 of a tutorial series on managed feature store.
#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
-# Tutorial #3: Experiment and train models using features (preview)
+# Tutorial 3: Experiment and train models by using features (preview)
-This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization.
-Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. Part 2 of the tutorial showed how to enable materialization and perform a backfill. Tutorial 3 shows how to experiment with features, as a way to improve model performance. This tutorial also shows how a feature store increases agility in the experimentation and training flows. It shows how to:
+The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data. The second tutorial showed how to enable materialization and perform a backfill.
+
+This tutorial shows how to experiment with features as a way to improve model performance. It also shows how a feature store increases agility in the experimentation and training flows.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Prototype a new `accounts` feature set spec, using existing precomputed values as features. Then, register the local feature set spec as a feature set in the feature store. This differs from tutorial part 1, where we created a feature set that had custom transformations
-> * Select features for the model from the `transactions` and `accounts` feature sets, and save them as a feature-retrieval spec
-> * Run a training pipeline that uses the feature retrieval spec to train a new model. This pipeline uses the built-in feature-retrieval component, to generate the training data
+> * Prototype a new `accounts` feature set specification, by using existing precomputed values as features. Then, register the local feature set specification as a feature set in the feature store. This process differs from the first tutorial, where you created a feature set that had custom transformations.
+> * Select features for the model from the `transactions` and `accounts` feature sets, and save them as a feature retrieval specification.
+> * Run a training pipeline that uses the feature retrieval specification to train a new model. This pipeline uses the built-in feature retrieval component to generate the training data.
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
-Before you proceed with this article, make sure you complete parts 1 and 2 of this tutorial series.
+Before you proceed with the following procedures, be sure to complete the first and second tutorials in the series.
## Set up
-1. Configure the Azure Machine Learning spark notebook
+1. Configure the Azure Machine Learning Spark notebook.
- 1. Running the tutorial: You can create a new notebook, and execute the instructions in this document step by step. You can also open and run existing notebook `3. Experiment and train models using features.ipynb`. You can find the notebooks in the `featurestore_sample/notebooks directory`. You can select from `sdk_only`, or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation.
+ You can create a new notebook and execute the instructions in this tutorial step by step. You can also open and run the existing notebook named *3. Experiment and train models using features.ipynb* from the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation.
- 1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav. Wait for a status bar in the top to display "configure session".
+ 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**.
1. Configure the session:
- * Select "configure session" in the bottom nav
- * Select **upload conda file**
- * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development)
- * (Optional) Increase the session time-out (idle time) to avoid frequent prerequisite reruns
+ 1. When the toolbar displays **Configure session**, select it.
+ 1. On the **Python packages** tab, select **Upload conda file**.
+ 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment).
+ 1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns.
- 1. Start the spark session
+ 1. Start the Spark session.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=start-spark-session)]
- 1. Set up the root directory for the samples
+ 1. Set up the root directory for the samples.
 [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=root-dir)]

 ### [Python SDK](#tab/python)
-
- Not applicable
-
+
+ Not applicable.
+ ### [Azure CLI](#tab/cli)
-
- Set up the CLI
-
- 1. Install the Azure Machine Learning extension
-
+
+ Set up the CLI:
+
+ 1. Install the Azure Machine Learning extension.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=install-ml-ext-cli)]
-
- 1. Authentication
-
+
+ 1. Authenticate.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=auth-cli)]
-
- 1. Set the default subscription
-
+
+ 1. Set the default subscription.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=set-default-subs-cli)]
-
+
-1. Initialize the project workspace variables
+1. Initialize the project workspace variables.
 This is the current workspace, and the tutorial notebook runs in this resource.

 [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-ws-crud-client)]
-1. Initialize the feature store variables
+1. Initialize the feature store variables.
- Make sure that you update the `featurestore_name` and `featurestore_location` values shown, to reflect what you created in part 1 of this tutorial.
+ Be sure to update the `featurestore_name` and `featurestore_location` values to reflect what you created in the first tutorial.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-crud-client)]
-1. Initialize the feature store consumption client
+1. Initialize the feature store consumption client.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-core-sdk)]
-1. Create a compute cluster
+1. Create a compute cluster named `cpu-cluster` in the project workspace.
- We'll create a compute cluster named `cpu-cluster` in the project workspace. We need this compute cluster when we run the training / batch inference jobs.
+ You'll need this compute cluster when you run the training/batch inference jobs.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=create-compute-cluster)]
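
Creating the cluster with the SDK looks roughly like this minimal sketch, assuming the `ws_client` MLClient and an illustrative VM size:

```python
from azure.ai.ml.entities import AmlCompute

cluster = AmlCompute(
    name="cpu-cluster",
    size="STANDARD_DS3_V2",  # illustrative VM size
    min_instances=0,         # scale to zero when idle
    max_instances=4,
)
ws_client.compute.begin_create_or_update(cluster).result()
```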
-## Create the accounts feature set locally
+## Create the accounts feature set locally
+
+In the first tutorial, you created a `transactions` feature set that had custom transformations. Here, you create an `accounts` feature set that uses precomputed values.
-In tutorial part 1, we created a transactions feature set that had custom transformations. Here, we create an accounts feature set that uses precomputed values.
+To onboard precomputed features, you can create a feature set specification without writing any transformation code. You use a feature set specification to develop and test a feature set in a fully local development environment.
-To onboard precomputed features, you can create a feature set spec without writing any transformation code. A feature set spec is a specification that we use to develop and test a feature set, in a fully local development environment. We don't need to connect to a feature store. In this step, you create the feature set spec locally, and then sample the values from it. For managed feature store capabilities, you must use a feature asset definition to register the feature set spec with a feature store. Later steps in this tutorial provide more details.
+You don't need to connect to a feature store. In this procedure, you create the feature set specification locally, and then sample the values from it. For capabilities of managed feature store, you must use a feature asset definition to register the feature set specification with a feature store. Later steps in this tutorial provide more details.
-1. Explore the source data for the accounts
+1. Explore the source data for the accounts.
> [!NOTE]
- > This notebook uses sample data hosted in a publicly-accessible blob container. Only a `wasbs` driver can read it in Spark. When you create feature sets using your own source data, please host those feature sets in an adls gen2 account, and use an `abfss` driver in the data path.
+ > This notebook uses sample data hosted in a publicly accessible blob container. Only a `wasbs` driver can read it in Spark. When you create feature sets by using your own source data, host those feature sets in an Azure Data Lake Storage Gen2 account, and use an `abfss` driver in the data path.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=explore-accts-fset-src-data)]
-1. Create the `accounts` feature set spec in local, from these precomputed features
+1. Create the `accounts` feature set specification locally, from these precomputed features.
- We don't need any transformation code here, because we reference precomputed features.
+ You don't need any transformation code here, because you reference precomputed features.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=create-accts-fset-spec)]
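
The referenced cell relies on the local development experience in the `azureml-featurestore` package. A minimal sketch, assuming that package's `create_feature_set_spec` helper and a placeholder source path:

```python
from azureml.featurestore import create_feature_set_spec
from azureml.featurestore.contracts import (
    Column,
    ColumnType,
    FeatureSource,
    SourceType,
    TimestampColumn,
)

# Precomputed features: a source and index columns, with no transformation code.
accounts_spec = create_feature_set_spec(
    source=FeatureSource(
        type=SourceType.parquet,
        path="wasbs://<container>@<account>.blob.core.windows.net/<path>/*.parquet",  # placeholder
        timestamp_column=TimestampColumn(name="timestamp"),
    ),
    index_columns=[Column(name="accountID", type=ColumnType.string)],
    infer_schema=True,  # derive the feature list and datatypes from the source
)
```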
-1. Export as a feature set spec
+1. Export as a feature set specification.
+
+ To register the feature set specification with the feature store, you must save the feature set specification in a specific format.
+
+ After you run the next cell, inspect the generated `accounts` feature set specification. To see the specification, open the *featurestore/featuresets/accounts/spec/FeatureSetSpec.yaml* file from the file tree.
- To register the feature set spec with the feature store, you must save the feature set spec in a specific format.
+ The specification has these important elements:
- Action: After you run the next cell, inspect the generated `accounts` feature set spec. To see the spec, open the `featurestore/featuresets/accounts/spec/FeatureSetSpec.yaml` file from the file tree to see the spec.
+ - `source`: A reference to a storage resource. In this case, it's a Parquet file in a blob storage resource.
- The spec has these important elements:
+ - `features`: A list of features and their datatypes. With provided transformation code (see the "Day 2" section), the code must return a DataFrame that maps to the features and datatypes. Without transformation code, the system builds the query to map the features and datatypes to the source. That's the case for the generated `accounts` feature set specification, because the features are precomputed.
- 1. `source`: a reference to a storage resource, in this case, a parquet file in a blog storage resource
-
- 1. `features`: a list of features and their datatypes. With provided transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes. Without the provided transformation code (in this case, the generated `accounts` feature set spec, because it's precomputed), the system builds the query to map the features and datatypes to the source
-
- 1. `index_columns`: the join keys required to access values from the feature set
+ - `index_columns`: The join keys required to access values from the feature set.
- See the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set spec yaml reference](./reference-yaml-featureset-spec.md) to learn more.
+ To learn more, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) and the [CLI (v2) feature set specification YAML schema](./reference-yaml-featureset-spec.md).
 As an extra benefit, persisting the specification file supports source control.
- We don't need any transformation code here, because we reference precomputed features.
+ You don't need any transformation code here, because you reference precomputed features.
 [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=dump-accts-fset-spec)]

## Locally experiment with unregistered features
-As you develop features, you might want to locally test and validate them, before you register them with the feature store or run training pipelines in the cloud. A combination of a local unregistered feature set (`accounts`), and a feature set registered in the feature store (`transactions`), generates training data for the ML model.
+As you develop features, you might want to locally test and validate them before you register them with the feature store or run training pipelines in the cloud. A combination of a local unregistered feature set (`accounts`) and a feature set registered in the feature store (`transactions`) generates training data for the machine learning model.
-1. Select features for the model
+1. Select features for the model.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=select-unreg-features-for-model)]
-1. Locally generate training data
+1. Locally generate training data.
 This step generates training data for illustrative purposes. As an option, you can locally train models here. Later steps in this tutorial explain how to train a model in the cloud.

 [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=gen-training-data-locally)]
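
Generating training data locally is a point-in-time join of observation data with the selected features. A minimal sketch, assuming the `azureml-featurestore` package, a `features` list from the previous step, and an observation Spark DataFrame `obs_df` with a `timestamp` column:

```python
from azureml.featurestore import get_offline_features

# Join observation rows with feature values in a point-in-time-correct way.
training_df = get_offline_features(
    features=features,             # selected in the previous step
    observation_data=obs_df,       # assumed observation Spark DataFrame
    timestamp_column="timestamp",  # event-time column in the observation data
)
training_df.show(5)
```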
-1. Register the `accounts` feature set with the feature store
+1. Register the `accounts` feature set with the feature store.
- After you locally experiment with different feature definitions, and they seem reasonable, you can register a feature set asset definition with the feature store.
+ After you locally experiment with feature definitions, and they seem reasonable, you can register a feature set asset definition with the feature store.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=reg-accts-fset)]
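
Registration uses a feature set asset definition that points at the local specification folder. A minimal sketch, assuming the `fs_client` MLClient, an illustrative entity reference, and the spec path generated earlier:

```python
from azure.ai.ml.entities import FeatureSet, FeatureSetSpecification

accounts_fset = FeatureSet(
    name="accounts",
    version="1",
    description="accounts featureset",
    entities=["azureml:account:1"],  # illustrative entity reference
    stage="Development",
    specification=FeatureSetSpecification(
        path="featurestore/featuresets/accounts/spec"  # folder containing FeatureSetSpec.yaml
    ),
)

poller = fs_client.feature_sets.begin_create_or_update(accounts_fset)
print(poller.result())
```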
-1. Get the registered feature set, and sanity test it
+1. Get the registered feature set and test it.
 [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=sample-accts-fset-data)]

## Run a training experiment
-In this step, you select a list of features, run a training pipeline, and register the model. You can repeat this step until the model performs as you'd like.
+In the following steps, you select a list of features, run a training pipeline, and register the model. You can repeat these steps until the model performs as you want.
-1. (Optional) Discover features from the feature store UI
+1. Optionally, discover features from the feature store UI.
- Part 1 of this tutorial covered this, when you registered the transactions feature set. Since you also have an accounts feature set, you can browse the available features:
+ The first tutorial covered this step, when you registered the `transactions` feature set. Because you also have an `accounts` feature set, you can browse through the available features:
- * Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home?flight=FeatureStores).
- * In the left nav, select `feature stores`
- * The list of feature stores that you can access appears. Select the feature store that you created earlier.
+ 1. Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home).
+ 1. On the left pane, select **Feature stores**.
+ 1. In the list of feature stores, select the feature store that you created earlier.
- You can see the feature sets and entity that you created. Select the feature sets to browse the feature definitions. You can use the global search box to search for feature sets across feature stores.
+ The UI shows the feature sets and entity that you created. Select the feature sets to browse through the feature definitions. You can use the global search box to search for feature sets across feature stores.
-1. (Optional) Discover features from the SDK
+1. Optionally, discover features from the SDK.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=discover-features-from-sdk)]
-1. Select features for the model, and export the model as a feature-retrieval spec
+1. Select features for the model, and export the model as a feature retrieval specification.
- In the previous steps, you selected features from a combination of registered and unregistered feature sets, for local experimentation and testing. You can now experiment in the cloud. Your model shipping agility increases if you save the selected features as a feature-retrieval spec, and use the spec in the mlops/cicd flow for training and inference.
+ In the previous steps, you selected features from a combination of registered and unregistered feature sets, for local experimentation and testing. You can now experiment in the cloud. Your model-shipping agility increases if you save the selected features as a feature retrieval specification, and then use the specification in the machine learning operations (MLOps) or continuous integration and continuous delivery (CI/CD) flow for training and inference.
-1. Select features for the model
+1. Select features for the model.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=select-reg-features)]
-1. Export selected features as a feature-retrieval spec
+1. Export selected features as a feature retrieval specification.
- > [!NOTE]
- > A **feature retrieval spec** is a portable definition of the feature list associated with a model. It can help streamline ML model development and operationalization. It will become an input to the training pipeline which generates the training data. Then, it will be packaged with the model. The inference phase uses it to look up the features. It becomes a glue that integrates all phases of the machine learning lifecycle. Changes to the training/inference pipeline can stay at a minimum as you experiment and deploy.
+ A feature retrieval specification is a portable definition of the feature list that's associated with a model. It can help streamline the development and operationalization of a machine learning model. It will become an input to the training pipeline that generates the training data. Then, it will be packaged with the model.
+
+ The inference phase uses the feature retrieval specification to look up the features. It acts as the glue that integrates all phases of the machine learning lifecycle, so changes to the training and inference pipelines can stay at a minimum as you experiment and deploy.
- Use of the feature retrieval spec and the built-in feature retrieval component is optional. You can directly use the `get_offline_features()` API, as shown earlier. The name of the spec should be **feature_retrieval_spec.yaml** when it's packaged with the model. This way, the system can recognize it.
+ Use of the feature retrieval specification and the built-in feature retrieval component is optional. You can directly use the `get_offline_features()` API, as shown earlier. The name of the specification should be *feature_retrieval_spec.yaml* when it's packaged with the model. This way, the system can recognize it.
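
Exporting the specification is a single call on the feature store SDK client. A minimal sketch, assuming the `featurestore` FeatureStoreClient and the `features` list selected above; the output folder name is illustrative:

```python
import os

# The folder receives a file named feature_retrieval_spec.yaml.
spec_folder = "./feature_retrieval_spec_folder"
os.makedirs(spec_folder, exist_ok=True)

featurestore.generate_feature_retrieval_spec(spec_folder, features)
```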
 [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=export-as-frspec)]

## Train in the cloud with pipelines, and register the model
-In this step, you manually trigger the training pipeline. In a production scenario, a ci/cd pipeline could trigger it, based on changes to the feature-retrieval spec in the source repository. You can register the model if it's satisfactory.
+In this procedure, you manually trigger the training pipeline. In a production scenario, a CI/CD pipeline could trigger it, based on changes to the feature retrieval specification in the source repository. You can register the model if it's satisfactory.
-1. Run the training pipeline
+1. Run the training pipeline.
The training pipeline has these steps:
- 1. Feature retrieval: For its input, this built-in component takes the feature retrieval spec, the observation data, and the timestamp column name. It then generates the training data as output. It runs these steps as a managed spark job.
-
- 1. Training: Based on the training data, this step trains the model, and then generates a model (not yet registered)
-
- 1. Evaluation: This step validates whether or not the model performance and quality fall within a threshold (in our case, it's a placeholder/dummy step for illustration purposes)
-
- 1. Register the model: This step registers the model
+ 1. Feature retrieval: For its input, this built-in component takes the feature retrieval specification, the observation data, and the time-stamp column name. It then generates the training data as output. It runs these steps as a managed Spark job.
+
+ 1. Training: Based on the training data, this step trains the model and then generates a model (not yet registered).
+
+ 1. Evaluation: This step validates whether the model performance and quality fall within a threshold. (In this tutorial, it's a placeholder step for illustration purposes.)
+
+ 1. Register the model: This step registers the model.
> [!NOTE]
- > In part 2 of this tutorial, you ran a backfill job to materialize data for the `transactions` feature set. The feature retrieval step reads feature values from the offline store for this feature set. The behavior will be the same, even if you use the `get_offline_features()` API.
+ > In the second tutorial, you ran a backfill job to materialize data for the `transactions` feature set. The feature retrieval step reads feature values from the offline store for this feature set. The behavior is the same, even if you use the `get_offline_features()` API.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=run-training-pipeline)]
- 1. Inspect the training pipeline and the model
+ 1. Inspect the training pipeline and the model.
- 1. Open the above pipeline, and run "web view" in a new window to see the pipeline steps.
+ 1. Open the pipeline, and then use the web view in a new window to display the pipeline steps.
-1. Use the feature retrieval spec in the model artifacts
+1. Use the feature retrieval specification in the model artifacts:
- 1. In the left nav of the current workspace, select `Models`
- 1. Select open in a new tab or window
- 1. Select **fraud_model**
- 1. In the top nav, select Artifacts
+ 1. On the left pane of the current workspace, select **Models**.
+ 1. Select **Open in a new tab or window**.
+ 1. Select **fraud_model**.
+ 1. Select **Artifacts**.
- The feature retrieval spec is packaged along with the model. The model registration step in the training pipeline handled this step. You created the feature retrieval spec during experimentation. Now it became part of the model definition. In the next tutorial, you'll see how inferencing uses it.
+ The feature retrieval specification is packaged along with the model. The model registration step in the training pipeline handled this step. You created the feature retrieval specification during experimentation. Now it's part of the model definition. In the next tutorial, you'll see how inferencing uses it.
## View the feature set and model dependencies
-1. View the list of feature sets associated with the model
+1. View the list of feature sets associated with the model.
- In the same models page, select the `feature sets` tab. This tab shows both the `transactions` and the `accounts` feature sets on which this model depends.
+ On the same **Models** page, select the **Feature sets** tab. This tab shows both the `transactions` and `accounts` feature sets on which this model depends.
-1. View the list of models that use the feature sets
+1. View the list of models that use the feature sets:
- 1. Open the feature store UI (explained earlier in this tutorial)
- 1. Select `Feature sets` on the left nav
- 1. Select a feature set
- 1. Select the `Models` tab
+ 1. Open the feature store UI (explained earlier in this tutorial).
+ 1. On the left pane, select **Feature sets**.
+ 1. Select a feature set.
+ 1. Select the **Models** tab.
- You can see the list of models that use the feature sets. The feature retrieval spec determined this list when the model was registered.
+ The feature retrieval specification determined this list when the model was registered.
-## Cleanup
+## Clean up
-The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources
+The [fourth tutorial in the series](./tutorial-enable-recurrent-materialization-run-batch-inference.md#clean-up) describes how to delete the resources.
## Next steps
-* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md)
-* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md)
-* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Go to the next tutorial in the series: [Enable recurrent materialization and run batch inference](./tutorial-enable-recurrent-materialization-run-batch-inference.md).
+* Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md).
+* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md).
+* View the [YAML reference](./reference-yaml-overview.md).
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
Title: "Tutorial #1: develop and register a feature set with managed feature store (preview)"-
-description: Managed Feature Store tutorial part 1.
+ Title: "Tutorial 1: Develop and register a feature set with managed feature store (preview)"
+
+description: This is part 1 of a tutorial series on managed feature store.
#Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
-# Tutorial #1: develop and register a feature set with managed feature store (preview)
+# Tutorial 1: Develop and register a feature set with managed feature store (preview)
-This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization.
+This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization.
-Azure Machine Learning managed feature store lets you discover, create and operationalize features. The machine learning lifecycle includes a prototyping phase, where you experiment with various features. It also involves an operationalization phase, where models are deployed and inference steps look up feature data. Features serve as the connective tissue in the machine learning lifecycle. To learn more about basic feature store concepts, see [what is managed feature store](./concept-what-is-managed-feature-store.md) and [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+You can use Azure Machine Learning managed feature store to discover, create, and operationalize features. The machine learning lifecycle includes a prototyping phase, where you experiment with various features. It also involves an operationalization phase, where models are deployed and inference steps look up feature data. Features serve as the connective tissue in the machine learning lifecycle. To learn more about basic concepts for managed feature store, see [What is managed feature store?](./concept-what-is-managed-feature-store.md) and [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
-This tutorial is the first part of a four part series. Here, you'll learn how to:
+This tutorial is the first part of a four-part series. Here, you learn how to:
> [!div class="checklist"]
-> * Create a new minimal feature store resource
-> * Develop and locally test a feature set with feature transformation capability
-> * Register a feature store entity with the feature store
-> * Register the feature set that you developed with the feature store
-> * Generate a sample training dataframe using the features you created
+> * Create a new, minimal feature store resource.
+> * Develop and locally test a feature set with feature transformation capability.
+> * Register a feature store entity with the feature store.
+> * Register the feature set that you developed with the feature store.
+> * Generate a sample training DataFrame by using the features that you created.
-> [!IMPORTANT]
-> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported, or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This tutorial series has two tracks:
-## Prerequisites
-
-> [!NOTE]
-> This tutorial series has two tracks:
-> * SDK only track: Uses only Python SDKs. Choose this track for pure, Python-based development and deployment.
-> * SDK & CLI track: This track uses the CLI for CRUD operations (create, update, and delete), and the Python SDK for feature set development and testing only. This is useful in CI / CD, or GitOps, scenarios, where CLI/yaml is preferred.
+* The SDK-only track uses only Python SDKs. Choose this track for pure, Python-based development and deployment.
+* The SDK and CLI track uses the Python SDK for feature set development and testing only, and it uses the CLI for CRUD (create, read, update, and delete) operations. This track is useful in continuous integration and continuous delivery (CI/CD) or GitOps scenarios, where CLI/YAML is preferred.
-Before you proceed with this article, make sure you cover these prerequisites:
-* An Azure Machine Learning workspace. See [Quickstart: Create workspace resources](./quickstart-create-resources.md) article for more information about workspace creation.
+## Prerequisites
-* To proceed with this article, your user account must be assigned the owner or contributor role to the resource group where the feature store is created
+Before you proceed with this tutorial, be sure to cover these prerequisites:
- (Optional): If you use a new resource group for this tutorial, you can easily delete all the resources by deleting the resource group
+* An Azure Machine Learning workspace. For more information about workspace creation, see [Quickstart: Create workspace resources](./quickstart-create-resources.md).
-## Set up
+* On your user account, the Owner or Contributor role for the resource group where the feature store is created.
-### Prepare the notebook environment for development
+ If you choose to use a new resource group for this tutorial, you can easily delete all the resources by deleting the resource group.
-> [!NOTE]
-> This tutorial uses an Azure Machine Learning Spark notebook for development.
+## Prepare the notebook environment
-1. In the Azure Machine Learning studio environment, first select **Notebooks** in the left nav, and then select the **Samples** tab. Navigate to the **featurestore_sample** directory
+This tutorial uses an Azure Machine Learning Spark notebook for development.
- **Samples -> SDK v2 -> sdk -> python -> featurestore_sample**
+1. In the Azure Machine Learning studio environment, select **Notebooks** on the left pane, and then select the **Samples** tab.
- and then select **Clone**, as shown in this screenshot:
+1. Browse to the *featurestore_sample* directory (select **Samples** > **SDK v2** > **sdk** > **python** > **featurestore_sample**), and then select **Clone**.
- :::image type="content" source="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" lightbox="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" alt-text="Screenshot showing selection of the featurestore_sample directory in Azure Machine Learning studio UI.":::
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" lightbox="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" alt-text="Screenshot that shows selection of the sample directory in Azure Machine Learning studio.":::
-1. The **Select target directory** panel opens next. Select the User directory, in this case **testUser**, and then select **Clone**, as shown in this screenshot:
+1. The **Select target directory** panel opens. Select the user directory (in this case, **testUser**), and then select **Clone**.
- :::image type="content" source="media/tutorial-get-started-with-feature-store/select-target-directory.png" lightbox="media/tutorial-get-started-with-feature-store/select-target-directory.png" alt-text="Screenshot showing selection of the target directory location in Azure Machine Learning studio UI for the featurestore_sample resource.":::
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/select-target-directory.png" lightbox="media/tutorial-get-started-with-feature-store/select-target-directory.png" alt-text="Screenshot that shows selection of the target directory location in Azure Machine Learning studio for the sample resource.":::
-1. To configure the notebook environment, you must upload the **conda.yml** file. Select **Notebooks** in the left nav, and then select the **Files** tab. Navigate to the **env** directory
+1. To configure the notebook environment, you must upload the *conda.yml* file:
- **Users -> testUser -> featurestore_sample -> project -> env**
+ 1. Select **Notebooks** on the left pane, and then select the **Files** tab.
+ 1. Browse to the *env* directory (select **Users** > **testUser** > **featurestore_sample** > **project** > **env**), and then select the *conda.yml* file. In this path, *testUser* is the user directory.
+ 1. Select **Download**.
- and select the **conda.yml** file. In this navigation, **testUser** is the user directory. Select **Download**, as shown in this screenshot:
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/download-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/download-conda-file.png" alt-text="Screenshot that shows selection of the Conda YAML file in Azure Machine Learning studio.":::
- :::image type="content" source="media/tutorial-get-started-with-feature-store/download-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/download-conda-file.png" alt-text="Screenshot showing selection of the conda.yml file in Azure Machine Learning studio UI.":::
+1. In the Azure Machine Learning environment, open the notebook, and then select **Configure session**.
-1. At the Azure Machine Learning environment, open the notebook, and select **Configure Session**, as shown in this screenshot:
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/open-configure-session.png" lightbox="media/tutorial-get-started-with-feature-store/open-configure-session.png" alt-text="Screenshot that shows selections for configuring a session for a notebook.":::
- :::image type="content" source="media/tutorial-get-started-with-feature-store/open-configure-session.png" lightbox="media/tutorial-get-started-with-feature-store/open-configure-session.png" alt-text="Screenshot showing Open Configure Session for this notebook.":::
+1. On the **Configure session** panel, select **Python packages**.
-1. At the **Configure Session** panel, select **Python packages**. To upload the Conda file, select **Upload Conda file**, and **Browse** to the directory that hosts the Conda file. Select **conda.yml**, and then select **Open**, as shown in this screenshot:
+1. Upload the Conda file:
+ 1. On the **Python packages** tab, select **Upload Conda file**.
+ 1. Browse to the directory that hosts the Conda file.
+ 1. Select **conda.yml**, and then select **Open**.
- :::image type="content" source="media/tutorial-get-started-with-feature-store/open-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/open-conda-file.png" alt-text="Screenshot showing the directory hosting the Conda file.":::
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/open-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/open-conda-file.png" alt-text="Screenshot that shows the directory that hosts the Conda file.":::
-1. Select **Apply**, as shown in this screenshot:
+1. Select **Apply**.
- :::image type="content" source="media/tutorial-get-started-with-feature-store/upload-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/upload-conda-file.png" alt-text="Screenshot showing the Conda file upload.":::
+ :::image type="content" source="media/tutorial-get-started-with-feature-store/upload-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/upload-conda-file.png" alt-text="Screenshot that shows the Conda file upload.":::
## Start the Spark session
To start the Spark session and set up the root directory for the samples, run the code in the following cell:
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=root-dir)]
-### [SDK Track](#tab/SDK-track)
+### [SDK track](#tab/SDK-track)
-Not applicable
+Not applicable.
-### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+### [SDK and CLI track](#tab/SDK-and-CLI-track)
### Set up the CLI
-1. Install the Azure Machine Learning extension
+1. Install the Azure Machine Learning extension.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)]
-1. Authentication
+1. Authenticate.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=auth-cli)]
-1. Set the default subscription
+1. Set the default subscription.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)]

> [!NOTE]
-> Feature store Vs Project workspace: You'll use a feature store to reuse features across projects. You'll use a project workspace (an Azure Machine Learning workspace) to train and inference models, by leveraging features from feature stores. Many project workspaces can share and reuse the same feature store.
+> You use a feature store to reuse features across projects. You use a project workspace (an Azure Machine Learning workspace) to train models and run inference, by taking advantage of features from feature stores. Many project workspaces can share and reuse the same feature store.
-### [SDK Track](#tab/SDK-track)
+### [SDK track](#tab/SDK-track)
This tutorial uses two SDKs:
-* The Feature Store CRUD SDK
-* You use the same MLClient (package name azure-ai-ml) SDK that you use with the Azure Machine Learning workspace. A feature store is implemented as a type of workspace. As a result, this SDK is used for feature store CRUD operations for feature store, feature set, and feature store entity.
-* The feature store core SDK
-
- This SDK (azureml-featurestore) is intended for feature set development and consumption. Later steps in this tutorial describe these operations:
-
- * Feature set specification development
- * Feature data retrieval
- * List and Get registered feature sets
- * Generate and resolve feature retrieval specs
- * Generate training and inference data using point-in-time joins
+* *Feature store CRUD SDK*
+
+ You use the same `MLClient` (package name `azure-ai-ml`) SDK that you use with the Azure Machine Learning workspace. A feature store is implemented as a type of workspace. As a result, this SDK is used for CRUD operations for feature stores, feature sets, and feature store entities.
+
+* *Feature store core SDK*
+
+ This SDK (`azureml-featurestore`) is for feature set development and consumption. Later steps in this tutorial describe these operations:
+
+ * Develop a feature set specification.
+ * Retrieve feature data.
+ * List or get a registered feature set.
+ * Generate and resolve feature retrieval specifications.
+ * Generate training and inference data by using point-in-time joins.
+
+This tutorial doesn't require explicit installation of those SDKs, because the earlier Conda YAML instructions cover this step.
-This tutorial doesn't require explicit installation of those SDKs, because the earlier **conda YAML** instructions cover this step.
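As a rough sketch of how these two clients are initialized (the constructor arguments follow the tutorial notebooks; the placeholder values are assumptions you replace with your own):

```python
from azure.ai.ml import MLClient  # CRUD SDK (azure-ai-ml)
from azure.identity import DefaultAzureCredential
from azureml.featurestore import FeatureStoreClient  # core SDK (azureml-featurestore)

credential = DefaultAzureCredential()

# CRUD client, scoped to the feature store (a feature store is a type of workspace).
fs_crud_client = MLClient(
    credential,
    subscription_id="<SUBSCRIPTION_ID>",        # placeholder
    resource_group_name="<RESOURCE_GROUP>",     # placeholder
    workspace_name="<FEATURE_STORE_NAME>",      # placeholder
)

# Core client, used to develop and consume feature sets.
featurestore = FeatureStoreClient(
    credential=credential,
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    name="<FEATURE_STORE_NAME>",
)
```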
+### [SDK and CLI track](#tab/SDK-and-CLI-track)
-### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+This tutorial uses the feature store core SDK together with the CLI: the CLI handles CRUD operations, and the Python SDK is used only for feature set development and testing. This approach is useful for GitOps or CI/CD scenarios, where CLI/YAML is preferred.
-This tutorial uses both the Feature store core SDK, and the CLI, for CRUD operations. It only uses the Python SDK for Feature set development and testing. This approach is useful for GitOps or CI / CD scenarios, where CLI / yaml is preferred.
+Here are general guidelines:
-* Use the CLI for CRUD operations on feature store, feature set, and feature store entities
-* Feature store core SDK: This SDK (`azureml-featurestore`) is meant for feature set development and consumption. This tutorial covers these operations:
+* Use the CLI for CRUD operations on feature stores, feature sets, and feature store entities.
+* The feature store core SDK (`azureml-featurestore`) is for feature set development and consumption. This tutorial covers these operations:
- * List / Get a registered feature set
- * Generate / resolve a feature retrieval spec
- * Execute a feature set definition, to generate a Spark dataframe
- * Generate training with a point-in-time join
+ * List or get a registered feature set
+ * Generate or resolve a feature retrieval specification
+ * Execute a feature set definition, to generate a Spark DataFrame
+ * Generate training by using point-in-time joins
-This tutorial doesn't need explicit installation of these resources, because the instructions cover these steps. The **conda.yaml** file includes them in an earlier step.
+This tutorial doesn't need explicit installation of these resources, because the instructions cover these steps. The *conda.yml* file includes them in an earlier step.
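A minimal sketch of the consumption operations listed above, assuming `featurestore` is a `FeatureStoreClient` like the one initialized earlier (method names follow the tutorial notebooks; treat them as assumptions):

```python
# Assumes `featurestore` is the FeatureStoreClient initialized earlier.
for fs in featurestore.feature_sets.list():
    print(fs.name, fs.version)

# Get one registered version and execute its definition to produce a Spark DataFrame.
transactions_fset = featurestore.feature_sets.get("transactions", "1")
transactions_fset.to_spark_dataframe().show(5)
```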
## Create a minimal feature store
-1. Set feature store parameters
-
- Set the name, location, and other values for the feature store
+1. Set feature store parameters, including name, location, and other values.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=fs-params)]
-1. Create the feature store
+1. Create the feature store.
- ### [SDK Track](#tab/SDK-track)
+ ### [SDK track](#tab/SDK-track)
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs)]
- ### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+ ### [SDK and CLI track](#tab/SDK-and-CLI-track)
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)]
-1. Initialize an Azure Machine Learning feature store core SDK client
+1. Initialize a feature store core SDK client for Azure Machine Learning.
- As explained earlier in this tutorial, the feature store core SDK client is used to develop and consume features
+ As explained earlier in this tutorial, the feature store core SDK client is used to develop and consume features.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fs-core-sdk)]

## Prototype and develop a feature set
-We'll build a feature set named `transactions` that has rolling, window aggregate-based features
+In the following steps, you build a feature set named `transactions` that has rolling window aggregate-based features:
-1. Explore the transactions source data
+1. Explore the `transactions` source data.
- > [!NOTE]
- > This notebook uses sample data hosted in a publicly-accessible blob container. It can only be read into Spark with a `wasbs` driver. When you create feature sets using your own source data, host them in an adls gen2 account, and use an `abfss` driver in the data path.
+ This notebook uses sample data hosted in a publicly accessible blob container. It can be read into Spark only through a `wasbs` driver. When you create feature sets by using your own source data, host them in an Azure Data Lake Storage Gen2 account, and use an `abfss` driver in the data path.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=explore-txn-src-data)]
-1. Locally develop the feature set
-
- A feature set specification is a self-contained feature set definition that you can locally develop and test. Here, we create these rolling window aggregate features:
+1. Locally develop the feature set.
- * transactions three-day count
- * transactions amount three-day sum
- * transactions amount three-day avg
- * transactions seven-day count
- * transactions amount seven-day sum
- * transactions amount seven-day avg
+ A feature set specification is a self-contained definition of a feature set that you can locally develop and test. Here, you create these rolling window aggregate features:
- **Action:**
+ * `transactions three-day count`
+ * `transactions amount three-day sum`
+ * `transactions amount three-day avg`
+ * `transactions seven-day count`
+ * `transactions amount seven-day sum`
+ * `transactions amount seven-day avg`
- - Review the feature transformation code file: `featurestore/featuresets/transactions/transformation_code/transaction_transform.py`. Note the rolling aggregation defined for the features. This is a spark transformer.
+ Review the feature transformation code file: *featurestore/featuresets/transactions/transformation_code/transaction_transform.py*. Note the rolling aggregation defined for the features; it's implemented as a Spark transformer. (A minimal sketch of this kind of rolling aggregation appears after this procedure.)
- See [feature store concepts](./concept-what-is-managed-feature-store.md) and **transformation concepts** to learn more about the feature set and transformations.
+ To learn more about the feature set and transformations, see [What is managed feature store?](./concept-what-is-managed-feature-store.md).
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=develop-txn-fset-locally)]
-1. Export as a feature set spec
+1. Export as a feature set specification.
+
+ To register the feature set specification with the feature store, you must save that specification in a specific format.
- To register the feature set spec with the feature store, you must save that spec in a specific format.
+ Review the generated `transactions` feature set specification. Open this file from the file tree to see the specification: *featurestore/featuresets/accounts/spec/FeaturesetSpec.yaml*.
- **Action:** Review the generated `transactions` feature set spec: Open this file from the file tree to see the spec: `featurestore/featuresets/accounts/spec/FeaturesetSpec.yaml`
+ The specification contains these elements:
- The spec contains these elements:
-
- 1. `source`: a reference to a storage resource. In this case, it's a parquet file in a blob storage resource.
- 1. `features`: a list of features and their datatypes. If you provide transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes.
- 1. `index_columns`: the join keys required to access values from the feature set
+ * `source`: A reference to a storage resource. In this case, it's a Parquet file in a blob storage resource.
+ * `features`: A list of features and their datatypes. If you provide transformation code (see the "Day 2" section), the code must return a DataFrame that maps to the features and datatypes.
+ * `index_columns`: The join keys required to access values from the feature set.
- To learn more about the spec, see [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set spec yaml reference](./reference-yaml-feature-set.md).
+ To learn more about the specification, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) and [CLI (v2) feature set YAML schema](./reference-yaml-feature-set.md).
- Persisting the feature set spec offers another benefit: the feature set spec can be source controlled.
+ Persisting the feature set specification offers another benefit: the feature set specification can be source controlled.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=dump-transactions-fs-spec)]
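For orientation, here's a minimal sketch of the kind of rolling window aggregation that *transaction_transform.py* performs. The column names (`accountID`, `timestamp`, `transactionID`, `transactionAmount`) and the input DataFrame `df` are illustrative assumptions, not the tutorial's exact code:

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# 3-day rolling window per account; ordering by epoch seconds lets
# rangeBetween express a time-based interval.
three_days = 3 * 24 * 3600
w = (
    Window.partitionBy("accountID")
    .orderBy(F.col("timestamp").cast("long"))
    .rangeBetween(-three_days, 0)
)

# `df` is assumed to be the source transactions Spark DataFrame.
df = (
    df.withColumn("transaction_3d_count", F.count("transactionID").over(w))
      .withColumn("transaction_amount_3d_sum", F.sum("transactionAmount").over(w))
      .withColumn("transaction_amount_3d_avg", F.avg("transactionAmount").over(w))
)
```

The seven-day variants follow the same pattern with a wider window.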
-## Register a feature-store entity
+## Register a feature store entity
+
+As a best practice, entities help enforce use of the same join key definition across feature sets that use the same logical entities. Examples of entities include accounts and customers. Entities are typically created once and then reused across feature sets. To learn more, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+
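A hedged sketch of what entity registration looks like, assuming `fs_crud_client` is an `MLClient` scoped to the feature store; the class and property names follow the tutorial notebooks and should be treated as assumptions:

```python
from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity

# Pin the join key name and type once, so every account-scoped feature set
# uses the same definition.
account_entity = FeatureStoreEntity(
    name="account",
    version="1",
    index_columns=[DataColumn(name="accountID", type=DataColumnType.STRING)],
)

poller = fs_crud_client.feature_store_entities.begin_create_or_update(account_entity)
print(poller.result())
```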
+### [SDK track](#tab/SDK-track)
-As a best practice, entities help enforce use of the same join key definition across feature sets that use the same logical entities. Examples of entities can include accounts, customers, etc. Entities are typically created once, and then reused across feature sets. To learn more, see [feature store concepts](./concept-top-level-entities-in-managed-feature-store.md).
+1. Initialize the feature store CRUD client.
- ### [SDK Track](#tab/SDK-track)
+ As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
- 1. Initialize the Feature Store CRUD client
+ In this code sample, the client is scoped at feature store level.
- As explained earlier in this tutorial, the MLClient is used for feature store asset CRUD (create, update, and delete). The notebook code cell sample shown here searches for the feature store we created in an earlier step. Here, we can't reuse the same ml_client we used earlier in this tutorial, because the earlier ml_client is scoped at the resource group level. Proper scoping is a prerequisite for feature store creation. In this code sample, the client is scoped at feature store level.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)]
+1. Register the `account` entity with the feature store.
- 1. Register the `account` entity with the feature store
+ Create an `account` entity that has the join key `accountID` of type `string`.
- Create an account entity that has the join key `accountID`, of type string.
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)]
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)]
+### [SDK and CLI track](#tab/SDK-and-CLI-track)
- ### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+1. Initialize the feature store CRUD client.
- 1. Initialize the Feature Store CRUD client
+ As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation.
- As explained earlier in this tutorial, MLClient is used for feature store asset CRUD (create, update, and delete). The notebook code cell sample shown here searches for the feature store we created in an earlier step. Here, we can't reuse the same ml_client we used earlier in this tutorial, because the earlier ml_client is scoped at the resource group level. Proper scoping is a prerequisite for feature store creation. In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID`, of type string.
+ In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID` of type `string`.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)]
-
+ ## Register the transaction feature set with the feature store
-First, register a feature set asset with the feature store. You can then reuse that asset, and easily share it. Feature set asset registration offers managed capabilities, including versioning and materialization. Later steps in this tutorial series cover managed capabilities.
+Use the following code to register a feature set asset with the feature store. You can then reuse that asset and easily share it. Registration of a feature set asset offers managed capabilities, including versioning and materialization. Later steps in this tutorial series cover managed capabilities.
- ### [SDK Track](#tab/SDK-track)
+### [SDK track](#tab/SDK-track)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)]
- ### [SDK and CLI Track](#tab/SDK-and-CLI-track)
+### [SDK and CLI track](#tab/SDK-and-CLI-track)
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)]
-
+ ## Explore the feature store UI
-* Open the [Azure Machine Learning global landing page](https://ml.azure.com/home).
-* Select `Feature stores` in the left nav
-* From this list of accessible feature stores, select the feature store you created earlier in this tutorial.
+Feature store asset creation and updates can happen only through the SDK and CLI. You can use the UI to search or browse through the feature store:
-> [!NOTE]
-> Feature store asset creation and updates can happen only through the SDK and CLI. You can use the UI to search or browse the feature store.
+1. Open the [Azure Machine Learning global landing page](https://ml.azure.com/home).
+1. Select **Feature stores** on the left pane.
+1. From the list of accessible feature stores, select the feature store that you created earlier in this tutorial.
-## Generate a training data dataframe using the registered feature set
+## Generate a training data DataFrame by using the registered feature set
-1. Load observation data
+1. Load observation data.
- Observation data typically involves the core data used for training and inferencing. This data joins with the feature data to create the full training data resource. Observation data is data captured during the event itself. Here, it has core transaction data, including transaction ID, account ID, and transaction amount values. Since we use it for training, it also has an appended target variable (**is_fraud**).
+ Observation data typically involves the core data used for training and inferencing. This data joins with the feature data to create the full training data resource.
+
+ Observation data is data captured during the event itself. Here, it has core transaction data, including transaction ID, account ID, and transaction amount values. Because you use it for training, it also has an appended target variable (**is_fraud**).
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=load-obs-data)]
-1. Get the registered feature set, and list its features
+1. Get the registered feature set, and list its features.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=get-txn-fset)]

   [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=print-txn-fset-sample-values)]
-1. Select features, and generate training data
-
- Here, we select the features that become part of the training data. Then, we use the feature store SDK to generate the training data itself.
+1. Select the features that become part of the training data. Then, use the feature store SDK to generate the training data itself.
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=select-features-and-gen-training-data)]

   A point-in-time join appends the features to the training data.
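As a sketch of what the referenced cell does, assuming `obs_df` is the observation DataFrame loaded above and `transactions_fset` is the registered feature set retrieved earlier (the feature names and the `get_feature` helper follow the tutorial notebooks; treat them as assumptions):

```python
from azureml.featurestore import get_offline_features

# Pick the features that should become training columns.
features = [
    transactions_fset.get_feature("transaction_amount_3d_sum"),
    transactions_fset.get_feature("transaction_amount_3d_avg"),
]

# The point-in-time join matches each observation row with feature values
# as of that row's timestamp, which prevents leakage of future data.
training_df = get_offline_features(
    features=features,
    observation_data=obs_df,
    timestamp_column="timestamp",
)
```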
-This tutorial built the training data with features from feature store. Optional: you can save the training data to storage for later use, or you can run model training on it directly.
+This tutorial built the training data with features from the feature store. Optionally, you can save the training data to storage for later use, or you can run model training on it directly.
-## Cleanup
+## Clean up
-The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources
+The [fourth tutorial in the series](./tutorial-enable-recurrent-materialization-run-batch-inference.md#clean-up) describes how to delete the resources.
## Next steps
-* [Part 2: enable materialization and back fill feature data](./tutorial-enable-materialization-backfill-data.md)
-* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md)
-* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md)
-* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md)
-* Reference: [YAML reference](./reference-yaml-overview.md)
+* Go to the next tutorial in the series: [Enable materialization and backfill feature data](./tutorial-enable-materialization-backfill-data.md).
+* Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md).
+* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md).
+* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md).
+* View the [YAML reference](./reference-yaml-overview.md).
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml.md
Classification is a common machine learning task. Classification is a type of su
The main goal of classification models is to predict which categories new data will fall into, based on what the model learned from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection. Learn more and see an example at [Create a classification model with automated ML (v1)](../tutorial-first-experiment-automated-ml.md).
-See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
+See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn)
### Regression
Similar to classification, regression tasks are also a common supervised learnin
Unlike classification, where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among the independent predictor variables by estimating how one variable affects the others. An example is predicting automobile price based on features like gas mileage and safety rating. Learn more and see an example of [regression with automated machine learning (v1)](how-to-auto-train-models-v1.md).
-See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization),
+See an example of regression and automated machine learning for predictions in this Python notebook: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization).
### Time-series forecasting
Advanced forecasting configuration includes:
* rolling window aggregate features
-See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
+See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb).
### Computer vision
See the [how-to (v1)](how-to-configure-auto-train.md#ensemble-configuration) for
With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](../concept-onnx.md).
-See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX (v1)](../how-to-configure-auto-train.md#supported-algorithms).
+See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX (v1)](../how-to-configure-auto-train.md#supported-algorithms).
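In outline, the flow looks like the following sketch for a classification run; `train_data`, the compute name, and the experiment name are placeholders for your own resources:

```python
from azureml.core import Dataset, Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
train_data = Dataset.get_by_name(ws, "my-training-data")  # placeholder dataset

# Restrict AutoML to algorithms that can be exported to ONNX.
automl_config = AutoMLConfig(
    task="classification",
    training_data=train_data,
    label_column_name="label",          # placeholder label column
    compute_target="cpu-cluster",       # placeholder compute
    enable_onnx_compatible_models=True,
)

# Submit the run, then retrieve the best model in ONNX format.
remote_run = Experiment(ws, "automl-onnx").submit(automl_config)
remote_run.wait_for_completion()
best_run, onnx_model = remote_run.get_output(return_onnx_model=True)
```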
The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](../how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html).
How-to articles provide additional detail into what functionality automated ML o
### Jupyter notebook samples
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml).
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml).
### Python SDK reference
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-forecast.md
The table shows resulting feature engineering that occurs when window aggregatio
![target rolling window](../media/how-to-auto-train-forecast/target-roll.svg)
-View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
+View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
### Short series handling
mse = mean_squared_error(
rolling_forecast_df[fitted_model.actual_column_name], rolling_forecast_df[fitted_model.forecast_column_name])
```
-In this sample, the step size for the rolling forecast is set to one which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+In this sample, the step size for the rolling forecast is set to one, which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples, see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
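A minimal sketch of that call, assuming `fitted_model` is the fitted forecasting pipeline and `X_test`/`y_test` hold the test set:

```python
# step=1 advances the forecast origin by one period per iteration,
# producing the rolling_forecast_df evaluated above.
rolling_forecast_df = fitted_model.rolling_forecast(X_test, y_test, step=1)
```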
### Prediction into the future
-The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
In the following example, you first replace all values in `y_pred` with `NaN`. The forecast origin is at the end of training data in this case. However, if you replaced only the second half of `y_pred` with `NaN`, the function would leave the numerical values in the first half unmodified, but forecast the `NaN` values in the second half. The function returns both the forecasted values and the aligned features.
fitted_model.forecast_quantiles(
test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8))
```
-You can calculate model metrics like, root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the models performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example.
+You can calculate model metrics like root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the model's performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example.
After the overall model accuracy has been determined, the most realistic next step is to use the model to forecast unknown future values.
The following diagram shows the workflow for the many models solution.
![Many models concept diagram](../media/how-to-auto-train-forecast/many-models.svg)
-The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example
+The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example.
```python
from azureml.train.automl.runtime._many_models.many_models_parameters import ManyModelsTrainParameters
To further visualize this, the leaf levels of the hierarchy contain all the time
The hierarchical time series solution is built on top of the Many Models Solution and shares a similar configuration setup.
-The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb), for an end to end example.
+The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb) for an end-to-end example.
```python
hts_parameters = HTSTrainParameters(
## Example notebooks
-See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including:
+See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including:
-* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
-* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
-* [configurable lags](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
-* [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
+* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
+* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
+* [configurable lags](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
+* [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
## Next steps
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models.md
Automated ML supports model training for computer vision tasks like image classi
To install the SDK, you can either:

 * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. For more information, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md).
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
> [!NOTE]
> Only Python 3.7 and 3.8 are compatible with automated ML support for computer vision tasks.
For a detailed description on task specific hyperparameters, please refer to [Hy
If you want to use tiling and control tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio`, and `tile_predictions_nms_thresh`. For more details on these parameters, see [Train a small object detection model using AutoML](../how-to-use-automl-small-object-detect.md).

## Example notebooks
-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml). Check the folders with the 'image-' prefix for samples specific to building computer vision models.
## Next steps
machine-learning How To Auto Train Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md
If you don't have an Azure subscription, create a free account before you begin.
This article is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](how-to-configure-environment.md). To get the required packages,
-* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
+* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
* Run `pip install azureml-opendatasets azureml-widgets` to get the required packages.

## Download and prepare data
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](../
To install the SDK, you can either:

 * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md) for more information.
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
[!INCLUDE [automl-sdk-version](../includes/machine-learning-automl-sdk-version.md)]
Doing so, schedules distributed training of the NLP models and automatically sca
## Example notebooks

See the sample notebooks for detailed code examples for each NLP task.
-* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb)
+* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb)
* [Multi-label text classification](
-https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb)
-* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb)
+https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb)
+* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb)
## Next steps

+ Learn more about [how and where to deploy a model](../how-to-deploy-online-endpoints.md).
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-features.md
In order to invoke BERT, set `enable_dnn: True` in your automl_settings and use
Automated ML takes the following steps for BERT.
-1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb).
+1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb).
2. **Concatenate all text columns into a single text column**, hence the `StringConcatTransformer` in the final model.
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train.md
For this article you need,
To install the SDK, you can either:

 * Create a compute instance, which automatically installs the SDK and is preconfigured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md) for more information.
- * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
+ * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK.
[!INCLUDE [automl-sdk-version](../includes/machine-learning-automl-sdk-version.md)]
Use&nbsp;data&nbsp;streaming&nbsp;algorithms <br> [(studio UI experiments)](../h
Next, determine where the model will be trained. An automated ML training experiment can run on the following compute options.
- * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (i.e. seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time, the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
+ * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short training runs (that is, seconds or a couple of minutes per child run), training on your local computer might be a better choice. There's no setup time; the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example.
* **Choose a remote ML compute cluster**: If you're training with larger datasets, as in production training that creates models needing longer runs, remote compute provides much better end-to-end time performance because `AutoML` parallelizes training across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure adds around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs aren't yet up and running. [Azure Machine Learning Managed Compute](../concept-compute-target.md#azure-machine-learning-compute-managed) is a managed service that lets you train machine learning models on clusters of Azure virtual machines. Compute instance is also supported as a compute target.
- * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
+ * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks.
Consider these factors when choosing your compute target:
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-databricks-automl-environment.md
In the AutoML config, when using Azure Databricks, add the following parameters:
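For illustration, a minimal hedged sketch of an `AutoMLConfig` with the Databricks-specific settings; `sc` is the Spark context a Databricks notebook provides, and the dataset, metric, and iteration values are placeholders, not the article's exact configuration:

```python
from azureml.train.automl import AutoMLConfig

# sc is the SparkContext available in a Databricks notebook;
# train_data is a hypothetical tabular dataset registered in the workspace.
automl_config = AutoMLConfig(
    task='classification',
    spark_context=sc,                 # hand AutoML the Databricks Spark context
    max_concurrent_iterations=4,      # typically at most the number of worker nodes
    iteration_timeout_minutes=10,
    training_data=train_data,
    label_column_name='label',
    primary_metric='accuracy',
)
```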
## ML notebooks that work with Azure Databricks
Try it out:
-+ While many sample notebooks are available, **only [these sample notebooks](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) work with Azure Databricks.**
++ While many sample notebooks are available, **only [these sample notebooks](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-databricks) work with Azure Databricks.** + Import these samples directly from your workspace. See below: ![Select Import](../media/how-to-configure-environment/azure-db-screenshot.png)
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-and-where.md
For more information on `az ml model register`, see the [reference documentation
You can register a model by providing the local path of the model. You can provide the path of either a folder or a single file on your local machine. <!-- python nb call -->
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=register-model-from-local-file-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=register-model-from-local-file-code)]
To include multiple files in the model registration, set `model_path` to the path of a folder that contains the files.
The two things you need to accomplish in your entry script are:
For your initial deployment, use a dummy entry script that prints the data it receives. Save this file as `echo_score.py` inside of a directory called `source_dir`. This dummy script returns the data you send to it, so it doesn't use the model. But it is useful for testing that the scoring script is running.
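A hedged sketch of what such an echo script can look like (the exact body is elided in this digest, so treat the details as assumptions):

```python
import json

def init():
    # Nothing to load: the dummy script doesn't use a model.
    pass

def run(data):
    # Echo back whatever JSON payload the service received.
    payload = json.loads(data)
    return f"received: {payload}"
```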
You can use any [Azure Machine Learning inference curated environments](../conce
A minimal inference configuration can be written as:
Save this file with the name `dummyinferenceconfig.json`.
The following example demonstrates how to create a minimal environment with no pip dependencies, using the dummy scoring script you defined above.
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=inference-configuration-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=inference-configuration-code)]
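As a hedged sketch of the SDK v1 equivalent (the environment name is arbitrary, and the script names follow the dummy script above):

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig

# A bare environment with no extra pip dependencies (sketch only).
env = Environment(name="project_environment")
env.python.conda_dependencies = CondaDependencies()

inference_config = InferenceConfig(
    environment=env,
    source_directory="./source_dir",   # folder holding the entry script
    entry_script="echo_score.py",      # the dummy script saved earlier
)
```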
For more information on environments, see [Create and manage environments for training and deployment](../how-to-use-environments.md).
For more information, see the [deployment schema](reference-azure-machine-learni
The following Python code demonstrates how to create a local deployment configuration:
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)]
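A hedged sketch of that configuration (the port number is an arbitrary choice):

```python
from azureml.core.webservice import LocalWebservice

# Serve the scoring container on a local port.
deployment_config = LocalWebservice.deploy_configuration(port=6789)
```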
az ml model deploy -n myservice \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-code)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-print-logs)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-print-logs)]
For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
curl -v -X POST -H "content-type:application/json" \
# [Python SDK](#tab/python) <!-- python nb call -->
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-into-model-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-into-model-code)]
curl -v -X POST -H "content-type:application/json" \
Now it's time to actually load your model. First, modify your entry script: Save this file as `score.py` inside `source_dir`.
Notice the use of the `AZUREML_MODEL_DIR` environment variable to locate your re
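A hedged sketch of such an entry script; the model filename, `joblib` loader, and JSON payload shape are assumptions for illustration, not the article's exact code:

```python
import json
import os

import joblib

def init():
    global model
    # AZUREML_MODEL_DIR points at the root of the registered model files.
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")  # hypothetical filename
    model = joblib.load(model_path)

def run(data):
    try:
        payload = json.loads(data)["data"]
        result = model.predict(payload)
        return result.tolist()
    except Exception as e:
        return {"error": str(e)}
```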
[!INCLUDE [cli v1](../includes/machine-learning-cli-v1.md)] Save this file as `inferenceconfig.json`
az ml model deploy -n myservice \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-code)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-print-logs)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-print-logs)]
For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
curl -v -X POST -H "content-type:application/json" \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=send-post-request-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=send-post-request-code)]
Change your deploy configuration to correspond to the compute target you've chos
The options available for a deployment configuration differ depending on the compute target you choose. Save this file as `re-deploymentconfig.json`.
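If you're working in the Python SDK instead of the CLI JSON file, the equivalent is a hedged sketch like this (Azure Container Instances as the target; CPU and memory values are illustrative):

```python
from azureml.core.webservice import AciWebservice

# A small ACI deployment configuration for cloud redeployment.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=0.5)
```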
For more information, see [this reference](reference-azure-machine-learning-cli.
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-on-cloud-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-on-cloud-code)]
az ml service get-logs -n myservice \
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-code)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-print-logs)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-print-logs)]
For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice).
For more information, see the documentation for [Model.deploy()](/python/api/azu
When you deploy remotely, you may have key authentication enabled. The example below shows how to get your service key with Python in order to make an inference request.
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-web-service-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-web-service-code)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-webservice-print-logs)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-webservice-print-logs)]
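A hedged sketch of that pattern (the `service` object and the input payload shape are assumed from the earlier deployment steps):

```python
import json

import requests

# Fetch the primary key for the deployed service and call its scoring URI.
primary_key, _ = service.get_keys()
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {primary_key}",
}
payload = json.dumps({"data": [[1, 2, 3, 4]]})   # hypothetical input shape
response = requests.post(service.scoring_uri, data=payload, headers=headers)
print(response.json())
```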
The following table describes the different service states:
[!INCLUDE [cli v1](../includes/machine-learning-cli-v1.md)]
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-resource-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-resource-code)]
```azurecli-interactive
az ml service delete -n myservice
```
Read more about [deleting a webservice](/cli/azure/ml(v1)/computetarget/create#a
# [Python SDK](#tab/python)
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=delete-resource-code)]
+[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=delete-resource-code)]
To delete a deployed web service, use `service.delete()`. To delete a registered model, use `model.delete()`.
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-inference-onnx-automl-image-models.md
arguments = ['--model_name', 'maskrcnn_resnet50_fpn', # enter the maskrcnn mode
-Download and keep the `ONNX_batch_model_generator_automl_for_images.py` file in the current directory and submit the script. Use [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) to submit the script `ONNX_batch_model_generator_automl_for_images.py` available in the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml), to generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit this script to generate and save the ONNX model to the outputs directory.
+Download the `ONNX_batch_model_generator_automl_for_images.py` file from the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml) and keep it in the current directory. Use [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) to submit the script and generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit this script to generate and save the ONNX model to the outputs directory.
```python
script_run_config = ScriptRunConfig(source_directory='.', script='ONNX_batch_model_generator_automl_for_images.py',
Every ONNX model has a predefined set of input and output formats.
# [Multi-class image classification](#tab/multi-class)
-This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
+This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
### Input format
The output is an array of logits for all the classes/labels.
# [Multi-label image classification](#tab/multi-label)
-This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
+This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
### Input format
The output is an array of logits for all the classes/labels.
# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
The following table describes boxes, labels and scores returned for each sample
# [Object detection with YOLO](#tab/object-detect-yolo)
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
### Input format
Each cell in the list indicates box detections of a sample with shape `(n_boxes,
# [Instance segmentation](#tab/instance-segmentation)
-For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
+For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
>[!IMPORTANT]
> Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only.
batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
batch, channel, height_onnx, width_onnx
```
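For context, a hedged sketch of end-to-end `onnxruntime` usage, assuming the fixed-batch model exported above (the `model.onnx` filename is a placeholder, and the dimensions are assumed to be concrete integers):

```python
import numpy as np
import onnxruntime

# Load the exported model and inspect its expected NCHW input shape.
session = onnxruntime.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape

# Run inference on dummy data matching that shape.
dummy = np.random.rand(batch, channel, height_onnx, width_onnx).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
```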
-For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
```python
import glob
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images.md
If you already have a data labeling project and you want to use that data, you c
## Use conversion scripts
-If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml).
+If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml).
If your data doesn't follow any of the previously mentioned formats, you can use your own script to generate JSON Lines files based on schemas defined in [Schema for JSONL files for AutoML image experiments](../reference-automl-images-schema.md).
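As an illustration, a minimal hedged sketch of such a script for multi-class image classification; the `image_url` and `label` field names follow the schema article, and the datastore paths and labels are hypothetical:

```python
import json

# Hypothetical (image, label) pairs; image_url must point at data the workspace can read.
samples = [
    ("AmlDatastore://workspaceblobstore/fridge/images/1.jpg", "milk_bottle"),
    ("AmlDatastore://workspaceblobstore/fridge/images/2.jpg", "carton"),
]

# Write one JSON object per line, as JSONL requires.
with open("train_annotations.jsonl", "w") as f:
    for image_url, label in samples:
        f.write(json.dumps({"image_url": image_url, "label": label}) + "\n")
```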
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-distributed-gpu.md
Make sure your code follows these tips:
### Horovod example
-* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
+* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod)
### DeepSpeed
Make sure your code follows these tips:
### DeepSpeed example
-* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/deepspeed/cifar)
+* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/deepspeed/cifar)
### Environment variables from Open MPI
run = Experiment(ws, 'experiment_name').submit(run_config)
### PyTorch per-process-launch example
-- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
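For context, a hedged SDK v1 sketch of a per-process launch using `PyTorchConfiguration`; the compute target name, source layout, and process counts are hypothetical:

```python
from azureml.core import Experiment, ScriptRunConfig, Workspace
from azureml.core.runconfig import PyTorchConfiguration

ws = Workspace.from_config()

# Launch 4 processes per node across 2 nodes (8 workers total).
distr_config = PyTorchConfiguration(process_count=8, node_count=2)

run_config = ScriptRunConfig(
    source_directory="src",          # hypothetical layout
    script="train.py",
    compute_target="gpu-cluster",    # hypothetical cluster name
    distributed_job_config=distr_config,
)
run = Experiment(ws, "pytorch-distributed").submit(run_config)
```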
### <a name="per-node-launch"></a> Using torch.distributed.launch (per-node-launch)
run = Experiment(ws, 'experiment_name').submit(run_config)
### PyTorch per-node-launch example
-- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/pytorch/cifar-distributed)
### PyTorch Lightning
TF_CONFIG='{
### TensorFlow example
-- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/tensorflow/mnist-distributed)
+- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/tensorflow/mnist-distributed)
## <a name="infiniband"></a> Accelerating distributed GPU training with InfiniBand
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md
ws = Workspace.from_config()
### Get the data
-The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
+The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb).
### Prepare training script
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-custom-image.md
print(compute_target.get_status().serialize())
## Configure your training job
-For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning.
+For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning.
Create a `ScriptRunConfig` resource to configure your job for running on the desired [compute target](how-to-set-up-training-targets.md).
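As a hedged sketch of how the pieces fit together (the Docker image name, directory layout, and experiment name are illustrative, and `compute_target` is assumed from the earlier setup):

```python
from azureml.core import Environment, Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()

# Point the environment at a custom Docker image; with user-managed
# dependencies, Azure Machine Learning uses the image as-is.
custom_env = Environment("custom-image-env")
custom_env.docker.base_image = "fastdotai/fastai2:latest"   # hypothetical image
custom_env.python.user_managed_dependencies = True

src = ScriptRunConfig(
    source_directory="fastai-example",   # hypothetical folder containing train.py
    script="train.py",
    compute_target=compute_target,       # the cluster created earlier
    environment=custom_env,
)
run = Experiment(ws, "custom-image-training").submit(src)
```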
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-automl-small-object-detect.md
The following are the parameters you can use to control the tiling feature.
## Example notebooks
-See the [object detection sample notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) for detailed code examples of setting up and training an object detection model.
+See the [object detection sample notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) for detailed code examples of setting up and training an object detection model.
>[!NOTE]
> All images in this article are made available in accordance with the permitted use section of the [MIT licensing agreement](https://choosealicense.com/licenses/mit/).
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models.md
You'll write code using the Python SDK in this tutorial and learn the following
* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace.
-* Download and unzip the [**odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook.
+* Download and unzip the [**odFridgeObjects.zip**](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an XML file. Each XML file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook.
-This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](how-to-configure-environment.md). To get the required packages,
+This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](how-to-configure-environment.md). To get the required packages,
* Run `pip install azureml`
-* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment)
+* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment)
## Compute target setup
In this automated machine learning tutorial, you did the following tasks:
* [Learn how to set up AutoML to train computer vision models with Python](../how-to-auto-train-image-models.md).
* [Learn how to configure incremental training on computer vision models](../how-to-auto-train-image-models.md#incremental-training-optional).
* See [what hyperparameters are available for computer vision tasks](../reference-automl-images-hyperparameters.md).
-* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models.
+* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml). Check the folders with the 'image-' prefix for samples specific to building computer vision models.
> [!NOTE]
> Use of the fridge objects dataset is available through the license under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md
The above code specifies a dataset that is based on the output of a pipeline ste
The code that you've executed so far has created and controlled Azure resources. Now it's time to write code that does the first step in the domain.
-If you're following along with the example in the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
+If you're following along with the example in the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`.
If you're working from scratch, create a subdirectory called `keras-mnist-fashion/`. Create a new file, add the following code to it, and name the file `prepare.py`.
Once the data has been converted from the compressed format to CSV files, it can
With larger pipelines, it's a good practice to put each step's source code in a separate directory (`src/prepare/`, `src/train/`, and so on), but for this tutorial, just use or create the file `train.py` in the same `keras-mnist-fashion/` source directory. Most of this code should be familiar to ML developers:
managed-grafana Troubleshoot Managed Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md
This issue can happen if:
1. Your account is a foreign account: the Grafana instance isn't registered in your home tenant.
1. If you recently addressed this problem and have been assigned a sufficient Grafana role, you may need to wait for some time before the cookie expires and gets refreshed. This process normally takes 5 minutes. If in doubt, delete all cookies or start a private browser session to force a fresh cookie with new role information.
+## Authorized users don't show up in Grafana Users configuration
+
+After you add a user to a Managed Grafana built-in RBAC role, such as Grafana Viewer, you don't see that user listed in Grafana's **Configuration** UI page right away. This behavior is *by design*. Managed Grafana's RBAC roles are stored in Azure Active Directory (Azure AD). For performance reasons, Managed Grafana doesn't automatically synchronize users assigned to the built-in roles to every instance: there's no notification for changes in RBAC assignments, and querying Azure AD periodically for current assignments would add significant load to the Azure AD service.
+
+There's no "fix" for this in itself. After a user signs in to your Grafana instance, the user shows up in the **Users** tab under Grafana **Configuration**, along with the corresponding role that user has been assigned.
+ ## Azure Managed Grafana dashboard panel doesn't display any data
One or several Managed Grafana dashboard panels show no data.
managed-instance-apache-cassandra Best Practice Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/best-practice-performance.md
For more information refer to [Virtual Machine and disk performance](../virtual-
### Network performance
-In most cases network performance is sufficient. However, if you are frequently streaming data (such as frequent horizontal scale-up/scale down) or there are huge ingress/egress data movements, this can become a problem. You may need to evaluate the network performance of your SKU. For example, the `Standard_DS14_v2` SKU supports 12,000 Mb/s, compare this to the byte-in/out in the metrics:
+In most cases network performance is sufficient. However, if you're frequently streaming data (such as frequent horizontal scale-up/scale-down) or there are huge ingress/egress data movements, this can become a problem. You may need to evaluate the network performance of your SKU. For example, the `Standard_DS14_v2` SKU supports 12,000 Mbps; compare this to the byte-in/out in the metrics:
:::image type="content" source="./media/best-practice-performance/metrics-network.png" alt-text="Screenshot of network metrics." lightbox="./media/best-practice-performance/metrics-network.png" border="true":::
If you only see the network elevated for a small number of nodes, you might have
### Too many connected clients
-Deployments should be planned and provisioned to support the maximum number of parallel requests required for the desired latency of an application. For a given deployment, introducing more load to the system above a minimum threshold increases overall latency. Monitor the number of connected clients to ensure this does not exceed tolerable limits.
+Deployments should be planned and provisioned to support the maximum number of parallel requests required for the desired latency of an application. For a given deployment, introducing more load to the system above a minimum threshold increases overall latency. Monitor the number of connected clients to ensure this doesn't exceed tolerable limits.
:::image type="content" source="./media/best-practice-performance/metrics-connections.png" alt-text="Screenshot of connected client metrics." lightbox="./media/best-practice-performance/metrics-connections.png" border="true":::
### Disk space
-In most cases, there is sufficient disk space as default deployments are optimized for IOPS, which leads to low utilization of the disk. Nevertheless, we advise occasionally reviewing disk space metrics. Cassandra accumulates a lot of disk and then reduces it when compaction is triggered. Hence it is important to review disk usage over longer periods to establish trends - like compaction unable to recoup space.
+In most cases, there's sufficient disk space as default deployments are optimized for IOPS, which leads to low utilization of the disk. Nevertheless, we advise occasionally reviewing disk space metrics. Cassandra accumulates a lot of disk space and then reduces it when compaction is triggered. Hence, it's important to review disk usage over longer periods to establish trends, such as compaction being unable to recoup space.
> [!NOTE]
> In order to ensure available space for compaction, disk utilization should be kept to around 50%.
Our default formula assigns half the VM's memory to the JVM with an upper limit
In most cases, memory is reclaimed effectively by the Java garbage collector. However, if the CPU is often above 80%, there aren't enough CPU cycles left for the garbage collector. So, any CPU performance problems should be addressed before memory problems.
-If the CPU hovers below 70%, and the garbage collection isn't able to reclaim memory, you might need more JVM memory. This is especially the case if you are on a SKU with limited memory. In most cases, you will need to review your queries and client settings and reduce `fetch_size` along with what is chosen in `limit` within your CQL query.
+If the CPU hovers below 70%, and the garbage collection isn't able to reclaim memory, you might need more JVM memory. This is especially the case if you're on a SKU with limited memory. In most cases, you'll need to review your queries and client settings and reduce `fetch_size` along with what is chosen in `limit` within your CQL query.
If you indeed need more memory, you can:
You might encounter this warning in the [CassandraLogs](monitor-clusters.md#crea
`Writing large partition <table> (105.426MiB) to sstable <file>`
-This indicates a problem in the data model. Here is a [stack overflow article](https://stackoverflow.com/questions/74024443/how-do-i-analyse-and-solve-writing-large-partition-warnings-in-cassandra) that goes into more detail. This can cause severe performance issues and needs to be addressed.
+This indicates a problem in the data model. Here's a [Stack Overflow article](https://stackoverflow.com/questions/74024443/how-do-i-analyse-and-solve-writing-large-partition-warnings-in-cassandra) that goes into more detail. This can cause severe performance issues and needs to be addressed.
+
+## Specialized optimizations
+### Compression
+Cassandra allows the selection of an appropriate compression algorithm when a table is created (see [Compression](https://cassandra.apache.org/doc/latest/cassandra/operating/compression.html)). The default is LZ4, which is excellent for throughput and CPU but consumes more space on disk. Using Zstd (Cassandra 4.0 and up) saves about 12% of space with minimal CPU overhead.
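For illustration, a hedged sketch of switching an existing table to Zstd using the Python `cassandra-driver` (the contact point, keyspace, and table are placeholders; authentication and TLS options for a managed cluster are omitted):

```python
from cassandra.cluster import Cluster

# Connect to a node in the cluster (address and keyspace are hypothetical).
cluster = Cluster(["10.0.0.4"])
session = cluster.connect("my_keyspace")

# Switch an existing table to Zstd compression (Cassandra 4.0 and up).
session.execute(
    "ALTER TABLE my_table WITH compression = {'class': 'ZstdCompressor'}"
)
```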
+
+### Optimizing memtable heap space
+Our default is to use 1/4 of the JVM heap for [memtable_heap_space](https://cassandra.apache.org/doc/latest/cassandra/configuration/cass_yaml_file.html#memtable_heap_space) in the cassandra.yaml. For write-oriented applications, or on SKUs with small memory, this can lead to frequent flushing and fragmented sstables, thus requiring more compaction. In such cases, increasing it to at least 4048 might be beneficial, but this requires careful benchmarking to make sure other operations (for example, reads) aren't affected.
## Next steps
In this article, we laid out some best practices for optimal performance. You can now start working with the cluster:
> [!div class="nextstepaction"]
-> [Create a cluster using Azure Portal](create-cluster-portal.md)
+> [Create a cluster using Azure Portal](create-cluster-portal.md)
managed-instance-apache-cassandra Monitor Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/monitor-clusters.md
Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorup
## Audit whitelist
> [!NOTE]
-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
By default, audit logging creates a record for every login attempt and CQL query. The result can be rather overwhelming and increase overhead. You can use the audit whitelist feature in Cassandra 3.11 to set what operations *don't* create an audit record. The audit whitelist feature is enabled by default in Cassandra 3.11. To learn how to configure your whitelist, see [Role-based whitelist management](https://github.com/Ericsson/ecaudit/blob/release/c2.2/doc/role_whitelist_management.md).
migrate How To Automate Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-automate-migration.md
ms. Previously updated : 11/15/2022 Last updated : 05/11/2023
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
ms. Previously updated : 08/18/2022 Last updated : 05/11/2023
mysql Concepts Networking Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-public.md
Granting permission to an IP address is called a firewall rule. If a connection
You can consider enabling connections from all Azure data center IP addresses if a fixed outgoing IP address isn't available for your Azure service.
-> [!IMPORTANT]
-> The **Allow public access from Azure services and resources within Azure** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, ensure your login and user permissions limit access to only authorized users.
+> [!IMPORTANT]
+> - The **Allow public access from Azure services and resources within Azure** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, ensure your login and user permissions limit access to only authorized users.
+> - You can create a maximum of 500 IP firewall rules.
+>
Learn how to enable and manage public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md).
Consider the following points when access to the Microsoft Azure Database for My
- Learn how to enable public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md)
- Learn how to [use TLS](how-to-connect-tls-ssl.md)
mysql How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md
Last updated 11/21/2022 -+
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-portal.md
In this tutorial, you learn how to:
- Configure data encryption for restoration.
- Configure data encryption for replica servers.
+ > [!NOTE]
+> Azure Key Vault access configuration now supports two types of permission models: [Azure role-based access control](../../role-based-access-control/overview.md) and [Vault access policy](../../key-vault/general/assign-access-policy.md). This tutorial describes configuring data encryption for Azure Database for MySQL - Flexible Server using the Vault access policy model. However, you can choose to use Azure RBAC as the permission model to grant access to Azure Key Vault. To do so, you need any built-in or custom role that has the following three permissions, assigned through "role assignments" using the Access control (IAM) tab of the key vault: a) KeyVault/vaults/keys/wrap/action b) KeyVault/vaults/keys/unwrap/action c) KeyVault/vaults/keys/read
+++
## Prerequisites
- An Azure account with an active subscription.
After your Azure Database for MySQL - Flexible Server is encrypted with a custom
- [Customer managed keys data encryption](concepts-customer-managed-key.md)
- [Data encryption with Azure CLI](how-to-data-encryption-cli.md)
network-watcher Required Rbac Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/required-rbac-permissions.md
Title: Azure RBAC permissions required to use Azure Network Watcher capabilities description: Learn which Azure role-based access control (Azure RBAC) permissions are required to use Azure Network Watcher capabilities.- + Previously updated : 04/03/2023-- Last updated : 08/18/2023 # Azure role-based access control permissions required to use Network Watcher capabilities
Azure role-based access control (Azure RBAC) enables you to assign only the spec
| Microsoft.Network/networkWatchers/write | Create or update a network watcher |
| Microsoft.Network/networkWatchers/delete | Delete a network watcher |
-## NSG flow logs
+## Flow logs
| Action | Description |
| | - |
| Microsoft.Network/networkWatchers/configureFlowLog/action | Configure a flow log |
| Microsoft.Network/networkWatchers/queryFlowLogStatus/action | Query status for a flow log |
+| Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action | Fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account |
## Connection troubleshoot
Microsoft.Network/networkWatchers/packetCaptures/queryStatus/read | View the sta
Network Watcher capabilities also require the following actions:
-| Action(s) | Description |
-| | - |
-| Microsoft.Authorization/\*/Read | Used to fetch Azure role assignments and policy definitions |
-| Microsoft.Resources/subscriptions/resourceGroups/Read | Used to enumerate all the resource groups in a subscription |
-| Microsoft.Storage/storageAccounts/Read | Used to get the properties for the specified storage account |
-| Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action| Used to fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account |
-| Microsoft.Compute/virtualMachines/Read, </br> Microsoft.Compute/virtualMachines/Write| Used to log in to the VM, do a packet capture and upload it to storage account|
-| Microsoft.Compute/virtualMachines/extensions/Read </br> Microsoft.Compute/virtualMachines/extensions/Write| Used to check if Network Watcher extension is present, and install if necessary |
-| Microsoft.Compute/virtualMachineScaleSets/Read, </br> Microsoft.Compute/virtualMachineScaleSets/Write| Used to access virtual machine scale sets, do packet captures and upload them to storage account|
-| Microsoft.Compute/virtualMachineScaleSets/extensions/Read, </br> Microsoft.Compute/virtualMachineScaleSets/extensions/Write| Used to check if Network Watcher extension is present, and install if necessary |
-| Microsoft.Insights/alertRules/* | Used to set up metric alerts |
-| Microsoft.Support/* | Used to create and update support tickets from Network Watcher |
+| Action(s) | Description |
+| | - |
+| Microsoft.Authorization/\*/Read | Fetch Azure role assignments and policy definitions |
+| Microsoft.Resources/subscriptions/resourceGroups/Read | Enumerate all the resource groups in a subscription |
+| Microsoft.Storage/storageAccounts/Read | Get the properties for the specified storage account |
+| Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action | Fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account |
+| Microsoft.Compute/virtualMachines/Read, </br> Microsoft.Compute/virtualMachines/Write| Log in to the VM, do a packet capture and upload it to storage account |
+| Microsoft.Compute/virtualMachines/extensions/Read, </br> Microsoft.Compute/virtualMachines/extensions/Write | Check if Network Watcher extension is present, and install if necessary |
+| Microsoft.Compute/virtualMachineScaleSets/Read, </br> Microsoft.Compute/virtualMachineScaleSets/Write | Access virtual machine scale sets, do packet captures and upload them to storage account |
+| Microsoft.Compute/virtualMachineScaleSets/extensions/Read, </br> Microsoft.Compute/virtualMachineScaleSets/extensions/Write| Check if Network Watcher extension is present, and install if necessary |
+| Microsoft.Insights/alertRules/* | Set up metric alerts |
+| Microsoft.Support/* | Create and update support tickets from Network Watcher |
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
Title: Traffic analytics schema and data aggregation
description: Learn about schema and data aggregation in Azure Network Watcher traffic analytics to analyze flow logs. --- Previously updated : 04/11/2023 -++ Last updated : 08/16/2023 # Schema and data aggregation in Azure Network Watcher traffic analytics
Traffic analytics is a cloud-based solution that provides visibility into user a
## Data aggregation
+# [**NSG flow logs**](#tab/nsg)
- All flow logs at a network security group between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t` are captured at one-minute intervals as blobs in a storage account.
- Default processing interval of traffic analytics is 60 minutes, meaning that every hour, traffic analytics picks blobs from the storage account for aggregation. However, if a processing interval of 10 minutes is selected, traffic analytics will instead pick blobs from the storage account every 10 minutes.
-- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol` (TCP or UDP) (Note: source port is excluded for aggregation) are clubbed into a single flow by traffic analytics.
-- This single record is decorated (details in the section below) and ingested in Log Analytics by traffic analytics. This process can take up to 1 hour max.
+- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol (TCP or UDP)` are clubbed into a single flow by traffic analytics (Note: source port is excluded for aggregation).
+- This single record is decorated (details in the section below) and ingested in Azure Monitor logs by traffic analytics. This process can take up to 1 hour.
- `FlowStartTime_t` field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`.
-- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen by the network security group, but in Log Analytics user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.
+- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen by the network security group, but in Azure Monitor logs, the user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.
+
+# [**VNet flow logs (preview)**](#tab/vnet)
+
+- All flow logs between `FlowIntervalStartTime` and `FlowIntervalEndTime` are captured at one-minute intervals as blobs in a storage account.
+- Default processing interval of traffic analytics is 60 minutes, meaning that every hour, traffic analytics picks blobs from the storage account for aggregation. However, if a processing interval of 10 minutes is selected, traffic analytics will instead pick blobs from the storage account every 10 minutes.
+- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol (TCP or UDP)` are clubbed into a single flow by traffic analytics (Note: source port is excluded for aggregation).
+- This single record is decorated (details in the section below) and ingested in Azure Monitor logs by traffic analytics. This process can take up to 1 hour.
+- `FlowStartTime` field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`.
+- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen, but in Azure Monitor logs, the user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.
++ The following query helps you look at all subnets interacting with non-Azure public IPs in the last 30 days.
TableWithBlobId
The previous query constructs a URL to access the blob directly. The URL with placeholders is as follows:
```
-https://{saName}@insights-logs-networksecuritygroupflowevent/resoureId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+https://{storageAccountName}@insights-logs-networksecuritygroupflowevent/resoureId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{networkSecurityGroupName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
```
## Traffic analytics schema
+Traffic analytics is built on top of Azure Monitor logs, so you can run custom queries on data decorated by traffic analytics and set alerts.
+
+The following table lists the fields in the schema and what they signify.
+
+# [**NSG flow logs**](#tab/nsg)
+
+| Field | Format | Comments |
+| -- | | -- |
+| **TableName** | AzureNetworkAnalytics_CL | Table for traffic analytics data. |
+| **SubType_s** | FlowLog | Subtype for the flow logs. Use only **FlowLog**, other values of **SubType_s** are for internal use. |
+| **FASchemaVersion_s** | 2 | Schema version. Doesn't reflect NSG flow log version. |
+| **TimeProcessed_t** | Date and time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
+| **FlowIntervalStartTime_t** | Date and time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
+| **FlowIntervalEndTime_t** | Date and time in UTC | Ending time of the flow log processing interval. |
+| **FlowStartTime_t** | Date and time in UTC | First occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. This flow gets aggregated based on aggregation logic. |
+| **FlowEndTime_t** | Date and time in UTC | Last occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as **B** in the raw flow record). |
+| **FlowType_s** | - IntraVNet <br> - InterVNet <br> - S2S <br> - P2S <br> - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow <br> - Unknown Private <br> - Unknown | See [Notes](#notes) for definitions. |
+| **SrcIP_s** | Source IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **DestIP_s** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **VMIP_s** | IP of the VM | Used for AzurePublic and ExternalPublic flows. |
+| **DestPort_d** | Destination Port | Port at which traffic is incoming. |
+| **L4Protocol_s** | - T <br> - U | Transport Protocol. T = TCP <br> U = UDP. |
+| **L7Protocol_s** | Protocol Name | Derived from destination port. |
+| **FlowDirection_s** | - I = Inbound <br> - O = Outbound | Direction of the flow: in or out of network security group per flow log. |
+| **FlowStatus_s** | - A = Allowed <br> - D = Denied | Status of flow whether allowed or denied by the network security group per flow log. |
+| **NSGList_s** | \<SUBSCRIPTIONID>\/<RESOURCEGROUP_NAME>\/<NSG_NAME> | Network security group associated with the flow. |
+| **NSGRules_s** | \<Index value 0>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | Network security group rule that allowed or denied this flow. |
+| **NSGRule_s** | NSG_RULENAME | Network security group rule that allowed or denied this flow. |
+| **NSGRuleType_s** | - User Defined <br> - Default | The type of network security group rule used by the flow. |
+| **MACAddress_s** | MAC Address | MAC address of the NIC at which the flow was captured. |
+| **Subscription_s** | Subscription of the Azure virtual network / network interface / virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
+| **Subscription1_s** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **Subscription2_s** | Subscription ID | Subscription ID of virtual network/ network interface / virtual machine that the destination IP in the flow belongs to. |
+| **Region_s** | Azure region of virtual network / network interface / virtual machine that the IP in the flow belongs to. | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
+| **Region1_s** | Azure Region | Azure region of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **Region2_s** | Azure Region | Azure region of virtual network that the destination IP in the flow belongs to. |
+| **NIC_s** | \<resourcegroup_Name>\/\<NetworkInterfaceName> | NIC associated with the VM sending or receiving the traffic. |
+| **NIC1_s** | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow. |
+| **NIC2_s** | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. |
+| **VM_s** | <resourcegroup_Name>\/\<NetworkInterfaceName> | Virtual Machine associated with the Network interface NIC_s. |
+| **VM1_s** | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the source IP in the flow. |
+| **VM2_s** | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the destination IP in the flow. |
+| **Subnet_s** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the NIC_s. |
+| **Subnet1_s** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the Source IP in the flow. |
+| **Subnet2_s** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the Destination IP in the flow. |
+| **ApplicationGateway1_s** | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Source IP in the flow. |
+| **ApplicationGateway2_s** | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Destination IP in the flow. |
+| **LoadBalancer1_s** | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Source IP in the flow. |
+| **LoadBalancer2_s** | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Destination IP in the flow. |
+| **LocalNetworkGateway1_s** | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Source IP in the flow. |
+| **LocalNetworkGateway2_s** | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Destination IP in the flow. |
+| **ConnectionType_s** | Possible values are VNetPeering, VpnGateway, and ExpressRoute | Connection Type. |
+| **ConnectionName_s** | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | Connection Name. For flow type P2S, it is formatted as \<gateway name\>_\<VPN Client IP\>. |
+| **ConnectingVNets_s** | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks are populated here. |
+| **Country_s** | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field share the same country code. |
+| **AzureRegion_s** | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field share the Azure region. |
+| **AllowedInFlows_d** | | Count of inbound flows that were allowed, which represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
+| **DeniedInFlows_d** | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
+| **AllowedOutFlows_d** | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
+| **DeniedOutFlows_d** | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
+| **FlowCount_d** | - | Deprecated. Total flows that matched the same four-tuple. For flow types ExternalPublic and AzurePublic, the count includes flows from various public IP addresses as well. |
+| **InboundPackets_d** | Represents packets sent from the destination to the source of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **OutboundPackets_d** | Represents packets sent from the source to the destination of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **InboundBytes_d** | Represents bytes sent from the destination to the source of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **OutboundBytes_d** | Represents bytes sent from the source to the destination of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **CompletedFlows_d**| | Populated with nonzero value only for Version 2 of NSG flow log schema. |
+| **PublicIPs_s** | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **SrcPublicIPs_s** | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **DestPublicIPs_s** | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+ > [!IMPORTANT]
-> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing need to parse the `FlowDirection` field so that queries are simpler. These are changes in the updated schema:
+> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing the need to parse the `FlowDirection` field and making queries simpler. The updated schema includes the following changes:
>
> - `FASchemaVersion_s` updated from 1 to 2.
> - Deprecated fields: `VMIP_s`, `Subscription_s`, `Region_s`, `NSGRules_s`, `Subnet_s`, `VM_s`, `NIC_s`, `PublicIPs_s`, `FlowCount_d`
> - New fields: `SrcPublicIPs_s`, `DestPublicIPs_s`, `NSGRule_s`
->
-> Deprecated fields are available until November 2022.
->
-
-Traffic analytics is built on top of Log Analytics, so you can run custom queries on data decorated by traffic analytics and set alerts on the same.
-The following table lists the fields in the schema and what they signify.
+# [**VNet flow logs (preview)**](#tab/vnet)
| Field | Format | Comments |
| -- | | -- |
-| TableName | AzureNetworkAnalytics_CL | Table for traffic analytics data. |
-| SubType_s | FlowLog | Subtype for the flow logs. Use only "FlowLog", other values of SubType_s are for internal workings of the product. |
-| FASchemaVersion_s | 2 | Schema version. Doesn't reflect NSG flow log version. |
-| TimeProcessed_t | Date and Time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
-| FlowIntervalStartTime_t | Date and Time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
-| FlowIntervalEndTime_t | Date and Time in UTC | Ending time of the flow log processing interval. |
-| FlowStartTime_t | Date and Time in UTC | First occurrence of the flow (which will get aggregated) in the flow log processing interval between "FlowIntervalStartTime_t" and "FlowIntervalEndTime_t". This flow gets aggregated based on aggregation logic. |
-| FlowEndTime_t | Date and Time in UTC | Last occurrence of the flow (which will get aggregated) in the flow log processing interval between "FlowIntervalStartTime_t" and "FlowIntervalEndTime_t". In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as "B" in the raw flow record). |
-| FlowType_s | * IntraVNet <br> * InterVNet <br> * S2S <br> * P2S <br> * AzurePublic <br> * ExternalPublic <br> * MaliciousFlow <br> * Unknown Private <br> * Unknown | Definition in notes below the table. |
-| SrcIP_s | Source IP address | Will be blank in case of AzurePublic and ExternalPublic flows. |
-| DestIP_s | Destination IP address | Will be blank in case of AzurePublic and ExternalPublic flows. |
-| VMIP_s | IP of the VM | Used for AzurePublic and ExternalPublic flows. |
-| DestPort_d | Destination Port | Port at which traffic is incoming. |
-| L4Protocol_s | * T <br> * U | Transport Protocol. T = TCP <br> U = UDP. |
-| L7Protocol_s | Protocol Name | Derived from destination port. |
-| FlowDirection_s | * I = Inbound<br> * O = Outbound | Direction of the flow in/out of NSG as per flow log. |
-| FlowStatus_s | * A = Allowed by NSG Rule <br> * D = Denied by NSG Rule | Status of flow allowed/blocked by NSG as per flow log. |
-| NSGList_s | \<SUBSCRIPTIONID>\/<RESOURCEGROUP_NAME>\/<NSG_NAME> | Network Security Group (NSG) associated with the flow. |
-| NSGRules_s | \<Index value 0)>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | NSG rule that allowed or denied this flow. |
-| NSGRule_s | NSG_RULENAME | NSG rule that allowed or denied this flow. |
-| NSGRuleType_s | * User Defined * Default | The type of NSG Rule used by the flow. |
-| MACAddress_s | MAC Address | MAC address of the NIC at which the flow was captured. |
-| Subscription_s | Subscription of the Azure virtual network/ network interface/ virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
-| Subscription1_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. |
-| Subscription2_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the destination IP in the flow belongs to. |
-| Region_s | Azure region of virtual network/ network interface/ virtual machine to which the IP in the flow belongs to | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
-| Region1_s | Azure Region | Azure region of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. |
-| Region2_s | Azure Region | Azure region of virtual network to which the destination IP in the flow belongs to. |
-| NIC_s | \<resourcegroup_Name>\/\<NetworkInterfaceName> | NIC associated with the VM sending or receiving the traffic. |
-| NIC1_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow. |
-| NIC2_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. |
-| VM_s | <resourcegroup_Name>\/\<NetworkInterfaceName> | Virtual Machine associated with the Network interface NIC_s. |
-| VM1_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the source IP in the flow. |
-| VM2_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the destination IP in the flow. |
-| Subnet_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the NIC_s. |
-| Subnet1_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Source IP in the flow. |
-| Subnet2_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Destination IP in the flow. |
-| ApplicationGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Source IP in the flow. |
-| ApplicationGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Destination IP in the flow. |
-| LoadBalancer1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Source IP in the flow. |
-| LoadBalancer2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Destination IP in the flow. |
-| LocalNetworkGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Source IP in the flow. |
-| LocalNetworkGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Destination IP in the flow. |
-| ConnectionType_s | Possible values are VNetPeering, VpnGateway, and ExpressRoute | Connection Type. |
-| ConnectionName_s | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | Connection Name. For flow type P2S, it will be formatted as \<gateway name\>_\<VPN Client IP\>. |
-| ConnectingVNets_s | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks will be populated here. |
-| Country_s | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field will share the same country code. |
-| AzureRegion_s | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field will share the Azure region. |
-| AllowedInFlows_d | | Count of inbound flows that were allowed. This represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
-| DeniedInFlows_d | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
-| AllowedOutFlows_d | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
-| DeniedOutFlows_d | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
-| FlowCount_d | Deprecated. Total flows that matched the same four-tuple. In case of flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. |
-| InboundPackets_d | Represents packets sent from the destination to the source of the flow | This field is only populated for Version 2 of NSG flow log schema. |
-| OutboundPackets_d | Represents packets sent from the source to the destination of the flow | This field is only populated for Version 2 of NSG flow log schema. |
-| InboundBytes_d | Represents bytes sent from the destination to the source of the flow | This field is only populated Version 2 of NSG flow log schema. |
-| OutboundBytes_d | Represents bytes sent from the source to the destination of the flow | This field is only populated Version 2 of NSG flow log schema. |
-| CompletedFlows_d | | This field is only populated with nonzero value for Version 2 of NSG flow log schema. |
-| PublicIPs_s | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
-| SrcPublicIPs_s | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
-| DestPublicIPs_s | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **TableName** | NTANetAnalytics | Table for traffic analytics data. |
+| **SubType** | FlowLog | Subtype for the flow logs. Use only **FlowLog**; other values of **SubType** are for internal use. |
+| **FASchemaVersion** | 3 | Schema version. Doesn't reflect NSG flow log version. |
+| **TimeProcessed** | Date and time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
+| **FlowIntervalStartTime** | Date and time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
+| **FlowIntervalEndTime**| Date and time in UTC | Ending time of the flow log processing interval. |
+| **FlowStartTime** | Date and time in UTC | First occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. This flow gets aggregated based on aggregation logic. |
+| **FlowEndTime** | Date and time in UTC | Last occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as **B** in the raw flow record). |
+| **FlowType** | - IntraVNet <br> - InterVNet <br> - S2S <br> - P2S <br> - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow <br> - Unknown Private <br> - Unknown | See [Notes](#notes) for definitions. |
+| **SrcIP** | Source IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **DestIP** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **TargetResourceId** | ResourceGroupName/ResourceName | The ID of the resource at which flow logging and traffic analytics is enabled. |
+| **TargetResourceType** | VirtualNetwork/Subnet/NetworkInterface | Type of resource at which flow logging and traffic analytics is enabled (virtual network, subnet, or network interface). |
+| **FlowLogResourceId** | ResourceGroupName/NetworkWatcherName/FlowLogName | The resource ID of the flow log. |
+| **DestPort** | Destination Port | Port at which traffic is incoming. |
+| **L4Protocol** | - T <br> - U | Transport Protocol. **T** = TCP <br> **U** = UDP |
+| **L7Protocol** | Protocol Name | Derived from destination port. |
+| **FlowDirection** | - **I** = Inbound <br> - **O** = Outbound | Direction of the flow: in or out of the network security group per flow log. |
+| **FlowStatus** | - **A** = Allowed <br> - **D** = Denied | Status of flow: allowed or denied by network security group per flow log. |
+| **NSGList** |\<SUBSCRIPTIONID>/<RESOURCEGROUP_NAME>/<NSG_NAME> | Network security group associated with the flow. |
+| **NSGRule** | NSG_RULENAME | Network security group rule that allowed or denied the flow. |
+| **NSGRuleType** | - User Defined <br> - Default | The type of network security group rule used by the flow. |
+| **MACAddress** | MAC Address | MAC address of the NIC at which the flow was captured. |
+| **SrcSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **DestSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the destination IP in the flow belongs to. |
+| **SrcRegion** | Azure Region | Azure region of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **DestRegion** | Azure Region | Azure region of virtual network that the destination IP in the flow belongs to. |
+| **SrcNIC** | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow. |
+| **DestNIC** | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. |
+| **SrcVM** | <resourcegroup_Name>/\<VirtualMachineName> | Virtual machine associated with the source IP in the flow. |
+| **DestVM** | <resourcegroup_Name>/\<VirtualMachineName> | Virtual machine associated with the destination IP in the flow. |
+| **SrcSubnet** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the source IP in the flow. |
+| **DestSubnet** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the destination IP in the flow. |
+| **SrcApplicationGateway** | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the source IP in the flow. |
+| **DestApplicationGateway** | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the destination IP in the flow. |
+| **SrcLoadBalancer** | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the source IP in the flow. |
+| **DestLoadBalancer** | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the destination IP in the flow. |
+| **SrcLocalNetworkGateway** | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the source IP in the flow. |
+| **DestLocalNetworkGateway** | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the destination IP in the flow. |
+| **ConnectionType** | Possible values are VNetPeering, VpnGateway, and ExpressRoute | The connection type. |
+| **ConnectionName** | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | The connection name. For flow type P2S, it's formatted as \<GatewayName>_\<VPNClientIP> |
+| **ConnectingVNets** | Space separated list of virtual network names. | In hub and spoke topology, hub virtual networks are populated here. |
+| **Country** | Two-letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs field share the same country code. |
+| **AzureRegion** | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs field share the Azure region. |
+| **AllowedInFlows**| - | Count of inbound flows that were allowed, which represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
+| **DeniedInFlows** | - | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
+| **AllowedOutFlows** | - | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
+| **DeniedOutFlows** | - | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
+| **FlowCount** | - | Deprecated. Total flows that matched the same four-tuple. For flow types ExternalPublic and AzurePublic, the count includes flows from various public IP addresses as well. |
+| **PacketsDestToSrc** | Represents packets sent from the destination to the source of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **PacketsSrcToDest** | Represents packets sent from the source to the destination of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **BytesDestToSrc** | Represents bytes sent from the destination to the source of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **BytesSrcToDest** | Represents bytes sent from the source to the destination of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **CompletedFlows** | - | Populated with a nonzero value only for Version 2 of the NSG flow log schema. |
+| **SrcPublicIPs** | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **DestPublicIPs** | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **FlowEncryption** | - Encrypted <br>- Unencrypted <br>- Unsupported hardware <br>- Software not ready <br>- Drop due to no encryption <br>- Discovery not supported <br>- Destination on same host <br>- Fall back to no encryption. | Encryption level of flows. |
+
+> [!NOTE]
+> *NTANetAnalytics* in VNet flow logs replaces *AzureNetworkAnalytics_CL* used in NSG flow logs.
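+
+As a hedged example of querying this table (the workspace GUID is a placeholder; the field names are taken from the table above), the following command summarizes VNet flow log records by encryption status over the last day:
+
+```azurecli-interactive
+# A minimal sketch, assuming a placeholder Log Analytics workspace GUID.
+# Counts VNet flow log records by encryption status over the last day.
+az monitor log-analytics query \
+  --workspace "00000000-0000-0000-0000-000000000000" \
+  --analytics-query "NTANetAnalytics | where SubType == 'FlowLog' and TimeGenerated > ago(1d) | summarize FlowCount = count() by FlowEncryption" \
+  --output table
+```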
++

## Public IP details schema
Traffic analytics provides WHOIS data and geographic location for all public IPs.
The following table details the public IP schema:
+# [**NSG flow logs**](#tab/nsg)
+
+| Field | Format | Comments |
+| -- | | -- |
+| **TableName** | AzureNetworkAnalyticsIPDetails_CL | Table that contains traffic analytics IP details data. |
+| **SubType_s** | FlowLog | Subtype for the flow logs. Use only **FlowLog**; other values of **SubType_s** are for internal use. |
+| **FASchemaVersion_s** | 2 | Schema version. Doesn't reflect NSG flow log version. |
+| **FlowIntervalStartTime_t** | Date and Time in UTC | Start time of the flow log processing interval (time from which flow interval is measured). |
+| **FlowIntervalEndTime_t** | Date and Time in UTC | End time of the flow log processing interval. |
+| **FlowType_s** | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | See [Notes](#notes) for definitions. |
+| **IP** | Public IP | Public IP whose information is provided in the record. |
+| **Location** | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2). |
+| **PublicIPDetails** | Information about IP | - For AzurePublic IP: Azure Service owning the IP or Microsoft virtual public IP for [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - ExternalPublic/Malicious IP: WhoIS information of the IP. |
+| **ThreatType** | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described in the next table). |
+| **ThreatDescription** | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP. |
+| **DNSDomain** | DNS domain | **For Malicious IPs only**: Domain name associated with this IP. |
+
+# [**VNet flow logs (preview)**](#tab/vnet)
| Field | Format | Comments |
| -- | | -- |
-| TableName | AzureNetworkAnalyticsIPDetails_CL | Table that contains traffic analytics IP details data. |
-| SubType_s | FlowLog | Subtype for the flow logs. **Use only "FlowLog"**, other values of SubType_s are for internal workings of the product. |
-| FASchemaVersion_s | 2 | Schema version. It doesn't reflect NSG flow log version. |
-| FlowIntervalStartTime_t | Date and Time in UTC | Start time of the flow log processing interval (time from which flow interval is measured). |
-| FlowIntervalEndTime_t | Date and Time in UTC | End time of the flow log processing interval. |
-| FlowType_s | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | Definition in notes below the table. |
-| IP | Public IP | Public IP whose information is provided in the record. |
-| Location | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2). |
-| PublicIPDetails | Information about IP | - For AzurePublic IP: Azure Service owning the IP or Microsoft virtual public IP for [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - ExternalPublic/Malicious IP: WhoIS information of the IP. |
-| ThreatType | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described in the next table). |
-| ThreatDescription | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP. |
-| DNSDomain | DNS domain | **For Malicious IPs only**: Domain name associated with this IP. |
+| **TableName**| NTAIpDetails | Table that contains traffic analytics IP details data. |
+| **SubType**| FlowLog | Subtype for the flow logs. Use only **FlowLog**; other values of **SubType** are for internal use. |
+| **FASchemaVersion** | 2 | Schema version. Doesn't reflect NSG flow log version. |
+| **FlowIntervalStartTime**| Date and time in UTC | Start time of the flow log processing interval (the time from which flow interval is measured). |
+| **FlowIntervalEndTime**| Date and time in UTC | End time of the flow log processing interval. |
+| **FlowType** | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | See [Notes](#notes) for definitions. |
+| **IP**| Public IP | Public IP whose information is provided in the record. |
+| **PublicIPDetails** | Information about IP | **For AzurePublic IP**: Azure Service owning the IP or **Microsoft Virtual Public IP** for the IP 168.63.129.16. <br> **ExternalPublic/Malicious IP**: WhoIS information of the IP. |
+| **ThreatType** | Threat posed by malicious IP | *For Malicious IPs only*. One of the threats from the list of currently allowed values. For more information, see [Notes](#notes). |
+| **DNSDomain** | DNS domain | *For Malicious IPs only*. Domain name associated with this IP. |
+| **ThreatDescription** |Description of the threat | *For Malicious IPs only*. Description of the threat posed by the malicious IP. |
+| **Location** | Location of the IP | **For Azure Public IP**: Azure region of virtual network / network interface / virtual machine to which the IP belongs or Global for IP 168.63.129.16. <br> **For External Public IP and Malicious IP**: two-letter country code (ISO 3166-1 alpha-2) where IP is located. |
+
+> [!NOTE]
+> *NTAIpDetails* in VNet flow logs replaces *AzureNetworkAnalyticsIPDetails_CL* used in NSG flow logs.
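+
+As a hedged example (placeholder workspace GUID; field names from the table above), the following command lists malicious public IPs observed in the last day:
+
+```azurecli-interactive
+# A minimal sketch, assuming a placeholder Log Analytics workspace GUID.
+# Lists malicious public IPs and their threat details from the last day.
+az monitor log-analytics query \
+  --workspace "00000000-0000-0000-0000-000000000000" \
+  --analytics-query "NTAIpDetails | where FlowType == 'MaliciousFlow' and TimeGenerated > ago(1d) | distinct IP, ThreatType, ThreatDescription" \
+  --output table
+```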
++

List of threat types:
| Phishing | Indicators relating to a phishing campaign. |
| Proxy | Indicator of a proxy service. |
| PUA | Potentially Unwanted Application. |
-| WatchList | A generic bucket into which indicators are placed when it can't be determined exactly what the threat is or will require manual interpretation. `WatchList` should typically not be used by partners submitting data into the system. |
+| WatchList | A generic bucket into which indicators are placed when it can't be determined exactly what the threat is, or when the indicator requires manual interpretation. `WatchList` should typically not be used by partners submitting data into the system. |
## Notes

-- In case of `AzurePublic` and `ExternalPublic` flows, customer owned Azure virtual machine IP is populated in `VMIP_s` field, while the Public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, you should use `VMIP_s` and `PublicIPs_s` instead of `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further, so that the number of records ingested to log analytics workspace is minimal. (This field will be deprecated soon and you should be using SrcIP_ and DestIP_s depending on whether the virtual machine was the source or the destination in the flow).
+- For `AzurePublic` and `ExternalPublic` flows, the customer-owned Azure virtual machine IP address is populated in the `VMIP_s` field, while the public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, use `VMIP_s` and `PublicIPs_s` instead of the `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further so that the number of records ingested into the Log Analytics workspace is minimal. (This field will be deprecated soon; use `SrcIP_s` and `DestIP_s` depending on whether the virtual machine was the source or the destination in the flow.)
- Some field names are appended with `_s` or `_d`, which don't signify source and destination but indicate the data types *string* and *decimal* respectively.
- Based on the IP addresses involved in the flow, we categorize the flows into the following flow types:
  - `IntraVNet`: Both IP addresses in the flow reside in the same Azure virtual network.
## Next Steps

-- To learn more about traffic analytics, see [Azure Network Watcher Traffic analytics](traffic-analytics.md).
-- See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to traffic analytics frequently asked questions.
-
-
+- To learn more about traffic analytics, see [Traffic analytics overview](traffic-analytics.md).
+- See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to frequently asked questions about traffic analytics.
network-watcher Vnet Flow Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-cli.md
+
+ Title: Manage VNet flow logs - Azure CLI
+
+description: Learn how to create, change, enable, disable, or delete Azure Network Watcher VNet flow logs using the Azure CLI.
++++ Last updated : 08/16/2023+++
+# Create, change, enable, disable, or delete VNet flow logs using the Azure CLI
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
+
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure CLI. You can also learn how to manage a VNet flow log using [PowerShell](vnet-flow-logs-powershell.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- Insights provider. For more information, see [Register Insights provider](#register-insights-provider).
+
+- A virtual network. If you need to create a virtual network, see [Create a virtual network using the Azure CLI](../virtual-network/quick-create-cli.md).
+
+- An Azure storage account. If you need to create a storage account, see [Create a storage account using the Azure CLI](../storage/common/storage-account-create.md?tabs=azure-cli).
+
+- Bash environment in [Azure Cloud Shell](https://shell.azure.com) or the Azure CLI installed locally. To learn more about using Bash in Azure Cloud Shell, see [Azure Cloud Shell Quickstart - Bash](../cloud-shell/quickstart.md).
+
+ - If you choose to install and use Azure CLI locally, this article requires the Azure CLI version 2.39.0 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure.
+
+## Register insights provider
+
+The *Microsoft.Insights* provider must be registered to successfully log traffic in a virtual network. If you aren't sure whether the *Microsoft.Insights* provider is registered, use [az provider register](/cli/azure/provider#az-provider-register) to register it.
+
+```azurecli-interactive
+# Register Microsoft.Insights provider.
+az provider register --namespace Microsoft.Insights
+```
+
+## Enable VNet flow logs
+
+Use [az network watcher flow-log create](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-create) to create a VNet flow log.
+
+```azurecli-interactive
+# Create a VNet flow log.
+az network watcher flow-log create --location eastus --resource-group myResourceGroup --name myVNetFlowLog --vnet myVNet --storage-account myStorageAccount
+```
+
+## Enable VNet flow logs and traffic analytics
+
+Use [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create) to create a traffic analytics workspace, and then use [az network watcher flow-log create](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-create) to create a VNet flow log that uses it.
+
+```azurecli-interactive
+# Create a traffic analytics workspace.
+az monitor log-analytics workspace create --name myWorkspace --resource-group myResourceGroup --location eastus
+
+# Create a VNet flow log.
+az network watcher flow-log create --location eastus --name myVNetFlowLog --resource-group myResourceGroup --vnet myVNet --storage-account myStorageAccount --workspace myWorkspace --interval 10 --traffic-analytics true
+```
+
+## List all flow logs in a region
+
+Use [az network watcher flow-log list](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-list) to list all flow log resources in a particular region in your subscription.
+
+```azurecli-interactive
+# Get all flow logs in East US region.
+az network watcher flow-log list --location eastus --out table
+```
+
+## View VNet flow log resource
+
+Use [az network watcher flow-log show](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-show) to see details of a flow log resource.
+
+```azurecli-interactive
+# Get the flow log details.
+az network watcher flow-log show --name myVNetFlowLog --resource-group NetworkWatcherRG --location eastus
+```
+
+## Download a flow log
+
+To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+
+VNet flow log files saved to a storage account follow the logging path shown in the following example:
+
+```
+https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+```
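+
+As an alternative to Storage Explorer, you can download a blob with the Azure CLI. The following is a minimal sketch; the storage account name, flow log resource path, date, and MAC address values are placeholders to adapt:
+
+```azurecli-interactive
+# Download a single PT1H.json flow log blob to the current directory (placeholder values).
+az storage blob download --account-name myStorageAccount \
+  --container-name insights-logs-flowlogflowevent \
+  --name "flowLogResourceID=/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS/FLOWLOGS/MYVNETFLOWLOG/y=2023/m=08/d=16/h=09/m=00/macAddress=001122334455/PT1H.json" \
+  --file PT1H.json --auth-mode login
+```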
+
+## Disable traffic analytics on flow log resource
+
+To disable traffic analytics on the flow log resource and continue to generate and save VNet flow logs to a storage account, use [az network watcher flow-log update](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-update).
+
+```azurecli-interactive
+# Update the VNet flow log.
+az network watcher flow-log update --location eastus --name myVNetFlowLog --resource-group myResourceGroup --vnet myVNet --storage-account myStorageAccount --traffic-analytics false
+```
+
+## Delete a VNet flow log resource
+
+To delete a VNet flow log resource, use [az network watcher flow-log delete](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete).
+
+```azurecli-interactive
+# Delete the VNet flow log.
+az network watcher flow-log delete --name myVNetFlowLog --location eastus
+```
+
+## Next steps
+
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
+
+ Title: VNet flow logs (preview)
+
+description: Learn about VNet flow logs feature of Azure Network Watcher.
++++ Last updated : 08/16/2023++
+# VNet flow logs (preview)
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Virtual network (VNet) flow logs is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a virtual network. Flow data is sent to Azure Storage, from which you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice. The Network Watcher VNet flow logs capability overcomes some of the existing limitations of [NSG flow logs](network-watcher-nsg-flow-logging-overview.md).
+
+## Why use flow logs?
+
+It's vital to monitor, manage, and know your network so that you can protect and optimize it. You may need to know the current state of the network, who's connecting, and where users are connecting from. You may also need to know which ports are open to the internet, what network behavior is expected, what network behavior is irregular, and when sudden rises in traffic happen.
+
+Flow logs are the source of truth for all network activity in your cloud environment. Whether you're in a startup that's trying to optimize resources or a large enterprise that's trying to detect intrusion, flow logs can help. You can use them for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more.
+
+## Common use cases
+
+#### Network monitoring
+
+- Identify unknown or undesired traffic.
+- Monitor traffic levels and bandwidth consumption.
+- Filter flow logs by IP and port to understand application behavior.
+- Export flow logs to analytics and visualization tools of your choice to set up monitoring dashboards.
+
+#### Usage monitoring and optimization
+
+- Identify top talkers in your network.
+- Combine with GeoIP data to identify cross-region traffic.
+- Understand traffic growth for capacity forecasting.
+- Use data to remove overly restrictive traffic rules.
+
+#### Compliance
+
+- Use flow data to verify network isolation and compliance with enterprise access rules.
+
+#### Network forensics and security analysis
+
+- Analyze network flows from compromised IPs and network interfaces.
+- Export flow logs to any SIEM or IDS tool of your choice.
+
+## VNet flow logs compared to NSG flow logs
+
+Both VNet flow logs and [NSG flow logs](network-watcher-nsg-flow-logging-overview.md) record IP traffic, but they differ in their behavior and capabilities. VNet flow logs simplify the scope of traffic monitoring by allowing you to enable logging at the [virtual network](../virtual-network/virtual-networks-overview.md) level, ensuring that traffic through all supported workloads within a virtual network is recorded. VNet flow logs also avoid the need for multi-level flow logging, such as in [NSG flow logs](network-watcher-nsg-flow-logging-overview.md#best-practices) where network security groups are configured at both the subnet and the NIC.
+
+In addition to the existing support for identifying traffic allowed or denied by [network security group rules](../virtual-network/network-security-groups-overview.md), VNet flow logs support identifying traffic allowed or denied by [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). VNet flow logs also support evaluating the encryption status of your network traffic in scenarios where [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md) is enabled.
+
+## How logging works
+
+Key properties of VNet flow logs include:
+
+- Flow logs operate at Layer 4 of the Open Systems Interconnection (OSI) model and record all IP flows going through a virtual network.
+- Logs are collected at 1-minute intervals through the Azure platform and don't affect your Azure resources or network traffic.
+- Logs are written in the JSON (JavaScript Object Notation) format.
+- Each log record contains the network interface (NIC) the flow applies to, 5-tuple information, traffic direction, flow state, encryption state and throughput information.
+- All traffic flows in your network are evaluated through the rules in the applicable [network security group rules](../virtual-network/network-security-groups-overview.md) or [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). For more information, see [Log format](#log-format).
+
+## Log format
+
+VNet flow logs have the following properties:
+
+- `time`: Time in UTC when the event was logged.
+- `flowLogVersion`: Version of flow log schema.
+- `flowLogGUID`: The resource GUID of the FlowLog resource.
+- `macAddress`: MAC address of the network interface where the event was captured.
+- `category`: Category of the event. The category is always `FlowLogFlowEvent`.
+- `flowLogResourceID`: Resource ID of the FlowLog resource.
+- `targetResourceID`: Resource ID of target resource associated to the FlowLog resource.
+- `operationName`: Always `FlowLogFlowEvent`.
+- `flowRecords`: Collection of flow records.
+ - `flows`: Collection of flows. This property has multiple entries for different ACLs.
+ - `aclID`: Identifier of the resource evaluating traffic, either a network security group or Virtual Network Manager. For cases like traffic denied by encryption, this value is `unspecified`.
+ - `flowGroups`: Collection of flow records at a rule level.
+ - `rule`: Name of the rule that allowed or denied the traffic. For traffic denied due to encryption, this value is `unspecified`.
+ - `flowTuples`: string that contains multiple properties for the flow tuple in a comma-separated format:
+ - `Time Stamp`: Time stamp of when the flow occurred in UNIX epoch format.
+ - `Source IP`: Source IP address.
+ - `Destination IP`: Destination IP address.
+ - `Source port`: Source port.
+ - `Destination port`: Destination Port.
+ - `Protocol`: Layer 4 protocol of the flow expressed in IANA assigned values.
+ - `Flow direction`: Direction of the traffic flow. Valid values are `I` for inbound and `O` for outbound.
+ - `Flow state`: State of the flow. Possible states are:
+ - `B`: Begin, when a flow is created. No statistics are provided.
+ - `C`: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals.
+ - `E`: End, when a flow is terminated. Statistics are provided.
+ - `D`: Deny, when a flow is denied.
+ - `Flow encryption`: Encryption state of the flow. Possible values are:
+ - `X`: Encrypted.
+ - `NX`: Unencrypted.
+ - `NX_HW_NOT_SUPPORTED`: Unsupported hardware.
+ - `NX_SW_NOT_READY`: Software not ready.
+ - `NX_NOT_ACCEPTED`: Drop due to no encryption.
+ - `NX_NOT_SUPPORTED`: Discovery not supported.
+ - `NX_LOCAL_DST`: Destination on same host.
+ - `NX_FALLBACK`: Fall back to no encryption.
+ - `Packets sent`: Total number of packets sent from source to destination since the last update.
+ - `Bytes sent`: Total number of packet bytes sent from source to destination since the last update. Packet bytes include the packet header and payload.
+ - `Packets received`: Total number of packets sent from destination to source since the last update.
+ - `Bytes received`: Total number of packet bytes sent from destination to source since the last update. Packet bytes include packet header and payload.
+
+Traffic in your virtual networks is unencrypted (`NX`) by default. For encrypted traffic, enable [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md).
+
+`Flow encryption` has the following possible encryption statuses:
+
+| Encryption Status | Description |
+| -- | -- |
+| `X` | **Connection is encrypted**. Encryption is configured and the platform has encrypted the connection. |
+| `NX` | **Connection is Unencrypted**. This event is logged in two scenarios: <br> - When encryption isn't configured. <br> - When an encrypted virtual machine communicates with an endpoint that lacks encryption (such as an internet endpoint). |
+| `NX_HW_NOT_SUPPORTED` | **Unsupported hardware**. Encryption is configured, but the virtual machine is running on a host that doesn't support encryption. This issue usually occurs when the FPGA isn't attached to the host or is faulty. Report this issue to Microsoft for investigation. |
+| `NX_SW_NOT_READY` | **Software not ready**. Encryption is configured, but the software component (GFT) in the host networking stack isn't ready to process encrypted connections. This issue can happen when the virtual machine boots for the first time, restarts, or is redeployed. It can also happen when there's an update to the networking components on the host where the virtual machine is running. In all these scenarios, the packet gets dropped. The issue should be temporary; encryption starts working once the virtual machine is fully up and running or the software update on the host is complete. If the issue persists for longer durations, report it to Microsoft for investigation. |
+| `NX_NOT_ACCEPTED` | **Drop due to no encryption**. Encryption is configured on both source and destination endpoints with a drop-on-unencrypted policy. If traffic can't be encrypted, the packet is dropped. |
+| `NX_NOT_SUPPORTED` | **Discovery not supported**. Encryption is configured, but the encryption session wasn't established because discovery isn't supported in the host networking stack. In this case, the packet is dropped. If you encounter this issue, report it to Microsoft for investigation. |
+| `NX_LOCAL_DST` | **Destination on same host**. Encryption is configured, but the source and destination virtual machines are running on the same Azure host. In this case, the connection isn't encrypted by design. |
+| `NX_FALLBACK` | **Fall back to no encryption**. Encryption is configured with the allow-unencrypted policy for both source and destination endpoints. Encryption was attempted but ran into an issue. In this case, the connection is allowed but isn't encrypted. For example, the virtual machine initially landed on a node that supports encryption, but this support was later disabled. |
++
+## Sample log record
+
+The following example of VNet flow logs shows multiple records that follow the property list described earlier.
+
+```json
+{
+ "records": [
+ {
+ "time": "2022-09-14T09:00:52.5625085Z",
+ "flowLogVersion": 4,
+ "flowLogGUID": "abcdef01-2345-6789-0abc-def012345678",
+ "macAddress": "00224871C205",
+ "category": "FlowLogFlowEvent",
+ "flowLogResourceID": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS2EUAP/FLOWLOGS/VNETFLOWLOG",
+ "targetResourceID": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet",
+ "operationName": "FlowLogFlowEvent",
+ "flowRecords": {
+ "flows": [
+ {
+ "aclID": "00000000-1234-abcd-ef00-c1c2c3c4c5c6",
+ "flowGroups": [
+ {
+ "rule": "DefaultRule_AllowInternetOutBound",
+ "flowTuples": [
+ "1663146003599,10.0.0.6,52.239.184.180,23956,443,6,O,B,NX,0,0,0,0",
+ "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580",
+ "1663146003637,10.0.0.6,40.74.146.17,22730,443,6,O,B,NX,0,0,0,0",
+ "1663146003640,10.0.0.6,40.74.146.17,22730,443,6,O,E,NX,3,705,4,4569",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,B,NX,0,0,0,0",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,E,NX,3,705,4,4569",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,B,NX,0,0,0,0",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,E,NX,2,134,1,108",
+ "1663146017343,10.0.0.6,104.16.218.84,36776,443,6,O,B,NX,0,0,0,0",
+ "1663146022793,10.0.0.6,104.16.218.84,36776,443,6,O,E,NX,22,2217,33,32466"
+ ]
+ }
+ ]
+ },
+ {
+ "aclID": "01020304-abcd-ef00-1234-102030405060",
+ "flowGroups": [
+ {
+ "rule": "BlockHighRiskTCPPortsFromInternet",
+ "flowTuples": [
+ "1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0",
+ "1663146005503,192.241.200.164,10.0.0.6,35276,119,6,I,D,NX,0,0,0,0"
+ ]
+ },
+ {
+ "rule": "Internet",
+ "flowTuples": [
+ "1663145989563,20.106.221.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0",
+ "1663145989679,20.55.117.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0",
+ "1663145989709,20.55.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0",
+ "1663145990049,13.65.224.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0",
+ "1663145990145,20.55.117.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0",
+ "1663145990175,20.55.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0",
+ "1663146015545,20.106.221.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
+## Log tuple and bandwidth calculation
++
+Here's an example bandwidth calculation for flow tuples from a TCP conversation between **185.170.185.105:35370** and **10.2.0.4:23**:
+
+`1493763938,185.170.185.105,10.2.0.4,35370,23,6,I,B,NX,,,,`
+`1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880`
+`1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072`
+
+For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1021+52+8005+47 = 9125. The total number of bytes transferred is 588096+29952+4610880+27072 = 5256000.
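+
+A minimal sketch of the same calculation in Bash (the tuple strings are copied from the example above; fields 10 through 13 of each tuple are packets sent, bytes sent, packets received, and bytes received):
+
+```
+# Sum packets and bytes across the C and E tuple records of the conversation.
+printf '%s\n' \
+  "1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880" \
+  "1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072" |
+awk -F, '{ packets += $10 + $12; bytes += $11 + $13 } END { print packets " packets, " bytes " bytes" }'
+# Output: 9125 packets, 5256000 bytes
+```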
+
+## Considerations for VNet flow logs
+
+### Storage account
+
+- **Location**: The storage account used must be in the same region as the virtual network.
+- **Performance tier**: Currently, only standard-tier storage accounts are supported.
+- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, VNet flow logs stop working. To fix this problem, you must disable and then re-enable VNet flow logs.
+
+### Cost
+
+VNet flow logging is billed on the volume of logs produced. High traffic volume can result in a large volume of flow logs and the associated costs.
+
+Pricing of VNet flow logs doesn't include the underlying costs of storage. Using the retention policy feature with VNet flow logs means incurring separate storage costs for extended periods of time.
+
+If you want to retain data forever and don't want to apply any retention policy, set retention days to 0. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
+
+## Pricing
+
+VNet flow logs aren't currently billed. In the future, VNet flow logs will be charged per gigabyte of *Network Logs Collected* and will include a free tier of 5 GB/month per subscription. If traffic analytics is enabled with VNet flow logs, existing traffic analytics pricing applies. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+
+## Availability
+
+VNet flow logs are available in the following regions during the preview:
+
+- East US 2 EUAP
+- Central US EUAP
+- West Central US
+- East US
+- East US 2
+- West US
+- West US 2
+
+To sign up to obtain access to the public preview, see [VNet flow logs - public preview sign up](https://aka.ms/VNetflowlogspreviewsignup).
+
+## Next steps
+
+- To learn how to create, change, enable, disable, or delete VNet flow logs, see [PowerShell](vnet-flow-logs-powershell.md) or [Azure CLI](vnet-flow-logs-cli.md) VNet flow logs articles.
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md) and [Traffic analytics schema](traffic-analytics-schema.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
+++
network-watcher Vnet Flow Logs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-powershell.md
+
+ Title: Manage VNet flow logs - PowerShell
+
+description: Learn how to create, change, enable, disable, or delete Azure Network Watcher VNet flow logs using Azure PowerShell.
++++ Last updated : 08/16/2023+++
+# Create, change, enable, disable, or delete VNet flow logs using Azure PowerShell
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
+
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using Azure PowerShell. To manage a VNet flow log using the Azure CLI instead, see [Manage VNet flow logs using the Azure CLI](vnet-flow-logs-cli.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- The *Microsoft.Insights* resource provider registered in your subscription. For more information, see [Register Insights provider](#register-insights-provider).
+
+- A virtual network. If you need to create a virtual network, see [Create a virtual network using PowerShell](../virtual-network/quick-create-powershell.md).
+
+- An Azure storage account. If you need to create a storage account, see [Create a storage account using PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell).
+
+- PowerShell environment in [Azure Cloud Shell](https://shell.azure.com) or Azure PowerShell installed locally. To learn more about using PowerShell in Azure Cloud Shell, see [Azure Cloud Shell Quickstart - PowerShell](../cloud-shell/quickstart-powershell.md).
+
+ - If you choose to install and use PowerShell locally, this article requires Azure PowerShell version 7.4.0 or later. Run `Get-InstalledModule -Name Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). Run `Connect-AzAccount` to sign in to Azure.
+
+## Register insights provider
+
+The *Microsoft.Insights* provider must be registered to successfully log traffic in a virtual network. If you aren't sure whether the *Microsoft.Insights* provider is registered, use [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider) to register it.
+
+```azurepowershell-interactive
+# Register Microsoft.Insights provider.
+Register-AzResourceProvider -ProviderNamespace Microsoft.Insights
+```
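+
+Optionally, you can check the provider's registration state with [Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider). A minimal check:
+
+```azurepowershell-interactive
+# Check the registration state of the Microsoft.Insights provider.
+Get-AzResourceProvider -ProviderNamespace Microsoft.Insights | Format-Table ProviderNamespace, RegistrationState
+```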
+
+## Enable VNet flow logs
+
+Use [New-AzNetworkWatcherFlowLog](/powershell/module/az.network/new-aznetworkwatcherflowlog) to create a VNet flow log.
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+
+# Create a VNet flow log.
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+```
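+
+The cost section of the [VNet flow logs overview](vnet-flow-logs-overview.md) mentions the retention policy feature. If you want to apply a retention policy at creation time, the same cmdlet accepts retention parameters. A minimal sketch, assuming the resource names used above and a recent Az.Network module:
+
+```azurepowershell-interactive
+# Create a VNet flow log that retains stored logs for 30 days.
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id -EnableRetention $true -RetentionPolicyDays 30
+```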
+
+## Enable VNet flow logs and traffic analytics
+
+Use [New-AzOperationalInsightsWorkspace](/powershell/module/az.operationalinsights/new-azoperationalinsightsworkspace) to create a traffic analytics workspace, and then use [New-AzNetworkWatcherFlowLog](/powershell/module/az.network/new-aznetworkwatcherflowlog) to create a VNet flow log that uses it.
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+
+# Create a traffic analytics workspace and place its configuration into a variable.
+$workspace = New-AzOperationalInsightsWorkspace -Name myWorkspace -ResourceGroupName myResourceGroup -Location EastUS
+
+# Create a VNet flow log.
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id -EnableTrafficAnalytics -TrafficAnalyticsWorkspaceId $workspace.ResourceId -TrafficAnalyticsInterval 10
+```
+
+## List all flow logs in a region
+
+Use [Get-AzNetworkWatcherFlowLog](/powershell/module/az.network/get-aznetworkwatcherflowlog) to list all flow log resources in a particular region in your subscription.
+
+```azurepowershell-interactive
+# Get all flow logs in the East US region.
+Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG | Format-Table Name
+```
+
+## View VNet flow log resource
+
+Use [Get-AzNetworkWatcherFlowLog](/powershell/module/az.network/get-aznetworkwatcherflowlog) to see details of a flow log resource.
+
+```azurepowershell-interactive
+# Get the flow log details.
+Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -Name myVNetFlowLog
+```
+
+## Download a flow log
+
+To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+
+VNet flow log files saved to a storage account follow the logging path shown in the following example:
+
+```
+https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+```
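+
+If you prefer to script the download instead of using Storage Explorer, you can list and fetch the log blobs with the Az.Storage cmdlets. A minimal sketch, assuming the storage account used in the earlier steps (blobs download into a folder structure that mirrors the path above):
+
+```azurepowershell-interactive
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+
+# List the flow log blobs and download them to the current directory.
+$container = "insights-logs-flowlogflowevent"
+Get-AzStorageBlob -Container $container -Context $storageAccount.Context | ForEach-Object {
+    Get-AzStorageBlobContent -Blob $_.Name -Container $container -Context $storageAccount.Context -Destination "." -Force
+}
+```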
+
+## Disable traffic analytics on flow log resource
+
+To disable traffic analytics on the flow log resource while continuing to generate and save VNet flow logs to the storage account, use [Set-AzNetworkWatcherFlowLog](/powershell/module/az.network/set-aznetworkwatcherflowlog). Omitting the traffic analytics parameters from the command disables traffic analytics but leaves flow logging enabled.
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name mynwstorageaccount -ResourceGroupName Storage
+
+# Update the VNet flow log.
+Set-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+```
+
+## Disable VNet flow logging
+
+To disable a VNet flow log without deleting it so you can re-enable it later, use [Set-AzNetworkWatcherFlowLog](/powershell/module/az.network/set-aznetworkwatcherflowlog).
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name mynwstorageaccount -ResourceGroupName Storage
+
+# Disable the VNet flow log.
+Set-AzNetworkWatcherFlowLog -Enabled $false -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+```
+
+## Delete a VNet flow log resource
+
+To delete a VNet flow log resource, use [Remove-AzNetworkWatcherFlowLog](/powershell/module/az.network/remove-aznetworkwatcherflowlog).
+
+```azurepowershell-interactive
+# Delete the VNet flow log.
+Remove-AzNetworkWatcherFlowLog -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG
+```
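+
+Optionally, you can confirm the deletion by listing the flow logs that remain in the region, as in the earlier listing step:
+
+```azurepowershell-interactive
+# Verify the flow log no longer appears in the list.
+Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG | Format-Table Name
+```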
+
+## Next steps
+
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
operator-nexus Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-security.md
+
+ Title: "Azure Operator Nexus: Security concepts"
+description: Security overview for Azure Operator Nexus.
++++ Last updated : 08/14/2023+++
+# Azure Operator Nexus security
+
+Azure Operator Nexus is designed and built to detect and defend against
+the latest security threats and to comply with the strict requirements of government
+and industry security standards. Two cornerstones form the foundation of its
+security architecture:
+
+* **Security by default** - Security resiliency is an inherent part of the platform with little to no configuration changes needed to use it securely.
+* **Assume breach** - The underlying assumption is that any system can be compromised, and as such the goal is to minimize the impact of a security breach if one occurs.
+
+Azure Operator Nexus achieves these goals by using Microsoft cloud-native security tools that improve your cloud security posture and help you protect your Operator workloads.
+
+## Platform-wide protection via Microsoft Defender for Cloud
+
+[Microsoft Defender for Cloud](../defender-for-cloud/defender-for-cloud-introduction.md) is a cloud-native application protection platform (CNAPP) that provides the security capabilities needed to harden your resources, manage your security posture, protect against cyberattacks, and streamline security management. These are some of the key features of Defender for Cloud that apply to the Azure Operator Nexus platform:
+
+* **Vulnerability assessment for virtual machines and container registries** - Easily enable vulnerability assessment solutions to discover, manage, and resolve vulnerabilities. View, investigate, and remediate the findings directly from within Defender for Cloud.
+* **Hybrid cloud security** - Get a unified view of security across all your on-premises and cloud workloads. Apply security policies and continuously assess the security of your hybrid cloud workloads to ensure compliance with security standards. Collect, search, and analyze security data from multiple sources, including firewalls and other partner solutions.
+* **Threat protection alerts** - Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyberattacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, Azure Storage, and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence.
+* **Compliance assessment against a variety of security standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in Azure Security Benchmark. When you enable the advanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the regulatory compliance dashboard.
+* **Container security features** - Benefit from vulnerability management and real-time threat protection on your containerized environments.
+
+There are enhanced security options that let you protect your on-premises host servers as well as the Kubernetes clusters that run your Operator workloads. The following sections describe these options.
+
+## BMM host operating system protection via Microsoft Defender for Endpoint
+
+Azure Operator Nexus bare-metal machines (BMMs), which host the on-premises infrastructure compute servers, are protected when you elect to enable the [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) solution. Microsoft Defender for Endpoint provides preventative antivirus (AV), endpoint detection and response (EDR), and vulnerability management capabilities.
+
+You have the option to enable Microsoft Defender for Endpoint protection once you have selected and activated a [Microsoft Defender for Servers](../defender-for-cloud/tutorial-enable-servers-plan.md) plan, because Defender for Servers plan activation is a prerequisite for Microsoft Defender for Endpoint. Once enabled, the Microsoft Defender for Endpoint configuration is managed by the platform to ensure optimal security and performance, and to reduce the risk of misconfigurations.
+
+## Workload Kubernetes cluster protection via Microsoft Defender for Containers
+
+On-premises Kubernetes clusters that run your Operator workloads are protected when you elect to enable the Microsoft Defender for Containers solution. [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) provides run-time threat protection for clusters and Linux nodes as well as cluster environment hardening against misconfigurations.
+
+You have the option to enable Defender for Containers protection within Defender for Cloud by activating the Defender for Containers plan.
+
+## Cloud security is a shared responsibility
+
+It is important to understand that in a cloud environment, security is a [shared responsibility](../security/fundamentals/shared-responsibility.md) between you and the cloud provider. The responsibilities vary depending on the type of cloud service your workloads run on, whether it is Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS), as well as where the workloads are hosted - within the cloud provider's or your own on-premises datacenters.
+
+Azure Operator Nexus workloads run on servers in your datacenters, so you are in control of changes to your on-premises environment. Microsoft periodically makes new platform releases available that contain security and other updates. You must then decide when to apply these releases to your environment as appropriate for your organization's business needs.
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
Previously updated : 08/01/2023 Last updated : 08/21/2023 #
Example output:
Name Version -- - monitor-control-service 0.2.0
-connectedmachine 0.5.1
-connectedk8s 1.3.20
+connectedmachine 0.6.0
+connectedk8s 1.4.0
k8s-extension 1.4.2
-networkcloud 1.0.0b2
+networkcloud 1.0.0
k8s-configuration 1.7.0
-managednetworkfabric 3.1.0
+managednetworkfabric 3.2.0
customlocation 0.1.3 ssh 2.0.1 ```
operator-nexus Quickstarts Kubernetes Cluster Deployment Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-cli.md
Before you run the commands, you need to set several variables to define the con
| SERVICE_CIDR | The network range for the Kubernetes services in the cluster, in CIDR notation. | | DNS_SERVICE_IP | The IP address for the Kubernetes DNS service. | - Once you've defined these variables, you can run the Azure CLI command to create the cluster. Add the ```--debug``` flag at the end to provide more detailed output for troubleshooting purposes. To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example: ```bash RESOURCE_GROUP="myResourceGroup"
-LOCATION="$(az group show --name $RESOURCE_GROUP --query location | tr -d '\"')"
-SUBSCRIPTION_ID="$(az account show -o tsv --query id)"
+SUBSCRIPTION_ID="<Azure subscription ID>"
+LOCATION="$(az group show --name $RESOURCE_GROUP --query location --subscription $SUBSCRIPTION_ID -o tsv)"
CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>" CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>" CNI_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
POD_CIDR="10.244.0.0/16"
SERVICE_CIDR="10.96.0.0/16" DNS_SERVICE_IP="10.96.0.10" ```+ > [!IMPORTANT] > It is essential that you replace the placeholders for CUSTOM_LOCATION, CSN_ARM_ID, CNI_ARM_ID, and AAD_ADMIN_GROUP_OBJECT_ID with your actual values before running these commands.
After defining these variables, you can create the Kubernetes cluster by executi
```azurecli az networkcloud kubernetescluster create \name "${CLUSTER_NAME}" \resource-group "${RESOURCE_GROUP}" \subscription "${SUBSCRIPTION_ID}" \extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \location "${LOCATION}" \kubernetes-version "${K8S_VERSION}" \aad-configuration admin-group-object-ids="[${AAD_ADMIN_GROUP_OBJECT_ID}]" \admin-username "${ADMIN_USERNAME}" \ssh-key-values "${SSH_PUBLIC_KEY}" \control-plane-node-configuration \
+ --name "${CLUSTER_NAME}" \
+ --resource-group "${RESOURCE_GROUP}" \
+ --subscription "${SUBSCRIPTION_ID}" \
+ --extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \
+ --location "${LOCATION}" \
+ --kubernetes-version "${K8S_VERSION}" \
+ --aad-configuration admin-group-object-ids="[${AAD_ADMIN_GROUP_OBJECT_ID}]" \
+ --admin-username "${ADMIN_USERNAME}" \
+ --ssh-key-values "${SSH_PUBLIC_KEY}" \
+ --control-plane-node-configuration \
count="${CONTROL_PLANE_COUNT}" \ vm-sku-name="${CONTROL_PLANE_VM_SIZE}" \initial-agent-pool-configurations "[{count:${INITIAL_AGENT_POOL_COUNT},mode:System,name:${INITIAL_AGENT_POOL_NAME},vm-sku-name:${INITIAL_AGENT_POOL_VM_SIZE}}]" \network-configuration \
+ --initial-agent-pool-configurations "[{count:${INITIAL_AGENT_POOL_COUNT},mode:System,name:${INITIAL_AGENT_POOL_NAME},vm-sku-name:${INITIAL_AGENT_POOL_VM_SIZE}}]" \
+ --network-configuration \
cloud-services-network-id="${CSN_ARM_ID}" \ cni-network-id="${CNI_ARM_ID}" \ pod-cidrs="[${POD_CIDR}]" \
After a few minutes, the command completes and returns information about the clu
[!INCLUDE [quickstart-cluster-connect](./includes/kubernetes-cluster/quickstart-cluster-connect.md)] ## Add an agent pool+ The cluster created in the previous step has a single node pool. Let's add a second agent pool using the ```az networkcloud kubernetescluster agentpool create``` command. The following example creates an agent pool named ```myNexusAKSCluster-nodepool-2```: You can also use the default values for some of the variables, as shown in the following example:
AGENT_POOL_VM_SIZE="NC_M4_v1"
AGENT_POOL_COUNT="1" AGENT_POOL_MODE="User" ```+ After defining these variables, you can add an agent pool by executing the following Azure CLI command: ```azurecli
az networkcloud kubernetescluster agentpool create \
--name "${AGENT_POOL_NAME}" \ --kubernetes-cluster-name "${CLUSTER_NAME}" \ --resource-group "${RESOURCE_GROUP}" \
+ --subscription "${SUBSCRIPTION_ID}" \
--extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \ --count "${AGENT_POOL_COUNT}" \ --mode "${AGENT_POOL_MODE}" \
operator-nexus Quickstarts Tenant Workload Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment.md
To define these variables, use the following set commands and replace the exampl
```bash # Azure parameters RESOURCE_GROUP="myResourceGroup"
-SUBSCRIPTION="$(az account show -o tsv --query id)"
+SUBSCRIPTION="<Azure subscription ID>"
CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
-LOCATION="$(az group show --name $RESOURCE_GROUP --query location | tr -d '\"')"
+LOCATION="$(az group show --name $RESOURCE_GROUP --query location --subscription $SUBSCRIPTION -o tsv)"
# VM parameters VM_NAME="myNexusVirtualMachine"
orbital Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview.md
With Azure Orbital Ground Station, you can focus on your missions by off-loading
Azure Orbital Ground Station uses MicrosoftΓÇÖs global infrastructure and low-latency global network along with an expansive partner ecosystem of ground station networks, cloud modems, and "Telemetry, Tracking, & Control" (TT&C) functions. ## Earth Observation with Azure Orbital Ground Station
For a full end-to-end solution to manage fleet operations and "Telemetry, Tracki
* Direct data ingestion into Azure * Marketplace integration with third-party data processing and image calibration services * Integrated cloud modems for X and S bands
- * Global reach through integrated third-party networks
+ * Global reach through first-party and integrated third-party networks
+ ## Links to learn more - [Overview, features, security, and FAQ](https://azure.microsoft.com/products/orbital/#layout-container-uid189e)
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
# Tutorial: Process Aqua satellite data using NASA-provided tools
+> [!NOTE]
+> NASA has deprecated support of the DRL software used to process Aqua satellite imagery. For more information, see [DRL Current Status](https://directreadout.sci.gsfc.nasa.gov/home.html). Steps 2, 3, and 4 of this tutorial are no longer relevant but are presented for informational purposes only.
+ This article is a comprehensive walk-through showing how to use the [Azure Orbital Ground Station (AOGS)](https://azure.microsoft.com/services/orbital/) to capture and process satellite imagery. It introduces the AOGS and its core concepts and shows how to schedule contacts. The article also steps through an example in which we collect and process NASA Aqua satellite data in an Azure virtual machine (VM) using NASA-provided tools. Aqua is a polar-orbiting spacecraft launched by NASA in 2002. Data from all science instruments aboard Aqua is downlinked to the Earth using direct broadcast over the X-band in near real-time. More information about Aqua can be found on the [Aqua Project Science](https://aqua.nasa.gov/) website.
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
Run the following commands at the PowerShell prompt, specifying the object ID yo
```powershell Invoke-Command -Session $minishellSession -ScriptBlock {Set-HcsKubeClusterArcInfo -CustomLocationsObjectId *object ID*}+
+Invoke-Command -Session $minishellSession -ScriptBlock {Enable-HcsAzureKubernetesService -f}
``` Once you've run this command, you should see an updated option in the local UI - **Kubernetes** becomes **Kubernetes (Preview)** as shown in the following image. :::image type="content" source="media/commission-cluster/commission-cluster-kubernetes-preview.png" alt-text="Screenshot of configuration menu, with Kubernetes (Preview) highlighted.":::
-Select the **This Kubernetes cluster is for Azure Private 5G Core or SAP Digital Manufacturing Cloud workloads** checkbox.
-- If you go to the Azure portal and navigate to your **Azure Stack Edge** resource, you should see an **Azure Kubernetes Service** option. You'll set up the Azure Kubernetes Service in [Start the cluster and set up Arc](#start-the-cluster-and-set-up-arc). :::image type="content" source="media/commission-cluster/commission-cluster-ase-resource.png" alt-text="Screenshot of Azure Stack Edge resource in the Azure portal. Azure Kubernetes Service (PREVIEW) is shown under Edge services in the left menu.":::
The Azure Private 5G Core private mobile network requires a custom location and
1. Create the Network Function Operator Kubernetes extension: ```azurecli
- Add-Content -Path $TEMP_FILE -Value @"
+ cat > $TEMP_FILE <<EOF
{ "helm.versions": "v3", "Microsoft.CustomLocation.ServiceAccount": "azurehybridnetwork-networkfunction-operator",
The Azure Private 5G Core private mobile network requires a custom location and
"helm.release-namespace": "azurehybridnetwork", "managed-by": "helm" }
- "@
+ EOF
``` ```azurecli
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
In this how-to guide, you'll carry out each of the tasks you need to complete be
## Get access to Azure Private 5G Core for your Azure subscription
-Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://aka.ms/privateMECMSP).
+Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://forms.office.com/r/4Q1yNRakXe).
## Choose the core technology type (5G or 4G)
private-5g-core Upgrade Packet Core Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md
In this step, you'll roll back your packet core using a REST API request. Follow
If any of the configuration you set while your packet core instance was running a newer version isn't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced.
-> [!NOTE]
-> You can roll back your packet core instance to version [PMN-2211-0](azure-private-5g-core-release-notes-2211.md) or later.
- 1. Ensure you have a backup of your deployment information. If you need to back up again, follow [Back up deployment information](#back-up-deployment-information). 1. Perform a [rollback POST request](/rest/api/mobilenetwork/packet-core-control-planes/rollback?tabs=HTTP).
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
If you encountered issues after the upgrade, you can roll back the packet core i
If any of the configuration you set while your packet core instance was running a newer version isn't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced.
-> [!NOTE]
-> You can roll back your packet core instance to version [PMN-2211-0](azure-private-5g-core-release-notes-2211.md) or later.
- 1. Ensure you have a backup of your deployment information. If you need to back up again, follow [Back up deployment information](#back-up-deployment-information). 1. Navigate to the **Packet Core Control Plane** resource that you want to roll back as described in [View the current packet core version](#view-the-current-packet-core-version). 1. Select **Rollback version**.
If any of the configuration you set while your packet core instance was running
:::image type="content" source="media/upgrade-packet-core-azure-portal/confirm-rollback.png" alt-text="Screenshot of the Azure portal showing the Confirm rollback field in the Rollback packet core screen."::: 1. Select **Roll back packet core**.
-1. Azure will now redeploy the packet core instance at the new software version. You can check the latest status of the rollback by looking at the **Packet core installation state** field. The **Packet Core Control Plane** resource's overview page will refresh every 20 seconds, and you can select **Refresh** to trigger a manual update. The **Packet core installation state** field will show as **RollingBack** during the rollback and update to **Installed** when the process completes.
+1. Azure will now redeploy the packet core instance at the previous software version. You can check the latest status of the rollback by looking at the **Packet core installation state** field. The **Packet Core Control Plane** resource's overview page will refresh every 20 seconds, and you can select **Refresh** to trigger a manual update. The **Packet core installation state** field will show as **RollingBack** during the rollback and update to **Installed** when the process completes.
1. Follow the steps in [Restore backed up deployment information](#restore-backed-up-deployment-information) to reconfigure your deployment. 1. Follow the steps in [Verify upgrade](#verify-upgrade) to check if the rollback was successful.
private-link Inspect Traffic With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/inspect-traffic-with-azure-firewall.md
Title: 'Use Azure Firewall to inspect traffic destined to a private endpoint'
+ Title: 'Azure Firewall scenarios to inspect traffic destined to a private endpoint'
-description: Learn how you can inspect traffic destined to a private endpoint using Azure Firewall.
+description: Learn about different scenarios to inspect traffic destined to a private endpoint using Azure Firewall.
- Previously updated : 04/27/2023+ Last updated : 08/14/2023
-# Use Azure Firewall to inspect traffic destined to a private endpoint
+# Azure Firewall scenarios to inspect traffic destined to a private endpoint
> [!NOTE] > If you want to secure traffic to private endpoints in Azure Virtual WAN using secured virtual hub, see [Secure traffic destined to private endpoints in Azure Virtual WAN](../firewall-manager/private-link-inspection-secure-virtual-hub.md).
If your security requirements require client traffic to services exposed via pri
The same considerations as in scenario 2 above apply. In this scenario, there aren't virtual network peering charges. For more information about how to configure your DNS servers to allow on-premises workloads to access private endpoints, see [on-premises workloads using a DNS forwarder](./private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder).
-## Prerequisites
-
-* An Azure subscription.
-
-* A Log Analytics workspace.
-
-See, [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md) to create a workspace if you don't have one in your subscription.
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create a VM
-
-In this section, you create a virtual network and subnet to host the VM used to access your private link resource. An Azure SQL database is used as the example service.
-
-## Virtual networks and parameters
-
-Create three virtual networks and their corresponding subnets to:
-
-* Contain the Azure Firewall used to restrict communication between the VM and the private endpoint.
-
-* Host the VM that is used to access your private link resource.
-
-* Host the private endpoint.
-
-Replace the following parameters in the steps with the following information:
-
-### Azure Firewall network
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroup |
-| **\<virtual-network-name>** | myAzFwVNet |
-| **\<region-name>** | South Central US |
-| **\<IPv4-address-space>** | 10.0.0.0/16 |
-| **\<subnet-name>** | AzureFirewallSubnet |
-| **\<subnet-address-range>** | 10.0.0.0/24 |
-
-### Virtual machine network
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroup |
-| **\<virtual-network-name>** | myVMVNet |
-| **\<region-name>** | South Central US |
-| **\<IPv4-address-space>** | 10.1.0.0/16 |
-| **\<subnet-name>** | VMSubnet |
-| **\<subnet-address-range>** | 10.1.0.0/24 |
-
-### Private endpoint network
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroup |
-| **\<virtual-network-name>** | myPEVNet |
-| **\<region-name>** | South Central US |
-| **\<IPv4-address-space>** | 10.2.0.0/16 |
-| **\<subnet-name>** | PrivateEndpointSubnet |
-| **\<subnet-address-range>** | 10.2.0.0/24 |
--
-10. Repeat steps 1 to 9 to create the virtual networks for hosting the virtual machine and private endpoint resources.
-
-### Create virtual machine
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual machine**.
-
-2. In **Create a virtual machine - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this resource group in the previous section. |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM**. |
- | Region | Select **(US) South Central US**. |
- | Availability options | Leave the default **No infrastructure redundancy required**. |
- | Image | Select **Ubuntu Server 18.04 LTS - Gen1**. |
- | Size | Select **Standard_B2s**. |
- | **Administrator account** | |
- | Authentication type | Select **Password**. |
- | Username | Enter a username of your choosing. |
- | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/linux/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
- | Confirm Password | Reenter password. |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None**. |
--
-3. Select **Next: Disks**.
-
-4. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**.
-
-5. In **Create a virtual machine - Networking**, select this information:
-
- | Setting | Value |
- | - | -- |
- | Virtual network | Select **myVMVNet**. |
- | Subnet | Select **VMSubnet (10.1.0.0/24)**.|
- | Public IP | Leave the default **(new) myVm-ip**. |
- | Public inbound ports | Select **Allow selected ports**. |
- | Select inbound ports | Select **SSH**.|
- ||
-
-6. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-7. When you see the **Validation passed** message, select **Create**.
--
-## Deploy the Firewall
-
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-
-2. Type **firewall** in the search box and press **Enter**.
-
-3. Select **Firewall** and then select **Create**.
-
-4. On the **Create a Firewall** page, use the following table to configure the firewall:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **myAzureFirewall**. |
- | Region | Select **South Central US**. |
- | Availability zone | Leave the default **None**. |
- | Choose a virtual network | Select **Use Existing**. |
- | Virtual network | Select **myAzFwVNet**. |
- | Public IP address | Select **Add new** and in Name enter **myFirewall-ip**. |
- | Forced tunneling | Leave the default **Disabled**. |
- |||
-5. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-6. When you see the **Validation passed** message, select **Create**.
-
-## Enable firewall logs
-
-In this section, you enable the logs on the firewall.
-
-1. In the Azure portal, select **All resources** in the left-hand menu.
-
-2. Select the firewall **myAzureFirewall** in the list of resources.
-
-3. Under **Monitoring** in the firewall settings, select **Diagnostic settings**
-
-4. Select **+ Add diagnostic setting** in the Diagnostic settings.
-
-5. In **Diagnostics setting**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Diagnostic setting name | Enter **myDiagSetting**. |
- | Category details | |
- | log | Select **AzureFirewallApplicationRule** and **AzureFirewallNetworkRule**. |
- | Destination details | Select **Send to Log Analytics**. |
- | Subscription | Select your subscription. |
- | Log Analytics workspace | Select your Log Analytics workspace. |
-
-6. Select **Save**.
-
-## Create Azure SQL database
-
-In this section, you create a private SQL Database.
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **SQL Database**.
-
-2. In **Create SQL Database - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this resource group in the previous section.|
- | **Database details** | |
- | Database name | Enter **mydatabase**. |
- | Server | Select **Create new** and enter the following information. |
- | Server name | Enter **mydbserver**. If this name is taken, enter a unique name. |
- | Server admin sign in | Enter a name of your choosing. |
- | Password | Enter a password of your choosing. |
- | Confirm Password | Reenter password |
- | Location | Select **(US) South Central US**. |
- | Want to use SQL elastic pool | Leave the default **No**. |
- | Compute + storage | Leave the default **General Purpose Gen5, 2 vCores, 32 GB Storage**. |
- |||
-
-3. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-4. When you see the **Validation passed** message, select **Create**.
-
-## Create private endpoint
-
-In this section, you create a private endpoint for the Azure SQL database in the previous section.
-
-1. In the Azure portal, select **All resources** in the left-hand menu.
-
-2. Select the Azure SQL server **mydbserver** in the list of services. If you used a different server name, choose that name.
-
-3. In the server settings, select **Private endpoint connections** under **Security**.
-
-4. Select **+ Private endpoint**.
-
-5. In **Create a private endpoint**, enter or select this information in the **Basics** tab:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **SQLPrivateEndpoint**. |
- | Region | Select **(US) South Central US.** |
-
-6. Select the **Resource** tab or select **Next: Resource** at the bottom of the page.
-
-7. In the **Resource** tab, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Connection method | Select **Connect to an Azure resource in my directory**. |
- | Subscription | Select your subscription. |
- | Resource type | Select **Microsoft.Sql/servers**. |
- | Resource | Select **mydbserver** or the name of the server you created in the previous step.
- | Target subresource | Select **sqlServer**. |
-
-8. Select the **Configuration** tab or select **Next: Configuration** at the bottom of the page.
-
-9. In the **Configuration** tab, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Networking** | |
- | Virtual network | Select **myPEVnet**. |
- | Subnet | Select **PrivateEndpointSubnet**. |
- | **Private DNS integration** | |
- | Integrate with private DNS zone | Select **Yes**. |
- | Subscription | Select your subscription. |
- | Private DNS zones | Leave the default **privatelink.database.windows.net**. |
-
-10. Select the **Review + create** tab or select **Review + create** at the bottom of the page.
-
-11. Select **Create**.
-
-12. After the endpoint is created, select **Firewalls and virtual networks** under **Security**.
-
-13. In **Firewalls and virtual networks**, select **Yes** next to **Allow Azure services and resources to access this server**.
-
-14. Select **Save**.
-
-## Connect the virtual networks using virtual network peering
-
-In this section, we connect virtual networks **myVMVNet** and **myPEVNet** to **myAzFwVNet** using peering. There isn't direct connectivity between **myVMVNet** and **myPEVNet**.
-
-1. In the portal's search bar, enter **myAzFwVNet**.
-
-2. Select **Peerings** under **Settings** menu and select **+ Add**.
-
-3. In **Add Peering** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name of the peering from myAzFwVNet to remote virtual network | Enter **myAzFwVNet-to-myVMVNet**. |
- | **Peer details** | |
- | Virtual network deployment model | Leave the default **Resource Manager**. |
- | I know my resource ID | Leave unchecked. |
- | Subscription | Select your subscription. |
- | Virtual network | Select **myVMVNet**. |
- | Name of the peering from remote virtual network to myAzFwVNet | Enter **myVMVNet-to-myAzFwVNet**. |
- | **Configuration** | |
- | **Configure virtual network access settings** | |
- | Allow virtual network access from myAzFwVNet to remote virtual network | Leave the default **Enabled**. |
- | Allow virtual network access from remote virtual network to myAzFwVNet | Leave the default **Enabled**. |
- | **Configure forwarded traffic settings** | |
- | Allow forwarded traffic from remote virtual network to myAzFwVNet | Select **Enabled**. |
- | Allow forwarded traffic from myAzFwVNet to remote virtual network | Select **Enabled**. |
- | **Configure gateway transit settings** | |
- | Allow gateway transit | Leave unchecked |
-
-4. Select **OK**.
-
-5. Select **+ Add**.
-
-6. In **Add Peering** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name of the peering from myAzFwVNet to remote virtual network | Enter **myAzFwVNet-to-myPEVNet**. |
- | **Peer details** | |
- | Virtual network deployment model | Leave the default **Resource Manager**. |
- | I know my resource ID | Leave unchecked. |
- | Subscription | Select your subscription. |
- | Virtual network | Select **myPEVNet**. |
- | Name of the peering from remote virtual network to myAzFwVNet | Enter **myPEVNet-to-myAzFwVNet**. |
- | **Configuration** | |
- | **Configure virtual network access settings** | |
- | Allow virtual network access from myAzFwVNet to remote virtual network | Leave the default **Enabled**. |
- | Allow virtual network access from remote virtual network to myAzFwVNet | Leave the default **Enabled**. |
- | **Configure forwarded traffic settings** | |
- | Allow forwarded traffic from remote virtual network to myAzFwVNet | Select **Enabled**. |
- | Allow forwarded traffic from myAzFwVNet to remote virtual network | Select **Enabled**. |
- | **Configure gateway transit settings** | |
- | Allow gateway transit | Leave unchecked |
-
-7. Select **OK**.
-
-## Link the virtual networks to the private DNS zone
-
-In this section, we link virtual networks **myVMVNet** and **myAzFwVNet** to the **privatelink.database.windows.net** private DNS zone. This zone was created when we created the private endpoint.
-
-The link is required for the VM and firewall to resolve the FQDN of database to its private endpoint address. Virtual network **myPEVNet** was automatically linked when the private endpoint was created.
-
->[!NOTE]
->If you don't link the VM and firewall virtual networks to the private DNS zone, both the VM and firewall will still be able to resolve the SQL Server FQDN. They will resolve to its public IP address.
-
-1. In the portal's search bar, enter **privatelink.database**.
-
-2. Select **privatelink.database.windows.net** in the search results.
-
-3. Select **Virtual network links** under **Settings**.
-
-4. Select **+ Add**
-
-5. In **Add virtual network link** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Link name | Enter **Link-to-myVMVNet**. |
- | **Virtual network details** | |
- | I know the resource ID of virtual network | Leave unchecked. |
- | Subscription | Select your subscription. |
- | Virtual network | Select **myVMVNet**. |
- | **CONFIGURATION** | |
- | Enable auto registration | Leave unchecked. |
-
-6. Select **OK**.
-
-## Configure an application rule with SQL FQDN in Azure Firewall
-
-In this section, configure an application rule to allow communication between **myVM** and the private endpoint for SQL Server **mydbserver.database.windows.net**.
-
-This rule allows communication through the firewall that we created in the previous steps.
-
-1. In the portal's search bar, enter **myAzureFirewall**.
-
-2. Select **myAzureFirewall** in the search results.
-
-3. Select **Rules** under **Settings** in the **myAzureFirewall** overview.
-
-4. Select the **Application rule collection** tab.
-
-5. Select **+ Add application rule collection**.
-
-6. In **Add application rule collection** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **SQLPrivateEndpoint**. |
- | Priority | Enter **100**. |
- | Action | Enter **Allow**. |
- | **Rules** | |
- | **FQDN tags** | |
- | Name | Leave blank. |
- | Source type | Leave the default **IP address**. |
- | Source | Leave blank. |
- | FQDN tags | Leave the default **0 selected**. |
- | **Target FQDNs** | |
- | Name | Enter **SQLPrivateEndpoint**. |
- | Source type | Leave the default **IP address**. |
- | Source | Enter **10.1.0.0/16**. |
- | Protocol: Port | Enter **mssql:1433**. |
- | Target FQDNs | Enter **mydbserver.database.windows.net**. |
-
-7. Select **Add**.
-
-## Route traffic between the virtual machine and private endpoint through Azure Firewall
-
-We didn't create a virtual network peering directly between virtual networks **myVMVNet** and **myPEVNet**. The virtual machine **myVM** doesn't have a route to the private endpoint we created.
-
-In this section, we create a route table with a custom route.
-
-The route sends traffic from the **myVM** subnet to the address space of virtual network **myPEVNet**, through the Azure Firewall.
-
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-
-2. Type **route table** in the search box and press **Enter**.
-
-3. Select **Route table** and then select **Create**.
-
-4. On the **Create Route table** page, use the following table to configure the route table:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Region | Select **South Central US**. |
- | Name | Enter **VMsubnet-to-AzureFirewall**. |
- | Propagate gateway routes | Select **No**. |
-
-5. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-6. When you see the **Validation passed** message, select **Create**.
-
-7. Once the deployment completes select **Go to resource**.
-
-8. Select **Routes** under **Settings**.
-
-9. Select **+ Add**.
-
-10. On the **Add route** page, enter, or select this information:
-
- | Setting | Value |
- | - | -- |
- | Route name | Enter **myVMsubnet-to-privateendpoint**. |
- | Address prefix | Enter **10.2.0.0/16**. |
- | Next hop type | Select **Virtual appliance**. |
- | Next hop address | Enter **10.0.0.4**. |
-
-11. Select **OK**.
-
-12. Select **Subnets** under **Settings**.
-
-13. Select **+ Associate**.
-
-14. On the **Associate subnet** page, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Virtual network | Select **myVMVNet**. |
- | Subnet | Select **VMSubnet**. |
-
-15. Select **OK**.
-
-## Connect to the virtual machine from your client computer
-
-Connect to the VM **myVm** from the internet as follows:
-
-1. In the portal's search bar, enter **myVm-ip**.
-
-2. Select **myVM-ip** in the search results.
-
-3. Copy or write down the value under **IP address**.
-
-4. If you're using Windows 10, run the following command using PowerShell. For other Windows client versions, use an SSH client like [Putty](https://www.putty.org/):
-
-* Replace **username** with the admin username you entered during VM creation.
-
-* Replace **IPaddress** with the IP address from the previous step.
-
- ```bash
- ssh username@IPaddress
- ```
-
-5. Enter the password you defined when creating **myVm**
-
-## Access SQL Server privately from the virtual machine
-
-In this section, you connect privately to the SQL Database using the private endpoint.
-
-1. Enter `nslookup mydbserver.database.windows.net`
-
- You receive a message similar to the following output:
-
- ```output
- Server: 127.0.0.53
- Address: 127.0.0.53#53
-
- Non-authoritative answer:
- mydbserver.database.windows.net canonical name = mydbserver.privatelink.database.windows.net.
- Name: mydbserver.privatelink.database.windows.net
- Address: 10.2.0.4
- ```
-
-2. Install [SQL Server command-line tools](/sql/linux/quickstart-install-connect-ubuntu#tools).
-
-3. Run the following command to connect to the SQL Server. Use the server admin and password you defined when you created the SQL Server in the previous steps.
-
-* Replace **\<ServerAdmin>** with the admin username you entered during the SQL server creation.
-
-* Replace **\<YourPassword>** with the admin password you entered during SQL server creation.
-
- ```bash
- sqlcmd -S mydbserver.database.windows.net -U '<ServerAdmin>' -P '<YourPassword>'
- ```
-4. A SQL command prompt is displayed on successful sign in. Enter **exit** to exit the **sqlcmd** tool.
-
-5. Close the connection to **myVM** by entering **exit**.
-
-## Validate the traffic in Azure Firewall logs
-
-1. In the Azure portal, select **All Resources** and select your Log Analytics workspace.
-
-2. Select **Logs** under **General** in the Log Analytics workspace page.
-
-3. Select the blue **Get Started** button.
-
-4. In the **Example queries** window, select **Firewalls** under **All Queries**.
-
-5. Select the **Run** button under **Application rule log data**.
-
-6. In the log query output, verify **mydbserver.database.windows.net** is listed under **FQDN** and **SQLPrivateEndpoint** is listed under **RuleCollection**.
-
-## Clean up resources
-
-When you're done using the resources, delete the resource group and all of the resources it contains:
-
-1. Enter **myResourceGroup** in the **Search** box at the top of the portal and select **myResourceGroup** from the search results.
-
-1. Select **Delete resource group**.
-
-1. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
- ## Next steps
-In this article, you explored different scenarios that you can use to restrict traffic between a virtual machine and a private endpoint using Azure Firewall.
+In this article, you explored different scenarios that you can use to restrict traffic between a virtual machine and a private endpoint using Azure Firewall.
-You connected to the VM and securely communicated to the database through Azure Firewall using private link.
+For a tutorial on how to configure Azure Firewall to inspect traffic destined to a private endpoint, see [Tutorial: Inspect private endpoint traffic with Azure Firewall](tutorial-inspect-traffic-azure-firewall.md).
To learn more about private endpoint, see [What is Azure Private Endpoint?](private-endpoint-overview.md).
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure Data Factory | Microsoft.DataFactory/factories | dataFactory | | Azure Data Explorer | Microsoft.Kusto/clusters | cluster | | Azure Database for MariaDB | Microsoft.DBforMariaDB/servers | mariadbServer |
-| Azure Database for MySQL | Microsoft.DBforMySQL/servers | mysqlServer |
+| Azure Database for MySQL - Single Server | Microsoft.DBforMySQL/servers | mysqlServer |
+| Azure Database for MySQL - Flexible Server | Microsoft.DBforMySQL/flexibleServers | mysqlServer |
| Azure Database for PostgreSQL - Single server | Microsoft.DBforPostgreSQL/servers | postgresqlServer | | Azure Device Provisioning Service | Microsoft.Devices/provisioningServices | iotDps | | Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub |
private-link Tutorial Inspect Traffic Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-inspect-traffic-azure-firewall.md
+
+ Title: 'Tutorial: Inspect private endpoint traffic with Azure Firewall'
+description: Learn how to inspect private endpoint traffic with Azure Firewall.
+++++ Last updated : 08/15/2023++
+# Tutorial: Inspect private endpoint traffic with Azure Firewall
+
+Azure Private Endpoint is the fundamental building block for Azure Private Link. Private endpoints enable Azure resources deployed in a virtual network to communicate privately with private link resources.
+
+Private endpoints allow resources access to the private link service deployed in a virtual network. Access to the private endpoint through virtual network peering and on-premises network connections extends that connectivity.
+
+You may need to inspect or block traffic from clients to the services exposed via private endpoints. Complete this inspection by using [Azure Firewall](../firewall/overview.md) or a third-party network virtual appliance.
+
+For more information and scenarios that involve private endpoints and Azure Firewall, see [Azure Firewall scenarios to inspect traffic destined to a private endpoint](inspect-traffic-with-azure-firewall.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a virtual network and bastion host for the test virtual machine.
+> * Create the private endpoint virtual network.
+> * Create a test virtual machine.
+> * Deploy Azure Firewall.
+> * Create an Azure SQL database.
+> * Create a private endpoint for Azure SQL.
+> * Create a virtual network peering between the private endpoint virtual network and the test virtual machine virtual network.
+> * Link the virtual networks to a private DNS zone.
+> * Configure application rules in Azure Firewall for Azure SQL.
+> * Route traffic between the test virtual machine and Azure SQL through Azure Firewall.
+> * Test the connection to Azure SQL and validate in Azure Firewall logs.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+
+- A Log Analytics workspace. For more information about creating a Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+
+## Sign in to the Azure portal
+
+Sign in to the [Azure portal](https://portal.azure.com).
++++
+## Deploy Azure Firewall
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results.
+
+1. In **Firewalls**, select **+ Create**.
+
+1. Enter or select the following information in the **Basics** tab of **Create a firewall**:
+
+ | Setting | Value |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Name | Enter **firewall**. |
+ | Region | Select **East US 2**. |
+ | Availability zone | Select **None**. |
+ | Firewall SKU | Select **Standard**. |
+ | Firewall management | Select **Use a Firewall Policy to manage this firewall**. |
+ | Firewall policy | Select **Add new**. </br> Enter **firewall-policy** in **Policy name**. </br> Select **East US 2** in region. </br> Select **OK**. |
+ | Choose a virtual network | Select **Create new**. |
+ | Virtual network name | Enter **vnet-firewall**. |
+ | Address space | Enter **10.2.0.0/16**. |
+ | Subnet address space | Enter **10.2.1.0/26**. |
+ | Public IP address | Select **Add new**. </br> Enter **public-ip-firewall** in **Name**. </br> Select **OK**. |
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+Wait for the firewall deployment to complete before you continue.
+
+## Enable firewall logs
+
+In this section, you enable the firewall logs and send them to the log analytics workspace.
+
+> [!NOTE]
+> You must have a log analytics workspace in your subscription before you can enable firewall logs. For more information, see [Prerequisites](#prerequisites).
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results.
+
+1. Select **firewall**.
+
+1. In **Monitoring** select **Diagnostic settings**.
+
+1. Select **+ Add diagnostic setting**.
+
+1. In **Diagnostic setting** enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Diagnostic setting name | Enter **diagnostic-setting-firewall**. |
+ | **Logs** | |
+ | Categories | Select **Azure Firewall Application Rule (Legacy Azure Diagnostics)** and **Azure Firewall Network Rule (Legacy Azure Diagnostics)**. |
+ | **Destination details** | |
+ | Destination | Select **Send to Log Analytics workspace**. |
+ | Subscription | Select your subscription. |
+    | Log Analytics workspace | Select your Log Analytics workspace. |
+
+1. Select **Save**.
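+
+As a rough command-line equivalent, the following sketch creates the same diagnostic setting with the Azure CLI. It's not part of the portal tutorial; it assumes your workspace is named **log-analytics-workspace** in **test-rg**, so adjust the names to match your environment.
+
+```bash
+# Look up the resource IDs for the firewall and the workspace.
+firewall_id=$(az network firewall show \
+    --resource-group test-rg \
+    --name firewall \
+    --query id --output tsv)
+
+workspace_id=$(az monitor log-analytics workspace show \
+    --resource-group test-rg \
+    --workspace-name log-analytics-workspace \
+    --query id --output tsv)
+
+# Send the legacy application and network rule logs to the workspace.
+az monitor diagnostic-settings create \
+    --name diagnostic-setting-firewall \
+    --resource "$firewall_id" \
+    --workspace "$workspace_id" \
+    --logs '[{"category":"AzureFirewallApplicationRule","enabled":true},{"category":"AzureFirewallNetworkRule","enabled":true}]'
+```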
+
+## Create an Azure SQL database
+
+1. In the search box at the top of the portal, enter **SQL**. Select **SQL databases** in the search results.
+
+1. In **SQL databases**, select **+ Create**.
+
+1. In the **Basics** tab of **Create SQL Database**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Database details** | |
+ | Database name | Enter **sql-db**. |
+ | Server | Select **Create new**. </br> Enter **sql-server-1** in **Server name** (Server names must be unique, replace **sql-server-1** with a unique value). </br> Select **(US) East US 2** in **Location**. </br> Select **Use SQL authentication**. </br> Enter a server admin sign-in and password. </br> Select **OK**. |
+ | Want to use SQL elastic pool? | Select **No**. |
+ | Workload environment | Leave the default of **Production**. |
+ | **Backup storage redundancy** | |
+ | Backup storage redundancy | Select **Locally redundant backup storage**. |
+
+1. Select **Next: Networking**.
+
+1. In the **Networking** tab of **Create SQL Database**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Network connectivity** | |
+ | Connectivity method | Select **Private endpoint**. |
+ | **Private endpoints** | |
+ | Select **+Add private endpoint**. | |
+ | **Create private endpoint** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | Location | Select **East US 2**. |
+ | Name | Enter **private-endpoint-sql**. |
+ | Target subresource | Select **SqlServer**. |
+ | **Networking** | |
+ | Virtual network | Select **vnet-private-endpoint**. |
+ | Subnet | Select **subnet-private-endpoint**. |
+ | **Private DNS integration** | |
+ | Integrate with private DNS zone | Select **Yes**. |
+ | Private DNS zone | Leave the default of **privatelink.database.windows.net**. |
+
+1. Select **OK**.
+
+1. Select **Review + create**.
+
+1. Select **Create**.
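+
+The following Azure CLI sketch shows one way to script the same server, database, and private endpoint. It isn't part of the portal tutorial: it assumes **vnet-private-endpoint** and **subnet-private-endpoint** already exist in **test-rg**, and the placeholder credentials are yours to replace. As in the portal steps, replace **sql-server-1** with a unique server name.
+
+```bash
+# Logical SQL server and database.
+az sql server create \
+    --resource-group test-rg \
+    --name sql-server-1 \
+    --location eastus2 \
+    --admin-user '<server-admin>' \
+    --admin-password '<admin-password>'
+
+az sql db create \
+    --resource-group test-rg \
+    --server sql-server-1 \
+    --name sql-db \
+    --backup-storage-redundancy Local
+
+# Private endpoint for the SqlServer subresource.
+sql_server_id=$(az sql server show \
+    --resource-group test-rg \
+    --name sql-server-1 \
+    --query id --output tsv)
+
+az network private-endpoint create \
+    --resource-group test-rg \
+    --name private-endpoint-sql \
+    --vnet-name vnet-private-endpoint \
+    --subnet subnet-private-endpoint \
+    --private-connection-resource-id "$sql_server_id" \
+    --group-id sqlServer \
+    --connection-name connection-sql
+
+# Private DNS zone and zone group (the portal creates these for you).
+az network private-dns zone create \
+    --resource-group test-rg \
+    --name privatelink.database.windows.net
+
+az network private-endpoint dns-zone-group create \
+    --resource-group test-rg \
+    --endpoint-name private-endpoint-sql \
+    --name zone-group-sql \
+    --private-dns-zone privatelink.database.windows.net \
+    --zone-name sql
+```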
+
+## Connect virtual networks with virtual network peering
+
+In this section, you connect the virtual networks with virtual network peering. The networks **vnet-1** and **vnet-private-endpoint** are connected to **vnet-firewall**. There isn't direct connectivity between **vnet-1** and **vnet-private-endpoint**.
+
+1. In the search box at the top of the portal, enter **Virtual networks**. Select **Virtual networks** in the search results.
+
+1. Select **vnet-firewall**.
+
+1. In **Settings** select **Peerings**.
+
+1. In **Peerings** select **+ Add**.
+
+1. In **Add peering**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **This virtual network** | |
+ | Peering link name | Enter **vnet-firewall-to-vnet-1**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter **vnet-1-to-vnet-firewall**. |
+ | Virtual network deployment model | Select **Resource manager**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-1**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+
+1. Select **Add**.
+
+1. In **Peerings** select **+ Add**.
+
+1. In **Add peering**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **This virtual network** | |
+ | Peering link name | Enter **vnet-firewall-to-vnet-private-endpoint**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter **vnet-private-endpoint-to-vnet-firewall**. |
+ | Virtual network deployment model | Select **Resource manager**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-private-endpoint**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+
+1. Select **Add**.
+
+1. Verify the **Peering status** displays **Connected** for both network peers.
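+
+A hedged Azure CLI sketch for the first peering pair follows; the **vnet-firewall** and **vnet-private-endpoint** pair mirrors the same two commands with the names swapped. The flags map to the **Allow** settings selected above.
+
+```bash
+# Hub to spoke: vnet-firewall to vnet-1.
+az network vnet peering create \
+    --resource-group test-rg \
+    --name vnet-firewall-to-vnet-1 \
+    --vnet-name vnet-firewall \
+    --remote-vnet vnet-1 \
+    --allow-vnet-access \
+    --allow-forwarded-traffic
+
+# Spoke to hub: vnet-1 back to vnet-firewall.
+az network vnet peering create \
+    --resource-group test-rg \
+    --name vnet-1-to-vnet-firewall \
+    --vnet-name vnet-1 \
+    --remote-vnet vnet-firewall \
+    --allow-vnet-access \
+    --allow-forwarded-traffic
+```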
+
+## Link the virtual networks to the private DNS zone
+
+The private DNS zone created during the private endpoint creation in the previous section must be linked to the **vnet-1** and **vnet-firewall** virtual networks.
+
+1. In the search box at the top of the portal, enter **Private DNS zone**. Select **Private DNS zones** in the search results.
+
+1. Select **privatelink.database.windows.net**.
+
+1. In **Settings** select **Virtual network links**.
+
+1. Select **+ Add**.
+
+1. In **Add virtual network link**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Virtual network link** | |
+ | Virtual network link name | Enter **link-to-vnet-1**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-1 (test-rg)**. |
+ | Configuration | Leave the default of unchecked for **Enable auto registration**. |
+
+1. Select **OK**.
+
+1. Select **+ Add**.
+
+1. In **Add virtual network link**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Virtual network link** | |
+ | Virtual network link name | Enter **link-to-vnet-firewall**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-firewall (test-rg)**. |
+ | Configuration | Leave the default of unchecked for **Enable auto registration**. |
+
+1. Select **OK**.
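+
+The equivalent Azure CLI sketch for both links follows. It assumes the zone and virtual networks live in **test-rg**; auto registration stays disabled, matching the portal steps.
+
+```bash
+az network private-dns link vnet create \
+    --resource-group test-rg \
+    --zone-name privatelink.database.windows.net \
+    --name link-to-vnet-1 \
+    --virtual-network vnet-1 \
+    --registration-enabled false
+
+az network private-dns link vnet create \
+    --resource-group test-rg \
+    --zone-name privatelink.database.windows.net \
+    --name link-to-vnet-firewall \
+    --virtual-network vnet-firewall \
+    --registration-enabled false
+```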
+
+## Create a route between vnet-1 and vnet-private-endpoint
+
+There's no direct network link between **vnet-1** and **vnet-private-endpoint**. You must create a route to allow traffic to flow between the virtual networks through Azure Firewall.
+
+The route sends traffic from **vnet-1** to the address space of virtual network **vnet-private-endpoint**, through the Azure Firewall.
+
+1. In the search box at the top of the portal, enter **Route tables**. Select **Route tables** in the search results.
+
+1. Select **+ Create**.
+
+1. In the **Basics** tab of **Create Route table**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Region | Select **East US 2**. |
+ | Name | Enter **vnet-1-to-vnet-firewall**. |
+ | Propagate gateway routes | Leave the default of **Yes**. |
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+1. In the search box at the top of the portal, enter **Route tables**. Select **Route tables** in the search results.
+
+1. Select **vnet-1-to-vnet-firewall**.
+
+1. In **Settings** select **Routes**.
+
+1. Select **+ Add**.
+
+1. In **Add route**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Route name | Enter **subnet-1-to-subnet-private-endpoint**. |
+ | Destination type | Select **IP Addresses**. |
+ | Destination IP addresses/CIDR ranges | Enter **10.1.0.0/16**. |
+ | Next hop type | Select **Virtual appliance**. |
+ | Next hop address | Enter **10.2.1.4**. |
+
+1. Select **Add**.
+
+1. In **Settings**, select **Subnets**.
+
+1. Select **+ Associate**.
+
+1. In **Associate subnet**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+    | Virtual network | Select **vnet-1 (test-rg)**. |
+ | Subnet | Select **subnet-1**. |
+
+1. Select **OK**.
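+
+Scripted, the route table, route, and subnet association look like the following Azure CLI sketch. The next hop **10.2.1.4** assumes the firewall received the first usable address in **10.2.1.0/26**, as in this tutorial; check your firewall's private IP before relying on it.
+
+```bash
+az network route-table create \
+    --resource-group test-rg \
+    --name vnet-1-to-vnet-firewall \
+    --location eastus2
+
+# Send traffic destined for vnet-private-endpoint through the firewall.
+az network route-table route create \
+    --resource-group test-rg \
+    --route-table-name vnet-1-to-vnet-firewall \
+    --name subnet-1-to-subnet-private-endpoint \
+    --address-prefix 10.1.0.0/16 \
+    --next-hop-type VirtualAppliance \
+    --next-hop-ip-address 10.2.1.4
+
+# Associate the route table with the test VM subnet.
+az network vnet subnet update \
+    --resource-group test-rg \
+    --vnet-name vnet-1 \
+    --name subnet-1 \
+    --route-table vnet-1-to-vnet-firewall
+```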
+
+## Configure an application rule in Azure Firewall
+
+Create an application rule to allow communication from **vnet-1** to the private endpoint of the Azure SQL server **sql-server-1.database.windows.net**. Replace **sql-server-1** with the name of your Azure SQL server.
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewall Policies** in the search results.
+
+1. In **Firewall Policies**, select **firewall-policy**.
+
+1. In **Settings** select **Application rules**.
+
+1. Select **+ Add a rule collection**.
+
+1. In **Add a rule collection**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Name | Enter **rule-collection-sql**. |
+ | Rule collection type | Leave the selection of **Application**. |
+ | Priority | Enter **100**. |
+ | Rule collection action | Select **Allow**. |
+ | Rule collection group | Leave the default of **DefaultApplicationRuleCollectionGroup**. |
+ | **Rules** | |
+ | **Rule 1** | |
+ | Name | Enter **SQLPrivateEndpoint**. |
+ | Source type | Select **IP Address**. |
+    | Source | Enter **10.0.0.0/16**. |
+    | Protocol | Enter **mssql:1433**. |
+ | Destination type | Select **FQDN**. |
+ | Destination | Enter **sql-server-1.database.windows.net**. |
+
+1. Select **Add**.
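+
+The same rule can be scripted with the Azure CLI, as in the following sketch. It assumes **DefaultApplicationRuleCollectionGroup** doesn't exist yet in **firewall-policy** (the portal creates it on first use), and the group priority of 300 is an illustrative choice, not a value from this tutorial.
+
+```bash
+# Create the rule collection group if it doesn't exist yet.
+az network firewall policy rule-collection-group create \
+    --resource-group test-rg \
+    --policy-name firewall-policy \
+    --name DefaultApplicationRuleCollectionGroup \
+    --priority 300
+
+# Allow mssql/1433 from vnet-1 to the SQL server FQDN.
+az network firewall policy rule-collection-group collection add-filter-collection \
+    --resource-group test-rg \
+    --policy-name firewall-policy \
+    --rule-collection-group-name DefaultApplicationRuleCollectionGroup \
+    --name rule-collection-sql \
+    --collection-priority 100 \
+    --action Allow \
+    --rule-name SQLPrivateEndpoint \
+    --rule-type ApplicationRule \
+    --protocols Mssql=1433 \
+    --source-addresses 10.0.0.0/16 \
+    --target-fqdns sql-server-1.database.windows.net
+```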
+
+## Test connection to Azure SQL from virtual machine
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+1. Select **vm-1**.
+
+1. In **Operations** select **Bastion**.
+
+1. Enter the username and password for the virtual machine.
+
+1. Select **Connect**.
+
+1. To verify name resolution of the private endpoint, enter the following command in the terminal window:
+
+ ```bash
+ nslookup sql-server-1.database.windows.net
+ ```
+
+ You receive a message similar to the following example. The IP address returned is the private IP address of the private endpoint.
+
+ ```output
+ Server: 127.0.0.53
+ Address: 127.0.0.53#53
+
+ Non-authoritative answer:
+    sql-server-8675.database.windows.net    canonical name = sql-server-8675.privatelink.database.windows.net.
+    Name:   sql-server-8675.privatelink.database.windows.net
+ Address: 10.1.0.4
+ ```
+
+1. Install the SQL Server command-line tools from [Install the SQL Server command-line tools sqlcmd and bcp on Linux](/sql/linux/sql-server-linux-setup-tools). Proceed with the next steps after the installation is complete.
+
+1. Use the following commands to connect to the SQL server you created in the previous steps.
+
+ * Replace **\<server-admin>** with the admin username you entered during the SQL server creation.
+
+ * Replace **\<admin-password>** with the admin password you entered during SQL server creation.
+
+ * Replace **sql-server-1** with the name of your SQL server.
+
+ ```bash
+ sqlcmd -S sql-server-1.database.windows.net -U '<server-admin>' -P '<admin-password>'
+ ```
+
+1. A SQL command prompt is displayed on successful sign in. Enter **exit** to exit the **sqlcmd** tool.
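+
+If you prefer a non-interactive check, **sqlcmd** can run a single query and exit with the `-Q` option. This sketch uses the same placeholder conventions as the previous step:
+
+```bash
+sqlcmd -S sql-server-1.database.windows.net -U '<server-admin>' -P '<admin-password>' -Q "SELECT @@VERSION;"
+```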
+
+## Validate traffic in the Azure Firewall logs
+
+1. In the search box at the top of the portal, enter **Log Analytics**. Select **Log Analytics** in the search results.
+
+1. Select your Log Analytics workspace. In this example, the workspace is named **log-analytics-workspace**.
+
+1. In the General settings, select **Logs**.
+
+1. In the example **Queries** search box, enter **Application rule**. In the results under **Network**, select **Run** for **Application rule log data**.
+
+1. In the log query output, verify **sql-server-1.database.windows.net** is listed under **FQDN** and **SQLPrivateEndpoint** is listed under **Rule**.
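+
+You can run a similar check from the command line. The following sketch is an illustration, not part of the tutorial: it assumes the `log-analytics` CLI extension and queries the legacy **AzureDiagnostics** table that the diagnostic setting writes to.
+
+```bash
+az extension add --name log-analytics
+
+workspace_guid=$(az monitor log-analytics workspace show \
+    --resource-group test-rg \
+    --workspace-name log-analytics-workspace \
+    --query customerId --output tsv)
+
+# Most recent application rule hits, including the SQL FQDN and rule name.
+az monitor log-analytics query \
+    --workspace "$workspace_guid" \
+    --analytics-query "AzureDiagnostics | where Category == 'AzureFirewallApplicationRule' | project TimeGenerated, msg_s | top 10 by TimeGenerated"
+```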
++
+## Next steps
+
+Advance to the next article to learn how to use a private endpoint with Azure Private Resolver:
+> [!div class="nextstepaction"]
+> [Create a private endpoint DNS infrastructure with Azure Private Resolver for an on-premises workload](tutorial-dns-on-premises-private-resolver.md)
quotas Quotas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/quotas-overview.md
Title: Quotas overview description: Learn how to view quotas and request increases in the Azure portal. Previously updated : 07/22/2022 Last updated : 08/17/2023
Many Azure services have quotas, which are the assigned number of resources for your Azure subscription. Each quota represents a specific countable resource, such as the number of virtual machines you can create, the number of storage accounts you can use concurrently, the number of networking resources you can consume, or the number of API calls to a particular service you can make.
-The concept of quotas is designed to help protect customers from things like inaccurately resourced deployments and mistaken consumption. For Azure, it helps minimize risks from deceptive or inappropriate consumption and unexpected demand. Quotas are set and enforced in the scope of the [subscription](/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide).
+The concept of quotas is designed to help protect customers from things like inaccurately resourced deployments and mistaken consumption. For Azure, it helps minimize risks from deceptive or inappropriate consumption and unexpected demand. Quotas are set and enforced in the scope of the [subscription](/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings).
## Quotas or limits?
Different entry points, data views, actions, and programming options are availab
| Option | Azure portal | Quota APIs | Support API |
|||||
-| Summary | The portal provides a customer-friendly user interface for accessing quota information.<br><br>From [Azure Home](https://portal.azure.com/#home), **Quotas** is a centralized location to directly view quotas and quota usage and request quota increases.<br><br>From the Subscriptions page, **Quotas + usage** offers quick access to requesting quota increases for a given subscription.| The [Azure Quota API](/rest/api/reserved-vm-instances/quotaapi) programmatically provides the ability to get current quota limits, find current usage, and request quota increases by subscription, resource provider, and location. | The [Azure Support REST API](/rest/api/support/) enables customers to create service quota support tickets programmatically. |
+| Summary | The portal provides a customer-friendly user interface for accessing quota information.<br><br>From [Azure Home](https://portal.azure.com/#home), **Quotas** is a centralized location to directly view quotas and quota usage and request quota increases.<br><br>From the Subscriptions page, **Quotas + usage** offers quick access to requesting quota increases for a given subscription.| The [Azure Quota Service REST API](/rest/api/quota) programmatically provides the ability to get current quota limits, find current usage, and request quota increases by subscription, resource provider, and location. | The [Azure Support REST API](/rest/api/support/) enables customers to create service quota support tickets programmatically. |
| Availability | All customers | All customers | All customers with unified, premier, professional direct support plans |
| Which to choose? | Useful for customers desiring a central location and an efficient visual interface for viewing and managing quotas. Provides quick access to requesting quota increases. | Useful for customers who want granular and programmatic control of quota management for adjustable quotas. Intended for end to end automation of quota usage validation and quota increase requests through APIs. | Customers who want end to end automation of support request creation and management. Provides an alternative path to Azure portal for requests. |
| Providers supported | All providers | Compute, Machine Learning | All providers |
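
If you want to explore the programmatic path, the `quota` CLI extension wraps the Quota REST API. The following is a hedged sketch; the subscription ID and location are placeholders, and the scope format follows the Quota API conventions:

```bash
az extension add --name quota

# List current quotas and limits for Compute in one region.
az quota list \
    --scope "/subscriptions/<subscription-id>/providers/Microsoft.Compute/locations/eastus"
```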
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [Contributor](#contributor) | Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries. | b24988ac-6180-42a0-ab88-20f7382dd24c | > | [Owner](#owner) | Grants full access to manage all resources, including the ability to assign roles in Azure RBAC. | 8e3af657-a8ff-443c-a75c-2fe8c4bcb635 | > | [Reader](#reader) | View all resources, but does not allow you to make any changes. | acdd72a7-3385-48ef-bd42-f606fba81ae7 |
+> | [Role Based Access Control Administrator (Preview)](#role-based-access-control-administrator-preview) | Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy. | f58310d9-a9f6-439a-9e8d-f62e7b41a168 |
> | [User Access Administrator](#user-access-administrator) | Lets you manage user access to Azure resources. | 18d7d88d-d35e-4fb5-a5c3-7773c20a72d9 | > | **Compute** | | | > | [Classic Virtual Machine Contributor](#classic-virtual-machine-contributor) | Lets you manage classic virtual machines, but not access to them, and not the virtual network or storage account they're connected to. | d73bb868-a0df-4d4d-bd69-98a00b01fccb |
View all resources, but does not allow you to make any changes. [Learn more](rba
"type": "Microsoft.Authorization/roleDefinitions" } ```
+### Role Based Access Control Administrator (Preview)
+
+Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/write | Create a role assignment at the specified scope. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/delete | Delete a role assignment at the specified scope. |
+> | */read | Read resources of all types, except secrets. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/f58310d9-a9f6-439a-9e8d-f62e7b41a168",
+ "name": "f58310d9-a9f6-439a-9e8d-f62e7b41a168",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/roleAssignments/write",
+ "Microsoft.Authorization/roleAssignments/delete",
+ "*/read",
+ "Microsoft.Support/*"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Role Based Access Control Administrator (Preview)",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
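+
+As a hedged sketch (the principal object ID, subscription ID, and resource group are placeholders, not values from this article), the role can be assigned with the Azure CLI by its well-known ID:
+
+```bash
+az role assignment create \
+    --assignee "<principal-object-id>" \
+    --role "f58310d9-a9f6-439a-9e8d-f62e7b41a168" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+```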
### User Access Administrator
role-based-access-control Tutorial Role Assignments Group Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-group-powershell.md
-+ Last updated 02/02/2019
role-based-access-control Tutorial Role Assignments User Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-user-powershell.md
-+ Last updated 02/02/2019
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
Previously updated : 08/14/2023- Last updated : 08/15/2023 # Azure Route Server support for ExpressRoute and Azure VPN
For example, in the following diagram:
You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN and ExpressRoute gateways are fully managed, you only need to enable the route exchange for the two on-premises networks to talk to each other.
+If you enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes dynamically over BGP. For more information, see [How to configure BGP for Azure VPN Gateway](../vpn-gateway/bgp-howto.md). If you don't enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes that are defined in the local network gateway of *On-premises 1*. For more information, see [Create a local network gateway](../vpn-gateway/tutorial-site-to-site-portal.md#LocalNetworkGateway). Whether you enable BGP on the VPN gateway or not, the gateway advertises the routes it learns to the Route Server if route exchange is enabled. For more information, see [Configure route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange).
+ > [!IMPORTANT]
-> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not necessary to have BGP enabled on the VPN gateway.
+> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not a requirement to have BGP enabled on the VPN gateway to communicate with the Route Server.
-> [!IMPORTANT]
+> [!NOTE]
> When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred.

## Next steps
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
Previously updated : 08/14/2023 Last updated : 08/18/2023 # Azure Route Server frequently asked questions (FAQ)
No, Azure Route Server supports only 16-bit (2 bytes) ASNs.
If the route has the same AS path length, Azure Route Server will program multiple copies of the route, each with a different next hop, to the virtual machines (VMs) in the virtual network. When a VM sends traffic to the destination of this route, the VM host uses Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs, Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
+### Does creating a Route Server affect the operation of existing virtual network gateways (VPN or ExpressRoute)?
+
+Yes. When you create or delete a Route Server in a virtual network that contains a virtual network gateway (ExpressRoute or VPN), expect downtime until the operation is complete. If you have an ExpressRoute circuit connected to the virtual network where you're creating or deleting the Route Server, the downtime doesn't affect the ExpressRoute circuit or its connections to other virtual networks.
+
### Does Azure Route Server exchange routes by default between NVAs and the virtual network gateways (VPN or ExpressRoute)?

No. By default, Azure Route Server doesn't propagate routes it receives from an NVA and a virtual network gateway to each other. The Route Server exchanges these routes after you enable **branch-to-branch** in it.
You can still use Route Server to direct traffic between subnets in different vi
No, Azure Route Server provides transit only between ExpressRoute and Site-to-Site (S2S) VPN gateway connections (when enabling the *branch-to-branch* setting).
+### Can I create an Azure Route Server in a spoke VNet that's connected to a Virtual WAN hub?
+
+No. The spoke VNet can't have a Route Server if it's connected to the virtual WAN hub.
+
## Limitations

### How many Azure Route Servers can I create in a virtual network?
No, Azure Route Server doesn't support configuring a user defined route (UDR) on
No, Azure Route Server doesn't support network security group association to the ***RouteServerSubnet*** subnet.
-### <a name = "limitations"></a>What are Azure Route Server limits?
+### <a name = "limits"></a>What are Azure Route Server limits?
Azure Route Server has the following limits (per deployment).
sap Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md
Run the following command to deploy the control plane:
```bash
-az logout
-cd ~/Azure_SAP_Automated_Deployment
-cp -Rp samples/Terraform/WORKSPACES config
-cd config/WORKSPACES
-
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenantId>"
export env_code="MGMT"
export region_code="WEEU"
-export vnet_code="WEEU"
+export vnet_code="DEP01"
+
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
-export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
-="${subscriptionId}"
-export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES"
-export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh
  --subscription "${ARM_SUBSCRIPTION_ID}" \
  --spn_id "${ARM_CLIENT_ID}" \
  --spn_secret "${ARM_CLIENT_SECRET}" \
- --tenant_id "${ARM_TENANT_ID}" \
- --auto-approve
+ --tenant_id "${ARM_TENANT_ID}"
```
sap Deploy Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-workload-zone.md
export region_code="<region_code>"
export vnet_code="SAP02"
export deployer_environment="MGMT"
-az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
-
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES"
export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
+
+
cd "${CONFIG_REPO_PATH}/LANDSCAPE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE"
parameterFile="${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
$SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \
  --subscription "${ARM_SUBSCRIPTION_ID}" \
  --spn_id "${ARM_CLIENT_ID}" \
  --spn_secret "${ARM_CLIENT_SECRET}" \
- --tenant_id "${ARM_TENANT_ID}" \
- --auto-approve
+ --tenant_id "${ARM_TENANT_ID}"
``` # [Windows](#tab/windows)
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
Follow the guidance here [Configure Azure DevOps for SDAF](configure-devops.md)
You can run the SAP on Azure Deployment Automation Framework from a virtual machine in Azure. The following steps describe how to create the environment.
-Clone the repository and prepare the execution environment by using the following steps on a Linux Virtual machine in Azure:
+> [!IMPORTANT]
+> Ensure that the virtual machine is using either a system assigned or user assigned identity with permissions on the subscription to create resources.
+
Ensure the Virtual Machine has the following prerequisites installed:
+
- git
- jq
- unzip
+ - virtualenv (if running on Ubuntu)
-Ensure that the virtual machine is using either a system assigned or user assigned identity with permissions on the subscription to create resources.
--- Create a directory called `Azure_SAP_Automated_Deployment` for your automation framework deployment.
+You can install the prerequisites on an Ubuntu Virtual Machine by using the following command:
```bash
-mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
+sudo apt-get install -y git jq unzip virtualenv
-git clone https://github.com/Azure/sap-automation.git sap-automation
+```
-git clone https://github.com/Azure/sap-automation-samples.git samples
+You can then install the deployer components using the following commands:
-git clone https://github.com/Azure/sap-automation-bootstrap.git config
+```bash
-cd sap-automation/deploy/scripts
-
+wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh
+chmod +x ./configure_deployer.sh
./configure_deployer.sh
-```
+# Source the new variables
+. /etc/profile.d/deploy_server.sh
+
+```
-> [!TIP]
-> The deployer already clones the required repositories.
## Samples
The ~/Azure_SAP_Automated_Deployment/samples folder contains a set of sample con
```bash
cd ~/Azure_SAP_Automated_Deployment
-cp -Rp samples/Terraform/WORKSPACES config
+cp -Rp samples/Terraform/WORKSPACES ~/Azure_SAP_Automated_Deployment
```
sap Get Sap Installation Media https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md
Next, set up a virtual machine (VM) where you will download the SAP components l
Next, download the SAP installation media to the VM using a script.
-1. Run the Ansible script **playbook_bom_download** with your own information. Enter the actual values **within** double quotes but **without** the triangular brackets. The Ansible command that you run should look like:
+1. Run the Ansible script **playbook_bom_download** with your own information. With the exception of the `s_password` variable, enter the actual values **within** double quotes but **without** the triangular brackets. For the `s_password` variable, use single quotes. The Ansible command that you run should look like:
```bash
   export bom_base_name="<Enter bom base name>"
   export s_user="<s-user>"
- export s_password="<password>"
+ export s_password='<password>'
export storage_account_access_key="<storageAccountAccessKey>"
export sapbits_location_base_path="<containerBasePath>"
export BOM_directory="<BOM_directory_path>"
sap Large Instance High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/large-instance-high-availability-rhel.md
Last updated 04/19/2021
# Azure Large Instances high availability for SAP on RHEL

> [!NOTE]
-> This article contains references to the terms *blacklist* and *slave*, terms that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article.
In this article, you learn how to configure the Pacemaker cluster in RHEL 7 to automate an SAP HANA database failover. You need to have a good understanding of Linux, SAP HANA, and Pacemaker to complete the steps in this guide.
sap Os Upgrade Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/os-upgrade-hana-large-instance.md
This article describes the details of operating system (OS) upgrades on HANA Large Instances (HLI), otherwise known as BareMetal Infrastructure.

> [!NOTE]
-> This article contains references to the terms *blacklist* and *slave*, terms that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article.
>[!NOTE]
>Upgrading the OS is your responsibility. Microsoft operations support can guide you in key areas of the upgrade, but consult your operating system vendor as well when planning an upgrade.
sap Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
| Appliance Template | Date | Description | Creation Link | | | - | -- | - |
+| [**SAP S/4HANA 2022 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/983008db-db92-4d4d-ac79-7e2afa95a2e0)| July 16 2023 |This appliance contains SAP S/4HANA 2022 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=983008db-db92-4d4d-ac79-7e2afa95a2e0&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8)
| [**SAP S/4HANA 2022 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3722f683-42af-4059-90db-4e6a52dc9f54) | April 20 2023 |This appliance contains SAP S/4HANA 2022 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3722f683-42af-4059-90db-4e6a52dc9f54&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP S/4HANA 2022, Fully-Activated Appliance**]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4e6b3ba-ba8f-485f-813f-be27ed5c8311) | December 15 2022 |This appliance contains SAP S/4HANA 2022 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=f4e6b3ba-ba8f-485f-813f-be27ed5c8311&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8)
+| [**SAP S/4HANA 2021 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/a954cc12-da16-4caa-897e-cf84bc74cf15)| April 26 2022 |This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. |[Create Appliance](https://cal.sap.com/registration?sguid=a954cc12-da16-4caa-897e-cf84bc74cf15&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP BW/4HANA 2021 SP04 Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db)| March 23 2023 | This solution offers you an insight of SAP BW/4HANA2021 SP04. SAP BW/4HANA is the next generation Data Warehouse optimized for SAP HANA. Beside the basic BW/4HANA options the solution offers a bunch of SAP HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. | [Create Appliance](https://cal.sap.com/registration?sguid=1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | | [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/5a830213-f0cb-423e-ab5f-f7736e57f5a1)| May 10 2023 | The SAP ABAP Platform on SAP HANA gives you access to your own copy of SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements, including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend / backend connections, etc It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Create Appliance](https://cal.sap.com/registration?sguid=5a830213-f0cb-423e-ab5f-f7736e57f5a1&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP Focused Run 4.0 FP01, unconfigured**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/2afd7a3e-ecf4-4a20-a975-ce05c4360e55) | June 29 2023 | SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment. It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics.| [Create Appliance](https://cal.sap.com/registration?sguid=2afd7a3e-ecf4-4a20-a975-ce05c4360e55&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 3 2018 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
sap Dbms Guide Ha Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ha-ibm.md
IBM Db2 for Linux, UNIX, and Windows (LUW) in [high availability and disaster recovery (HADR) configuration](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html) consists of one node that runs a primary database instance and at least one node that runs a secondary database instance. Changes to the primary database instance are replicated to a secondary database instance synchronously or asynchronously, depending on your configuration.

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
sap Hana Vm Premium Ssd V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md
The caching recommendations for Azure premium disks below are assuming the I/O c
**Recommendation: As a result of these observed I/O patterns by SAP HANA, the caching for the different volumes using Azure premium storage should be set like:**
-- **/hana/data** - no caching or read caching
-- **/hana/log** - no caching - exception for M- and Mv2-Series VMs where Azure Write Accelerator should be enabled
+- **/hana/data** - None or read caching
+- **/hana/log** - None. Enable Write Accelerator for M- and Mv2-Series VMs, the option in the Azure portal is "None + Write Accelerator."
- **/hana/shared** - read caching
- **OS disk** - don't change default caching that is set by Azure at creation time of the VM
sap High Availability Guide Rhel Ibm Db2 Luw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-ibm-db2-luw.md
IBM Db2 for Linux, UNIX, and Windows (LUW) in [high availability and disaster recovery (HADR) configuration](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c0011267.html) consists of one node that runs a primary database instance and at least one node that runs a secondary database instance. Changes to the primary database instance are replicated to a secondary database instance synchronously or asynchronously, depending on your configuration.

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration.
sap High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs.md
This article describes how to deploy the virtual machines, configure the virtual
This guide describes how to set up a highly available NFS server that is used by two SAP systems, NW1 and NW2. The names of the resources (for example virtual machines, virtual networks) in the example assume that you have used the [SAP file server template][template-file-server] with resource prefix **prod**.

> [!NOTE]
-> This article contains references to the terms *slave* and *master*, terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article.
Read the following SAP Notes and papers first
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
Select an area for resources about how to integrate SAP and Azure in that space.
| [Microsoft Teams](#microsoft-teams) | Discover collaboration scenarios boosting your daily productivity by interacting with your SAP applications directly from Microsoft Teams. |
| [Microsoft Power Platform](#microsoft-power-platform) | Learn about the available [out-of-the-box SAP applications](/power-automate/sap-integration/solutions) enabling your business users to achieve more with less. |
| [SAP Fiori](#sap-fiori) | Increase performance and security of your SAP Fiori applications by integrating them with Azure services. |
-| [Azure Active Directory (Azure AD)](#azure-ad) | Ensure end-to-end SAP user authentication and authorization with Azure Active Directory. Single sign-on (SSO) and multi-factor authentication (MFA) are the foundation for a secure and seamless user experience. |
-| [Azure Integration Services](#azure-integration-services) | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high-availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure AI services and more. |
+| [Microsoft Entra ID (formerly Azure Active Directory)](#microsoft-entra-id-formerly-azure-ad) | Ensure end-to-end SAP user authentication and authorization with Microsoft Entra ID. Single sign-on (SSO) and multi-factor authentication (MFA) are the foundation for a secure and seamless user experience. |
+| [Azure Integration Services](#azure-integration-services) | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high-availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure Cognitive Services and more. |
| [App Development in any language including ABAP and DevOps](#app-development-in-any-language-including-abap-and-devops) | Apply best-in-class developer tooling to your SAP app developments and DevOps processes. |
| [Azure Data Services](#azure-data-services) | Learn how to integrate your SAP data with Data Services like Azure Synapse Analytics, Azure Data Lake Storage, Azure Data Factory, Power BI, Data Warehouse Cloud, Analytics Cloud, which connector to choose, tune performance, efficiently troubleshoot, and more. |
-| [Threat Monitoring with Microsoft Sentinel for SAP](#microsoft-sentinel) | Learn how to best secure your SAP workload with Microsoft Sentinel, prevent incidents from happening and detect and respond to threats in real-time with this [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution. |
+| [Threat Monitoring and Response Automation with Microsoft Security Services for SAP](#microsoft-security-for-sap) | Learn how to best secure your SAP workload with Microsoft Defender for Cloud and the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) Microsoft Sentinel solution. Prevent incidents from happening, detect and respond to threats in real-time. |
| [SAP Business Technology Platform (BTP)](#sap-btp) | Discover integration scenarios like SAP Private Link to securely and efficiently connect your BTP apps to your Azure workloads. |

### Azure OpenAI service
For more information about integration with [Azure OpenAI service](/azure/ai-ser
Also see these SAP resources:
-- [empower SAP RISE enterprise users with Azure OpenAI in multi-cloud environment](https://blogs.sap.com/2023/02/14/empower-sap-rise-enterprise-users-with-chatgpt-in-multi-cloud-environment/)
+- [empower SAP RISE enterprise users with Azure OpenAI in multicloud environment](https://blogs.sap.com/2023/02/14/empower-sap-rise-enterprise-users-with-chatgpt-in-multi-cloud-environment/)
- [Consume OpenAI services (GPT) through CAP & SAP BTP, AI Core](https://github.com/SAP-samples/azure-openai-aicore-cap-api) - [SAP SuccessFactors Helps HR Solve Skills Gap with Generative AI | SAP News](https://news.sap.com/2023/05/sap-successfactors-helps-hr-solve-skills-gap-with-generative-ai/)
Also see the following SAP resources:
- [Azure CDN for SAPUI5 libraries](https://blogs.sap.com/2021/03/22/sap-fiori-using-azure-cdn-for-sapui5-libraries/)
- [Web Application Firewall Setup for Internet facing SAP Fiori Apps](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/)
-### Azure AD
+### Microsoft Entra ID (formerly Azure AD)
For more information about integration with Azure AD, see the following Azure documentation:
For more information about using SAP with Azure Integration services, see the fo
- [Connect to SAP from workflows in Azure Logic Apps](../../logic-apps/logic-apps-using-sap-connector.md)
- [Import SAP OData metadata as an API into Azure API Management](../../api-management/sap-api.md)
- [Apply SAP Principal Propagation to your Azure hosted APIs](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Request%20OAuth2%20access%20token%20from%20SAP%20using%20AAD%20JWT%20token.xml)
+- [Using Logic Apps (Standard) to connect with SAP BAPIs and RFC](https://www.youtube.com/watch?v=ZmOPPtIYYM4)
Also see the following SAP resources:
For more information about integrating SAP with Microsoft services natively, see
- [Use community-driven OData SDKs with Azure Functions](https://github.com/Azure/azure-sdk-for-sap-odata)

Also see the following SAP resources:
-- [SAP BTP ABAP Environment (aka. Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)
-- [SAP S/4HANA Cloud, private edition – ABAP Environment (aka. Embedded Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)
+- [SAP BTP ABAP Environment (also known as Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)
+- [SAP S/4HANA Cloud, private edition – ABAP Environment (also known as Embedded Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)
- [dotNET speaks OData too, how to implement Azure App Service with SAP Gateway](https://blogs.sap.com/2021/08/12/.net-speaks-odata-too-how-to-implement-azure-app-service-with-sap-odata-gateway/)
- [Apply cloud native deployment practice blue-green to SAP BTP apps with Azure DevOps](https://blogs.sap.com/2019/12/20/go-blue-green-for-your-cloud-foundry-app-from-webide-with-azure-devops/)
Also see the following SAP resources:
- [Integrate SAP Data Warehouse Cloud with Power BI and Azure Synapse Analytics](https://blogs.sap.com/2022/07/27/your-sap-on-azure-part-28-integrate-sap-data-warehouse-cloud-with-powerbi-and-azure-synapse/)
- [Extend SAP Integrated Business Planning forecasting algorithms with Azure Machine Learning](https://blogs.sap.com/2022/10/03/microsoft-azure-machine-learning-for-supply-chain-planning/)
-### Microsoft Sentinel
+### Microsoft Security for SAP
+
+Protect your data, apps, and infrastructure against rapidly evolving cyber threats with cloud security services from Microsoft. Capabilities backed by artificial intelligence (AI) and machine learning (ML) are required to keep up with the pace.
+
+Use [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) to secure your cloud-infrastructure surrounding the SAP system including automated responses.
+
+Complementing that, use the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution [Microsoft Sentinel](../../sentinel/sap/sap-solution-security-content.md) to protect your SAP system from within, using signals from the SAP Audit Log among others.
+
+Learn more about identity focused integration capabilities that power the analysis on Defender and Sentinel via the [Microsoft Entra ID section](#microsoft-entra-id-formerly-azure-ad).
+
+#### Microsoft Defender for Cloud
+
+The [Defender product family](../../defender-for-cloud/defender-for-cloud-introduction.md) consists of multiple products tailored to provide cloud security posture management (CSPM) and cloud workload protection (CWPP) for the various workload types. The following excerpt serves as an entry point to start securing your SAP system.
+
+- Defender for Servers (SAP hosts)
+ - [Protect your SAP hosts with Defender](../../defender-for-cloud/defender-for-servers-introduction.md) including OS specific Endpoint protection with Microsoft Defender for Endpoint (MDE)
+ - [Microsoft Defender for Endpoint on Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux)
+ - [Microsoft Defender for Endpoint on Windows](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint)
+ - [Enable Defender for Servers](../../defender-for-cloud/tutorial-enable-servers-plan.md#enable-the-defender-for-servers-plan)
+- Defender for Storage (SAP SMB file shares on Azure)
+ - [Protect your SAP SMB file shares with Defender](../../defender-for-cloud/defender-for-storage-introduction.md)
+ - [Enable Defender for Storage](../../defender-for-cloud/tutorial-enable-storage-plan.md)
+- Defender for APIs (SAP Gateway, SAP Business Technology Platform, SAP SaaS)
+ - [Protect your OpenAPI APIs with Defender for APIs](../../defender-for-cloud/defender-for-apis-introduction.md)
+ - [Enable the Defender for APIs](../../defender-for-cloud/defender-for-apis-deploy.md)
+
+See SAP's recommendation to use AntiVirus software for SAP hosts and systems on both Linux and Windows based platforms [here](https://wiki.scn.sap.com/wiki/display/Basis/Protecting+SAP+systems+using+antivirus+softwares). Be aware that the threat landscape has evolved from file-based attacks to file-less attacks. Therefore, the protection approach has to evolve beyond pure AntiVirus capabilities too.
+
+For more information about using Microsoft Defender for Endpoint (MDE) via Microsoft Defender for Server for SAP applications regarding `Next-generation protection` (AntiVirus) and `Endpoint Detection and Response` (EDR) see the following Microsoft resources:
+
+- [SAP Applications and Microsoft Defender for Linux | Microsoft TechCommunity](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-applications-and-microsoft-defender-for-linux/ba-p/3675480)
+- [Enable the Microsoft Defender for Endpoint integration](../../defender-for-cloud/integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration)
+- [Common mistakes to avoid when defining exclusions](/microsoft-365/security/defender-endpoint/common-exclusion-mistakes-microsoft-defender-antivirus)
+
+Also see the following SAP resources:
+
+- [2808515 - Installing security software on SAP servers running on Linux](https://me.sap.com/notes/2808515)
+- [1730997 - Unrecommended versions of antivirus software](https://me.sap.com/notes/1730997)
+
+> [!Note]
+> It is **not recommended** to exclude files, paths or processes from EDR because it creates blind spots for Defender. If exclusions are required nevertheless, open a support case with Microsoft Support via the Defender365 Portal specifying executables and/or paths to exclude. Follow the same process for tuning of real-time scans.
+
+> [!Note]
+> Certification for the SAP Virus Scan Interface (NW-VSI) doesn't apply to MDE, because it operates outside of the SAP system. It complements Microsoft Sentinel for SAP, which interacts with the SAP system directly. See more details and the SAP certification note for Sentinel below.
+
+> [!Tip]
+> MDE was formerly called Microsoft Defender Advanced Threat Protection (ATP). Older articles or SAP notes still refer to that name.
+
+> [!Tip]
+> Microsoft Defender for Server includes Endpoint detection and response (EDR) features that are provided by Microsoft Defender for Endpoint Plan 2.
+
+#### Microsoft Sentinel for SAP
For more information about [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) threat monitoring with Microsoft Sentinel for SAP, see the following Microsoft resources:
For more information about Azure integration with SAP Business Technology Platfo
- [Route Multi-Region Traffic to SAP BTP Services Intelligently with Azure Traffic Manager](https://discovery-center.cloud.sap/missiondetail/3603/)
- [Distributed Resiliency of SAP CAP applications using SAP HANA Cloud with Azure Traffic Manager](https://blogs.sap.com/2022/11/12/distributed-resiliency-of-sap-cap-applications-using-sap-hana-cloud-multi-zone-replication-with-azure-traffic-manager/)
- [Federate your data from Azure Data Explorer to SAP Data Warehouse Cloud](https://discovery-center.cloud.sap/missiondetail/3433/3473/)
-- [Integrate globally available SAP BTP apps with Azure CosmosDB via OData](https://blogs.sap.com/2021/06/11/sap-where-can-i-get-toilet-paper-an-implementation-of-the-geodes-pattern-with-s4-btp-and-azure-cosmosdb/)
+- [Integrate globally available SAP BTP apps with Azure Cosmos DB via OData](https://blogs.sap.com/2021/06/11/sap-where-can-i-get-toilet-paper-an-implementation-of-the-geodes-pattern-with-s4-btp-and-azure-cosmosdb/)
- [Explore your Azure data sources with SAP Data Warehouse Cloud](https://discovery-center.cloud.sap/missiondetail/3656/3699/)
- [Building Applications on SAP BTP with Microsoft Services | OpenSAP course](https://open.sap.com/courses/btpma1)
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
In this example each cluster node has its own HANA NFS filesystems /hana/shared,
Check the status of cluster and all the resources

> [!NOTE]
- > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
```bash
sudo pcs status
sap Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md
Read the following SAP Notes and papers first:
- [Azure Virtual Machines planning and implementation for SAP on Linux](./planning-guide.md)

>[!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
## Overview
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
Create the HANA topology. Run the following commands on one of the Pacemaker clu
Next, create the HANA resources.

> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
If building a cluster on **RHEL 7.x**, use the following commands:
Resource Group: g_ip_HN1_03
### Test the Azure fencing agent

> [!NOTE]
-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
Resource state before starting the test:
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
Now you're ready to create the cluster resources:
1. Create the HANA instance resource.

> [!NOTE]
- > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
If you're building a RHEL **7.x** cluster, use the following commands:

```bash
sap Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md
You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va
3. Next, create the HANA instance resource.

> [!NOTE]
- > This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+ > This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
```bash
sudo crm configure primitive rsc_SAPHana_HN1_HDB03 ocf:suse:SAPHanaController \
You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va
## Test SAP HANA failover

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
1. Before you start a test, check the cluster and SAP HANA system replication status.
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
Next, create the HANA resources:
> For existing Pacemaker clusters, if your configuration was already changed to use `socat` as described in [Azure Load Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), you don't need to immediately switch to the `azure-lb` resource agent.

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
```bash
# Replace <placeholders> with your instance number, HANA system ID, and the front-end IP address of the Azure load balancer.
sap Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-rhel.md
This article describes how to deploy a highly available SAP HANA system in a sca
In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. The examples are based on HANA 2.0 SP4 and Red Hat Enterprise Linux for SAP 7.6.

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
Before you begin, refer to the following SAP notes and papers:
sap Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-suse.md
In this example for deploying SAP HANA in scale-out configuration with standby n
## Test SAP HANA failover

> [!NOTE]
-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
+> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article.
1. Simulate a node crash on an SAP HANA worker node. Do the following:
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
The data source definition specifies the data to index, credentials, and policie
### Supported credentials and connection strings
-Indexers can connect to a collection using the following connections. For connections that target the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit "ApiKind" from the connection string.
+Indexers can connect to a collection using the following connections.
Avoid port numbers in the endpoint URL. If you include the port number, the connection will fail.
Avoid port numbers in the endpoint URL. If you include the port number, the conn
| Managed identity connection string |
|---|
-|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)" }`|
-|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information.|
+|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)/(IdentityAuthType=[identity-auth-type])" }`|
+|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md). For connections that target the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit `ApiKind` from the connection string. For more information about `ApiKind` and `IdentityAuthType`, see [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md).|
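
For illustration, here's a minimal sketch of a data source definition that uses this managed identity connection string; the subscription ID, resource group, account, database, and container names below are placeholders rather than values from this article:

```json
{
  "name": "my-cosmosdb-ds",
  "type": "cosmosdb",
  "credentials": {
    "connectionString": "ResourceId=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.DocumentDB/databaseAccounts/my-cosmos-account;Database=my-database;"
  },
  "container": { "name": "my-collection", "query": null }
}
```

Because this sketch targets the SQL API, `ApiKind` is omitted from the connection string.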
<a name="flatten-structures"></a>
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
You can use a system-assigned managed identity or a user-assigned managed identi
* [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
-* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Azure Cosmos DB.
+* Assign the **Cosmos DB Account Reader** role to the search service managed identity. This role grants the ability to read Azure Cosmos DB account data. For more information about role assignments in Cosmos DB, see [Configure role-based access control to data](search-howto-managed-identities-data-sources.md#assign-a-role).
+
+* Data plane role assignment: To learn more, see [Data plane role assignment](../cosmos-db/how-to-setup-rbac.md).
+
+* Example for a read-only data plane role assignment:
+```azurepowershell
+$cosmosdb_acc_name = "<cosmos db account name>"
+$resource_group = "<resource group name>"
+$subscription = "<subscription id>"
+$system_assigned_principal = "<principal id for system assigned identity>"
+$readOnlyRoleDefinitionId = "00000000-0000-0000-0000-000000000001"
+$scope=$(az cosmosdb show --name $cosmosdb_acc_name --resource-group $resource_group --query id --output tsv)
+```
+
+Role assignment for system-assigned identity:
- For data reader access, you'll need the **Cosmos DB Account Reader** role and the identity used to make the request. This role works for all Azure Cosmos DB APIs supported by Cognitive Search. This is a control plane RBAC role.
+```azurepowershell
+az cosmosdb sql role assignment create --account-name $cosmosdb_acc_name --resource-group $resource_group --role-definition-id $readOnlyRoleDefinitionId --principal-id $system_assigned_principal --scope $scope
+```
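+
+To confirm that the assignment took effect, you can list the data plane role assignments on the account. This is a sketch that reuses the variables defined above:
+
+```azurepowershell
+az cosmosdb sql role assignment list --account-name $cosmosdb_acc_name --resource-group $resource_group
+```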
+* For Cosmos DB for NoSQL, you can optionally [enforce RBAC as the only authentication method](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) for data connections by setting `disableLocalAuth` to `true` for your Cosmos DB account, as shown in the sketch after this list.
- At this time, Cognitive Search obtains keys with the identity and uses those keys to connect to the Azure Cosmos DB account. This means that [enforcing RBAC as the only authentication method in Azure Cosmos DB](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) isn't supported when using Search with managed identities to connect to Azure Cosmos DB.
+* *For Gremlin and MongoDB Collections*:
+  Indexer support is currently in preview. At this time, a preview limitation requires Cognitive Search to connect using keys. You can still set up a managed identity and role assignment, but Cognitive Search will only use the role assignment to get keys for the connection. This limitation means that you can't configure an [RBAC-only approach](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) if your indexers connect to Gremlin or MongoDB collections using managed identities.
* You should be familiar with [indexer concepts](search-indexer-overview.md) and [configuration](search-howto-index-cosmosdb.md).
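
As a sketch of the `disableLocalAuth` setting mentioned in the list above, assuming `$scope` still holds the Cosmos DB account resource ID from the earlier example (Cosmos DB for NoSQL only):

```azurepowershell
# Make RBAC the only authentication method by turning off key-based (local) authentication.
az resource update --ids $scope --set properties.disableLocalAuth=true --latest-include-preview
```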
The [REST API](/rest/api/searchservice/create-data-source), Azure portal, and th
When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name.

* For SQL collections, the connection string doesn't require "ApiKind".
+* For SQL collections, add "IdentityAuthType=AccessToken" if RBAC is enforced as the only authentication method. It isn't applicable to MongoDB and Gremlin collections.
* For MongoDB collections, add "ApiKind=MongoDb" to the connection string and use a preview REST API.
* For Gremlin graphs, add "ApiKind=Gremlin" to the connection string and use a preview REST API.
api-key: [Search service admin key]
"name": "[my-cosmosdb-ds]", "type": "cosmosdb", "credentials": {
- "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];"
+ "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];IdentityAuthType=[AccessToken | AccountKey]"
}, "container": { "name": "[my-cosmos-collection]", "query": null }, "dataChangeDetectionPolicy": null
The 2021-04-30-preview REST API supports connections based on a user-assigned ma
* First, the format of the "credentials" property is the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name.
  * For SQL collections, the connection string doesn't require "ApiKind".
+ * For SQL collections, add "IdentityAuthType=AccessToken" if RBAC is enforced as the only authentication method. It isn't applicable to MongoDB and Gremlin collections.
  * For MongoDB collections, add "ApiKind=MongoDb" to the connection string.
  * For Gremlin graphs, add "ApiKind=Gremlin" to the connection string.
api-key: [Search service admin key]
"name": "[my-cosmosdb-ds]", "type": "cosmosdb", "credentials": {
- "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];"
+ "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];IdentityAuthType=[AccessToken | AccountKey]"
}, "container": { "name": "[my-cosmos-collection]", "query": null
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md
Previously updated : 01/16/2023 Last updated : 08/14/2023 # Return a semantic answer in Azure Cognitive Search
The "semanticConfiguration" parameter is required. It's defined in a search inde
+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details.
++ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details. + For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Previously updated : 7/14/2023 Last updated : 8/15/2023 # Configure semantic ranking and return captions in search results
The following example in this section uses the [hotels-sample-index](search-get-
1. Set "captions" to specify whether semantic captions are included in the result. If you're using a semantic configuration, you should set this parameter. While the ["searchFields" approach](#2buse-searchfields-for-field-prioritization) automatically included captions, "semanticConfiguration" doesn't.
- Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`.
+ Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`.
+
+ For semantic captions, the fields referenced in the "semanticConfiguration" must stay within a limit of 2,000-3,000 words (roughly equivalent to 10,000 tokens); otherwise, important caption results will be missed. If you anticipate that the word count of the fields used by the "semanticConfiguration" could exceed this limit and you still need captions, consider the [Text split cognitive skill](cognitive-search-skill-textsplit.md) as part of your [AI enrichment pipeline](cognitive-search-concept-intro.md) while indexing your data with [built-in pull indexers](search-indexer-overview.md).
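
A minimal sketch of such a Text split skill definition, assuming page-based chunking of a `content` field (the field paths and page length here are illustrative):

```json
{
  "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
  "description": "Chunk long text during enrichment so caption-eligible fields stay within limits",
  "context": "/document",
  "textSplitMode": "pages",
  "maximumPageLength": 4000,
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "textItems", "targetName": "pages" }
  ]
}
```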
1. Set "highlightPreTag" and "highlightPostTag" if you want to override the default highlight formatting that's applied to captions.
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-ranking.md
Previously updated : 07/14/2023 Last updated : 08/14/2023 # Semantic ranking in Azure Cognitive Search
Each document is now represented by a single long string.
> [!NOTE]
> In the 2020-06-30-preview, the "searchFields" parameter is used rather than the semantic configuration to determine which fields to use. We recommend upgrading to the 2021-04-30-preview API version for best results.
-The string is composed of tokens, not characters or words. The maximum token count is 128 unique tokens. For estimation purposes, you can assume that 128 tokens are roughly equivalent to a string that is 128 words in length.
+The string is composed of tokens, not characters or words. The maximum token count is 256 unique tokens. For estimation purposes, you can assume that 256 tokens are roughly equivalent to a string that is 256 words in length.
> [!NOTE]
> Tokenization is determined in part by the analyzer assignment on searchable fields. If you are using a specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from "searchFields". For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer).
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognit
+ Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created.
-+ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-query.md).
++ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-create-index.md).
+ Use REST API version **2023-07-01-Preview**, the [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main), or Search Explorer in the Azure portal.
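
As a sketch of a pure vector query in that preview API version (the service, index, and field names are placeholders, and real embedding vectors have far more dimensions than shown):

```http
POST https://[service-name].search.windows.net/indexes/[my-index]/docs/search?api-version=2023-07-01-Preview
Content-Type: application/json
api-key: [Search service admin key]

{
  "vectors": [
    {
      "value": [0.012, -0.034, 0.056],
      "fields": "contentVector",
      "k": 5
    }
  ],
  "select": "title, content"
}
```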
security Ransomware Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-protection.md
This article lays out key Azure native capabilities and defenses for ransomware
## A growing threat
-Ransomware attacks have become one of the biggest security challenges facing businesses today. When successful, ransomware attacks can cripple a business core IT infrastructure, and cause destruction that could have a debilitating impact on the physical, economic security or safety of a business. Ransomware attacks are targeted to businesses of all types. This requires that all businesses take preventive measures to ensure protection.
+Ransomware attacks have become one of the biggest security challenges facing businesses today. When successful, ransomware attacks can disable a business core IT infrastructure, and cause destruction that could have a debilitating impact on the physical, economic security or safety of a business. Ransomware attacks are targeted to businesses of all types. This requires that all businesses take preventive measures to ensure protection.
Recent trends on the number of attacks are quite alarming. While 2020 wasn't a good year for ransomware attacks on businesses, 2021 started on a bad trajectory. On May 7, the Colonial Pipeline (Colonial) attack temporarily halted services such as pipeline transportation of diesel, gasoline, and jet fuel. Colonial shut down the critical fuel network supplying the populous eastern states.
sentinel Create Nrt Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-nrt-rules.md
You create NRT rules the same way you create regular [scheduled-query analytics
The configuration of NRT rules is in most ways the same as that of scheduled analytics rules.
- - You can refer to [**watchlists**](watchlists.md) in your query logic.
+ - You can refer to multiple tables and [**watchlists**](watchlists.md) in your query logic.
- You can use all of the alert enrichment methods: [**entity mapping**](map-data-fields-to-entities.md), [**custom details**](surface-custom-details-in-alerts.md), and [**alert details**](customize-alert-details.md).
You create NRT rules the same way you create regular [scheduled-query analytics
In addition, the query itself has the following requirements:
- - The query itself can refer to only one table, and cannot contain unions or joins.
- You can't run the query across workspaces.
- Due to the size limits of the alerts, your query should make use of `project` statements to include only the necessary fields from your table. Otherwise, the information you want to surface could end up being truncated.
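
For example, here's a minimal sketch of a query that trims its output with `project` (the table, threshold, and fields are illustrative):

```kusto
SigninLogs
| where ResultType == "50126" // failed sign-in: invalid username or password
| summarize FailureCount = count() by UserPrincipalName, IPAddress
| where FailureCount > 10
| project UserPrincipalName, IPAddress, FailureCount
```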
sentinel Monitor Analytics Rule Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-analytics-rule-integrity.md
For either **Scheduled analytics rule run** or **NRT analytics rule run**, you m
| An internal server error occurred while running the query. | |
| The query execution timed out. | |
| A table referenced in the query was not found. | Verify that the relevant data source is connected. |
- | A semantic error occurred while running the query. | Try resetting the alert rule by editing and saving it (without changing any settings). |
+ | A semantic error occurred while running the query. | Try resetting the analytics rule by editing and saving it (without changing any settings). |
| A function called by the query is named with a reserved word. | Remove or rename the function. |
- | A syntax error occurred while running the query. | Try resetting the alert rule by editing and saving it (without changing any settings). |
+ | A syntax error occurred while running the query. | Try resetting the analytics rule by editing and saving it (without changing any settings). |
| The workspace does not exist. | |
- | This query was found to use too many system resources and was prevented from running. | |
+ | This query was found to use too many system resources and was prevented from running. | Review and tune the analytics rule. Consult our Kusto Query Language [overview](kusto-overview.md) and [best practices](/azure/data-explorer/kusto/query/best-practices?toc=%2Fazure%2Fsentinel%2FTOC.json&bc=%2Fazure%2Fsentinel%2Fbreadcrumb%2Ftoc.json) documentation. |
| A function called by the query was not found. | Verify the existence in your workspace of all functions called by the query. |
| The workspace used in the query was not found. | Verify that all workspaces in the query exist. |
- | You don't have permissions to run this query. | Try resetting the alert rule by editing and saving it (without changing any settings). |
+ | You don't have permissions to run this query. | Try resetting the analytics rule by editing and saving it (without changing any settings). |
| You don't have access permissions to one or more of the resources in the query. | |
| The query referred to a storage path that was not found. | |
| The query was denied access to a storage path. | |
sentinel Near Real Time Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md
The following limitations currently govern the use of NRT rules:
(Since the NRT rule type is supposed to approximate **real-time** data ingestion, it doesn't afford you any advantage to use NRT rules on log sources with significant ingestion delay, even if it's far less than 12 hours.)
-1. As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect:
-
- 1. The query defined in an NRT rule can reference **only one table**. Queries can, however, refer to multiple watchlists.
-
- 1. You cannot use unions or joins.
+1. The syntax for this type of rule is gradually evolving. At this time, the following limitations remain in effect:
1. Because this rule type is in near real time, we have reduced the built-in delay to a minimum (two minutes).
The following limitations currently govern the use of NRT rules:
1. Event grouping is now configurable to a limited degree. NRT rules can produce up to 30 single-event alerts. A rule with a query that results in more than 30 events will produce alerts for the first 29, then a 30th alert that summarizes all the applicable events.
+ 1. Queries defined in an NRT rule can now reference **more than one table**.
+
## Next steps

In this document, you learned how near-real-time (NRT) analytics rules work in Microsoft Sentinel.
service-bus-messaging Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/explorer.md
To use the Service Bus Explorer, navigate to the Service Bus namespace on which
1. If you're looking to run operations against a queue, select **Queues** from the navigation menu. If you're looking to run operations against a topic (and its related subscriptions), select **Topics**.

   :::image type="content" source="./media/service-bus-explorer/queue-topics-left-navigation.png" alt-text="Screenshot of left side navigation, where entity can be selected." lightbox="./media/service-bus-explorer/queue-topics-left-navigation.png":::
1. After selecting **Queues** or **Topics**, select the specific queue or topic.
+
+   :::image type="content" source="./media/service-bus-explorer/select-specific-queue.png" alt-text="Screenshot of the Queues page with a specific queue selected." lightbox="./media/service-bus-explorer/select-specific-queue.png":::
1. Select the **Service Bus Explorer** from the left navigation menu.

   :::image type="content" source="./media/service-bus-explorer/left-navigation-menu-selected.png" alt-text="Screenshot of queue page where Service Bus Explorer can be selected." lightbox="./media/service-bus-explorer/left-navigation-menu-selected.png":::
After peeking or receiving a message, we can resend it, which will send a copy o
When working with Service Bus Explorer, it's possible to use either **Access Key** or **Azure Active Directory** authentication.

1. Select the **Settings** button.
+
+   :::image type="content" source="./media/service-bus-explorer/select-settings.png" alt-text="Screenshot indicating the Settings button in Service Bus Explorer." lightbox="./media/service-bus-explorer/select-settings.png":::
1. Choose the desired authentication method, and select the **Save** button.

   :::image type="content" source="./media/service-bus-explorer/queue-select-authentication-type.png" alt-text="Screenshot indicating the Settings button and a page showing the different authentication types." lightbox="./media/service-bus-explorer/queue-select-authentication-type.png":::
service-bus-messaging Service Bus Amqp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-overview.md
Title: Overview of AMQP 1.0 in Azure Service Bus description: Learn how Azure Service Bus supports Advanced Message Queuing Protocol (AMQP), an open standard protocol. Previously updated : 05/31/2022 Last updated : 08/16/2023 # Advanced Message Queueing Protocol (AMQP) 1.0 support in Service Bus
service-bus-messaging Service Bus Amqp Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-troubleshoot.md
Title: Troubleshoot AMQP errors in Azure Service Bus | Microsoft Docs description: Provides a list of AMQP errors you may receive when using Azure Service Bus, and cause of those errors. Previously updated : 09/20/2021 Last updated : 08/16/2023 # AMQP errors in Azure Service Bus
-This article provides some of the errors you receive when using AMQP with Azure Service Bus. They are all standard behaviors of the service. You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link.
+This article provides some of the errors you receive when using AMQP with Azure Service Bus. They're all standard behaviors of the service. You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link.
## Link is closed

You see the following error when the AMQP connection and link are active, but no calls (for example, send or receive) are made using the link for 10 minutes, so the link is closed. The connection is still open.
amqp:link:detach-forced:The link 'G2:7223832:user.tenant0.cud_00000000000-0000-0
```

## Connection is closed
-You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link has not been created in 5 minutes.
+You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link hasn't been created in 5 minutes.
```
Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:00000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T17:32:00', info=null}
```
-## Link is not created
-You see this error when a new AMQP connection is created but a link is not created within 1 minute of the creation of the AMQP Connection.
+## Link isn't created
+You see this error when a new AMQP connection is created but a link isn't created within 1 minute of the creation of the AMQP Connection.
```
Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 60000 milliseconds and is closed by container 'LinkTracker'. TrackingId:0000000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T18:41:51', info=null}
service-bus-messaging Service Bus Messaging Sql Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-sql-filter.md
Title: Azure Service Bus Subscription Rule SQL Filter syntax | Microsoft Docs description: This article provides details about SQL filter grammar. A SQL filter supports a subset of the SQL-92 standard. Previously updated : 05/31/2022 Last updated : 08/16/2023 # Subscription Rule SQL Filter Syntax
-A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the 'sqlFilter' property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules. The allowed expressions are shown below.
+A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the 'sqlFilter' property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules. The allowed expressions are shown in this section.
Service Bus Premium also supports the [JMS SQL message selector syntax](https://docs.oracle.com/javaee/7/api/javax/jms/Message.html) through the JMS 2.0 API.
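
For instance, here's a sketch of creating a subscription rule with a filter expression by using the Azure CLI argument named above; the resource names are placeholders:

```azurecli
az servicebus topic subscription rule create \
    --resource-group my-rg \
    --namespace-name my-namespace \
    --topic-name my-topic \
    --subscription-name my-subscription \
    --name ColorBlueRule \
    --filter-sql-expression "color = 'blue' AND quantity = 10"
```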
Service Bus Premium also supports the [JMS SQL message selector syntax](https://
## Remarks
-An attempt to access a non-existent system property is an error, while an attempt to access a non-existent user property isn't an error. Instead, a non-existent user property is internally evaluated as an unknown value. An unknown value is treated specially during operator evaluation.
+An attempt to access a nonexistent system property is an error, while an attempt to access a nonexistent user property isn't an error. Instead, a nonexistent user property is internally evaluated as an unknown value. An unknown value is treated specially during operator evaluation.
## property_name
Consider the following Sql Filter semantics:
### Property evaluation semantics

-- An attempt to evaluate a non-existent system property throws a `FilterException` exception.
+- An attempt to evaluate a nonexistent system property throws a `FilterException` exception.
- A property that doesn't exist is internally evaluated as **unknown**.
service-bus-messaging Service Bus Migrate Standard Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-standard-premium.md
Title: Migrate Azure Service Bus namespaces - standard to premium
description: Guide to allow migration of existing Azure Service Bus standard namespaces to premium Previously updated : 06/27/2022 Last updated : 08/17/2023 # Migrate existing Azure Service Bus standard namespaces to the premium tier
Previously, Azure Service Bus offered namespaces only on the standard tier. Name
This article describes how to migrate existing standard tier namespaces to the premium tier.

>[!WARNING]
-> Migration is intended for Service Bus standard namespaces to be upgraded to the premium tier. The migration tool does not support downgrading.
+> Migration is intended for Service Bus standard namespaces to be upgraded to the premium tier. The migration tool doesn't support downgrading.
Some of the points to note:

- This migration is meant to happen in place, meaning that existing sender and receiver applications **don't require any changes to code or configuration**. The existing connection string will automatically point to the new premium namespace.
-- The **premium** namespace should have **no entities** in it for the migration to succeed.
+- If you're using an existing premium name, the **premium** namespace should have **no entities** in it for the migration to succeed.
- All **entities** in the standard namespace are **copied** to the premium namespace during the migration process.
- Migration supports **1,000 entities per messaging unit** on the premium tier. To identify how many messaging units you need, start with the number of entities that you have on your current standard namespace.
- You can't directly migrate from **basic tier** to **premium tier**, but you can do so indirectly by migrating from basic to standard first and then from the standard to premium in the next step.
-- The role-based access control (RBAC) settings are not migrated, so you will need to add them manually after the migration.
+- The role-based access control (RBAC) settings aren't migrated, so you'll need to add them manually after the migration.
## Migration steps

Some conditions are associated with the migration process. Familiarize yourself with the following steps to reduce the possibility of errors. These steps outline the migration process, and the step-by-step details are listed in the sections that follow.
-1. Create a new premium namespace.
+1. Create a new premium namespace. You complete the next three steps using the following CLI or Azure portal instructions in this article.
1. Pair the standard and premium namespaces to each other.
1. Sync (copy-over) entities from the standard to the premium namespace.
1. Commit the migration.
To migrate your Service Bus standard namespace to premium by using the Azure CLI
1. Create a new Service Bus premium namespace. You can reference the [Azure Resource Manager templates](service-bus-resource-manager-namespace.md) or [use the Azure portal](service-bus-quickstart-portal.md#create-a-namespace-in-the-azure-portal). Be sure to select **premium** for the **serviceBusSku** parameter.
-1. Set the following environment variables to simplify the migration commands.
+1. Set the following environment variables to simplify the migration commands. You can get the Azure Resource Manager ID for your premium namespace by navigating to the namespace in the Azure portal and copying the portion of the URL that looks like the following sample: `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contosoresourcegroup/providers/Microsoft.ServiceBus/namespaces/contosopremiumnamespace`.
```
resourceGroup = <resource group for the standard namespace>
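
As a sketch of the pairing step that uses these variables, the CLI flow looks like the following; `$resourceGroup` comes from the fragment above, while the remaining variable names are assumptions standing in for the values you set, so verify the parameter names against the current `az servicebus migration` reference before running it:

```azurecli
# Pair the standard namespace with the premium namespace and start syncing entities.
az servicebus migration start \
    --resource-group $resourceGroup \
    --name $standardNamespace \
    --target-namespace $premiumNamespaceArmId \
    --post-migration-name $postMigrationDnsName
```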
To migrate your Service Bus standard namespace to premium by using the Azure CLI
Migration by using the Azure portal has the same logical flow as migrating by using the commands. Follow these steps to migrate by using the Azure portal.
-1. On the **Navigation** menu in the left pane, select **Migrate to premium**. Click the **Get Started** button to continue to the next page.
+1. On the **Navigation** menu in the left pane, select **Migrate to premium**. Select the **Get Started** button to continue to the next page.
:::image type="content" source="./media/service-bus-standard-premium-migration/migrate-premium-page.png" alt-text="Image showing the Migrate to premium page.":::

1. You see the following **Setup Namespaces** page.
Migration by using the Azure portal has the same logical flow as migrating by us
## Caveats
-Some of the features provided by Azure Service Bus Standard tier are not supported by Azure Service Bus Premium tier. These are by design since the premium tier offers dedicated resources for predictable throughput and latency.
+Some of the features provided by the Azure Service Bus Standard tier aren't supported by the Azure Service Bus Premium tier. These differences are by design, since the premium tier offers dedicated resources for predictable throughput and latency.
-Here is a list of features not supported by Premium and their mitigation -
+Here's a list of features not supported by Premium, and their mitigations:
### Express entities
-Express entities that don't commit any message data to storage are not supported in the **Premium** tier. Dedicated resources provided significant throughput improvement while ensuring that data is persisted, as is expected from any enterprise messaging system.
+Express entities that don't commit any message data to storage aren't supported in the **Premium** tier. Dedicated resources provided significant throughput improvement while ensuring that data is persisted, as is expected from any enterprise messaging system.
During migration, any of your express entities in your Standard namespace will be created on the Premium namespace as a non-express entity.
-If you utilize Azure Resource Manager (ARM) templates, please ensure that you remove the 'enableExpress' flag from the deployment configuration so that your automated workflows execute without errors.
+If you utilize Azure Resource Manager templates, ensure that you remove the 'enableExpress' flag from the deployment configuration so that your automated workflows execute without errors.
### RBAC settings

The role-based access control (RBAC) settings on the namespace aren't migrated to the premium namespace. You'll need to add them manually after the migration.
The role-based access control (RBAC) settings on the namespace aren't migrated t
After the migration is committed, the connection string that pointed to the standard namespace will point to the premium namespace.
-The sender and receiver applications will disconnect from the standard Namespace and reconnect to the premium namespace automatically.
+The sender and receiver applications will disconnect from the standard namespace and reconnect to the premium namespace automatically.
-If your are using the ARM Id for configuration rather a connection string (e.g. as a destination for an Event Grid Subscription), then you need to update the ARM Id to be that of the Premium namespace.
+If you're using the Azure Resource Manager ID for configuration rather than a connection string (for example, as a destination for an Event Grid subscription), then you need to update the Azure Resource Manager ID to be that of the premium namespace.
### What do I do after the standard to premium migration is complete?

The standard to premium migration ensures that the entity metadata such as topics, subscriptions, and filters are copied from the standard namespace to the premium namespace. The message data that was committed to the standard namespace isn't copied from the standard namespace to the premium namespace.
-The standard namespace may have some messages that were sent and committed while the migration was underway. Manually drain these messages from the standard Namespace and manually send them to the premium Namespace. To manually drain the messages, use a console app or a script that drains the standard namespace entities by using the Post Migration DNS name that you specified in the migration commands. Send these messages to the premium namespace so that they can be processed by the receivers.
+The standard namespace may have some messages that were sent and committed while the migration was underway. Manually drain these messages from the standard namespace and manually send them to the premium namespace. To manually drain the messages, use a console app or a script that drains the standard namespace entities by using the post-migration DNS name that you specified in the migration commands. Send these messages to the premium namespace so that they can be processed by the receivers.
After the messages have been drained, delete the standard namespace.

>[!IMPORTANT]
-> After the messages from the standard namespace have been drained, delete the standard namespace. This is important because the connection string that initially referred to the standard namespace now refers to the premium namespace. You won't need the standard Namespace anymore. Deleting the standard namespace that you migrated helps reduce later confusion.
+> After the messages from the standard namespace have been drained, delete the standard namespace. This is important because the connection string that initially referred to the standard namespace now refers to the premium namespace. You won't need the standard namespace anymore. Deleting the standard namespace that you migrated helps reduce later confusion.
### How much downtime do I expect?
During migration, the actual message data/payload isn't copied from the standard
However, if you can migrate during a planned maintenance/housekeeping window, and you don't want to manually drain and send the messages, follow these steps:

1. Stop the sender applications. The receiver applications will process the messages that are currently in the standard namespace and will drain the queue.
-1. After the queues and subscriptions in the standard Namespace are empty, follow the procedure that is described earlier to execute the migration from the standard to the premium namespace.
+1. After the queues and subscriptions in the standard namespace are empty, follow the procedure that is described earlier to execute the migration from the standard to the premium namespace.
1. After the migration is complete, you can restart the sender applications.
1. The senders and receivers will now automatically connect with the premium namespace.
service-fabric How To Managed Cluster Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-networking.md
The following steps describe how to enable public IP on your node.
```json
{
- "name": "Secondary Node Type",
+ "name": "<secondary_node_type_name>",
"apiVersion": "2023-02-01-preview", "properties": { "isPrimary" : false,
- "vmImageResourceId": "/subscriptions/<SubscriptionID>/resourceGroups/<myRG>/providers/Microsoft.Compute/images/<MyCustomImage>",
+ "vmImageResourceId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Compute/images/<your_custom_image>",
"vmSize": "Standard_D2", "vmInstanceCount": 5, "dataDiskSizeGB": 100,
The following steps describe how to enable public IP on your node.
"ipAddress": "<ip_address_0>", "ipConfiguration": { "id": "<configuration_id_0>",
- "resourceGroup": "<your_resource_group"
+ "resourceGroup": "<your_resource_group>"
}, "ipTags": [], "name": "<name>", "provisioningState": "Succeeded", "publicIPAddressVersion": "IPv4", "publicIPAllocationMethod": "Static",
- "resourceGroup": "<your_resource_group",
+ "resourceGroup": "<your_resource_group>",
"resourceGuid": "resource_guid_0", "sku": { "name": "Standard"
The following steps describe how to enable public IP on your node.
"ipAddress": "<ip_address_1>", "ipConfiguration": { "id": "<configuration_id_1>",
- "resourceGroup": "<your_resource_group"
+ "resourceGroup": "<your_resource_group>"
}, "ipTags": [], "name": "<name>",
The following steps describe how to enable public IP on your node.
"ipAddress": "<ip_address_2>", "ipConfiguration": { "id": "<configuration_id_2>",
- "resourceGroup": "<your_resource_group"
+ "resourceGroup": "<your_resource_group>"
}, "ipTags": [], "name": "<name>",
service-fabric Managed Cluster Deny Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/managed-cluster-deny-assignment.md
+
+ Title: Deny assignment policy for Service Fabric managed clusters
+description: An overview of the deny assignment policy for Service Fabric managed clusters.
+++++ Last updated : 08/18/2023++
+# Deny assignment policy for Service Fabric managed clusters
+
+Deny assignment policies for Service Fabric managed clusters enable customers to protect the resources of their clusters. Deny assignments attach a set of deny actions to a user, group, or service principal at a particular scope to deny access. Limiting access to certain actions can help prevent users from inadvertently damaging their clusters when they delete, deallocate, restart, or reimage their clusters' scale sets directly in the infrastructure resource group, which can cause the cluster's resources to become unsynchronized with the data in the managed cluster.
+
+All actions that are related to managed clusters should be done through the managed cluster resource APIs instead of directly against the infrastructure resource group. Using the resource APIs ensures the resources of the cluster are synchronized with the data in the managed cluster.
+
+This feature ensures that the correct, supported APIs are used when performing delete operations to avoid any errors.
+
+You can learn more about deny assignments in the [Azure role-based access control (RBAC) documentation](../role-based-access-control/deny-assignments.md).
+
+## Best practices
+
+The following are some best practices to minimize the threat of desyncing your cluster's resources:
+* Instead of deleting virtual machine scale sets directly from the managed resource group, use NodeType level APIs to delete the NodeType or virtual machine scale set. Options include the Node blade on the Azure portal and [Azure PowerShell](/powershell/module/az.servicefabric/remove-azservicefabricmanagednodetype).
+* Use the correct APIs to restart or reimage your scale sets, as shown in the sketch after this list:
+ * [Virtual machine scale set restarts](/powershell/module/az.servicefabric/restart-azservicefabricmanagednodetype)
+ * [Virtual machine scale set reimage](/powershell/module/az.servicefabric/set-azservicefabricmanagednodetype)
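+
+A sketch of those calls with the `Az.ServiceFabric` module; the cmdlet names come from the links above, while the resource names and parameters shown are illustrative and should be checked against your module version:
+
+```azurepowershell
+# Restart nodes of a node type through the supported node type API, not the scale set.
+Restart-AzServiceFabricManagedNodeType -ResourceGroupName "myRg" -ClusterName "myCluster" -Name "NT2" -NodeName "NT2_0"
+
+# Reimage nodes of a node type.
+Set-AzServiceFabricManagedNodeType -ResourceGroupName "myRg" -ClusterName "myCluster" -Name "NT2" -NodeName "NT2_0" -Reimage
+```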
+
+## Next steps
+
+* Learn more about [granting permission to access resources on managed clusters](how-to-managed-cluster-grant-access-other-resources.md)
service-fabric Service Fabric Cluster Creation Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-create-template.md
+ Last updated 07/14/2022
site-recovery Avs Tutorial Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-replication.md
Previously updated : 08/23/2022 Last updated : 08/18/2023
site-recovery Hyper V Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md
Title: Set up Hyper-V disaster recovery by using Azure Site Recovery
-description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without SCVMM) to Azure by using Site Recovery.
+description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without SCVMM) to Azure by using Site Recovery and MARS.
Last updated 05/04/2023
It's important to prepare the infrastructure before you set up disaster recovery
### Source settings
-To set up the source environment, you create a Hyper-V site. You add to the site the Hyper-V hosts that contain VMs you want to replicate. Then, you download and install the Azure Site Recovery provider and the Azure Recovery Services agent on each host, and register the Hyper-V site in the vault.
+To set up the source environment, you create a Hyper-V site. You add to the site the Hyper-V hosts that contain VMs you want to replicate. Then, you download and install the Azure Site Recovery provider and the Microsoft Azure Recovery Services (MARS) agent for Azure Site Recovery on each host, and register the Hyper-V site in the vault.
1. On **Prepare infrastructure**, on the **Source settings** tab, complete these steps:
   1. For **Are you Using System Center VMM to manage Hyper-V hosts?**, select **No**.
Site Recovery checks for compatible Azure storage accounts and networks in your
#### Install the provider
-Install the downloaded setup file (*AzureSiteRecoveryProvider.exe*) on each Hyper-V host that you want to add to the Hyper-V site. Setup installs the Site Recovery provider and the Recovery Services agent on each Hyper-V host.
+Install the downloaded setup file (*AzureSiteRecoveryProvider.exe*) on each Hyper-V host that you want to add to the Hyper-V site. Setup installs the Site Recovery provider and the Recovery Services agent (MARS for Azure Site Recovery) on each Hyper-V host.
1. Run the setup file.
1. In the Azure Site Recovery provider setup wizard, for **Microsoft Update**, opt in to use Microsoft Update to check for provider updates.
site-recovery Service Updates How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/service-updates-how-to.md
Title: Updates and component upgrades in Azure Site Recovery
-description: Provides an overview of Azure Site Recovery service updates, and component upgrades.
+description: Provides an overview of Azure Site Recovery service updates, MARS agent and component upgrades.
We recommend always upgrading to the latest component versions:
Review the latest update rollup (version N) in [this article](site-recovery-whats-new.md). Remember that Site Recovery provides support for N-4 versions.
-
## Component expiry

Site Recovery notifies you of expired components (or nearing expiry) by email (if you subscribed to email notifications), or on the vault dashboard in the portal.
The example in the table shows how this works.
## Between an on-premises VMM site and Azure
+
1. Download the update for the Microsoft Azure Site Recovery Provider.
2. Install the Provider on the VMM server. If VMM is deployed in a cluster, install the Provider on all cluster nodes.
-3. Install the latest Microsoft Azure Recovery Services agent on all Hyper-V hosts or cluster nodes.
-
+3. Install the latest Microsoft Azure Recovery Services agent (MARS for Azure Site Recovery) on all Hyper-V hosts or cluster nodes.
## Between two on-premises VMM sites
+
1. Download the latest update for the Microsoft Azure Site Recovery Provider.
2. Install the latest Provider on the VMM server managing the secondary recovery site. If VMM is deployed in a cluster, install the Provider on all cluster nodes.
3. After the recovery site is updated, install the Provider on the VMM server that's managing the primary site.

## Next steps
-Follow our [Azure Updates](https://azure.microsoft.com/updates/?product=site-recovery) page to track new updates and releases.
+Follow our [Azure Updates](https://azure.microsoft.com/updates/?product=site-recovery) page to track new updates and releases.
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
Previously updated : 08/01/2023 Last updated : 08/16/2023 # Add Azure Automation runbooks to recovery plans
spring-apps How To Configure Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-palo-alto.md
This article describes how to use Azure Spring Apps with a Palo Alto firewall.
-For example, the [Azure Spring Apps reference architecture](./reference-architecture.md) includes an Azure Firewall to secure your applications. However, if your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Apps deployment and use Palo Alto instead, as described in this article.
+If your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Apps deployment and use Palo Alto instead, as described in this article.
-You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
+You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
-> [!Note]
+> [!NOTE]
> In describing the use of REST APIs, this article uses the PowerShell variable syntax to indicate names and values that are left to your discretion. Be sure to use the same values in all the steps.
>
> After you've configured the TLS/SSL certificate in Palo Alto, remove the `-SkipCertificateCheck` argument from all Palo Alto REST API calls in the examples below.
spring-apps How To Create User Defined Route Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-create-user-defined-route-instance.md
The following example shows how to add rules to your firewall. For more informat
az network firewall network-rule create \
    --resource-group $RG \
    --firewall-name $FWNAME \
- --collection-name 'asafwnr' -n 'apiudp' \
- --protocols 'UDP' \
- --source-addresses '*' \
- --destination-addresses "AzureCloud" \
- --destination-ports 1194 \
- --action allow \
- --priority 100
-az network firewall network-rule create \
- --resource-group $RG \
- --firewall-name $FWNAME \
- --collection-name 'asafwnr' -n 'springcloudtcp' \
+ --collection-name 'asafwnr' \
+ --name 'springcloudtcp' \
    --protocols 'TCP' \
    --source-addresses '*' \
    --destination-addresses "AzureCloud" \
    --destination-ports 443 445
-az network firewall network-rule create \
- --resource-group $RG \
- --firewall-name $FWNAME \
- --collection-name 'asafwnr' \
- --name 'time' \
- --protocols 'UDP' \
- --source-addresses '*' \
- --destination-fqdns 'ntp.ubuntu.com' \
- --destination-ports 123
# Add firewall application rules.
az network firewall application-rule create \
    --collection-name 'aksfwar' \
    --name 'fqdn' \
    --source-addresses '*' \
- --protocols 'http=80' 'https=443' \
+ --protocols 'https=443' \
    --fqdn-tags "AzureKubernetesService" \
    --action allow --priority 100
```
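After these edits, the `asafwnr` collection contains a single TCP network rule. The following is a sketch reconstructed from the diff above, not the article's verbatim listing; note that `az network firewall network-rule create` requires `--action` and `--priority` when it creates a rule collection for the first time, so they're included here.

```
# Consolidated network rule after the changes above: one TCP rule that
# allows Azure Spring Apps traffic to the AzureCloud tag on 443 and 445.
# --action and --priority are required if this call creates the
# 'asafwnr' collection.
az network firewall network-rule create \
    --resource-group $RG \
    --firewall-name $FWNAME \
    --collection-name 'asafwnr' \
    --name 'springcloudtcp' \
    --protocols 'TCP' \
    --source-addresses '*' \
    --destination-addresses "AzureCloud" \
    --destination-ports 443 445 \
    --action allow \
    --priority 100
```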
spring-apps Quickstart Configure Single Sign On Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-configure-single-sign-on-enterprise.md
To complete the single sign-on experience, use the following steps to deploy the
    --name identity-routes \
    --service <Azure-Spring-Apps-service-instance-name> \
    --app-name identity-service \
- --routes-file azure/routes/identity-service.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/identity-service.json
```
## Configure single sign-on for Spring Cloud Gateway
spring-apps Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps-enterprise.md
Use the following steps to deploy and build applications. For these steps, make
    --resource-group <resource-group-name> \
    --name quickstart-builder \
    --service <Azure-Spring-Apps-service-instance-name> \
- --builder-file azure/builder.json
+ --builder-file azure-spring-apps-enterprise/resources/json/tbs/builder.json
```
1. Use the following command to build and deploy the payment service:
Use the following steps to configure Spring Cloud Gateway and configure routes t
    --name cart-routes \
    --service <Azure-Spring-Apps-service-instance-name> \
    --app-name cart-service \
- --routes-file azure/routes/cart-service.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/cart-service.json
```
1. Use the following command to create routes for the order service:
Use the following steps to configure Spring Cloud Gateway and configure routes t
    --name order-routes \
    --service <Azure-Spring-Apps-service-instance-name> \
    --app-name order-service \
- --routes-file azure/routes/order-service.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/order-service.json
```
1. Use the following command to create routes for the catalog service:
Use the following steps to configure Spring Cloud Gateway and configure routes t
    --name catalog-routes \
    --service <Azure-Spring-Apps-service-instance-name> \
    --app-name catalog-service \
- --routes-file azure/routes/catalog-service.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/catalog-service.json
```
1. Use the following command to create routes for the frontend:
Use the following steps to configure Spring Cloud Gateway and configure routes t
    --name frontend-routes \
    --service <Azure-Spring-Apps-service-instance-name> \
    --app-name frontend \
- --routes-file azure/routes/frontend.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/frontend.json
```
1. Use the following commands to retrieve the URL for Spring Cloud Gateway:
spring-apps Quickstart Deploy Infrastructure Vnet Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-azure-cli.md
The Enterprise deployment plan includes the following Tanzu components:
## Review the Azure CLI deployment script
-The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
+The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture).
### [Standard plan](#tab/azure-spring-apps-standard)
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI).
* Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps.
* Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-apps Quickstart Deploy Infrastructure Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-bicep.md
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI).
* Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps.
* Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-apps Quickstart Deploy Infrastructure Vnet Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-terraform.md
For more customization including custom domain support, see the [Azure Spring Ap
## Review the Terraform plan
-The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
+The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture).
### [Standard plan](#tab/azure-spring-apps-standard)
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI)
* Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps.
* Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-apps Quickstart Deploy Infrastructure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet.md
The Enterprise deployment plan includes the following Tanzu components:
## Review the template
-The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](reference-architecture.md).
+The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](/previous-versions/azure/spring-apps/reference-architecture).
### [Standard plan](#tab/azure-spring-apps-standard)
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI)
* Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps.
* Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
* Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md).
spring-apps Quickstart Integrate Azure Database And Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-and-redis-enterprise.md
The following instructions describe how to provision an Azure Cache for Redis an
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/azure/templates/azuredeploy.json).
+You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/azure-spring-apps-enterprise/resources/json/deploy/azuredeploy.json).
To deploy this template, follow these steps:

1. Select the following image to sign in to Azure and open a template. The template creates an Azure Cache for Redis and an Azure Database for PostgreSQL Flexible Server.
- :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy the ARM template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Facme-fitness-store%2FAzure%2Fazure%2Ftemplates%2Fazuredeploy.json":::
+ :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy the ARM template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Facme-fitness-store%2FAzure%2Fazure-spring-apps-enterprise%2Fresources%2Fjson%2Fdeploy%2Fazuredeploy.json":::
1. Enter values for the following fields:
spring-apps Quickstart Set Request Rate Limits Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-set-request-rate-limits-enterprise.md
az spring gateway route-config update \
    --service <Azure-Spring-Apps-service-instance-name> \
    --name catalog-routes \
    --app-name catalog-service \
- --routes-file azure/routes/catalog-service_rate-limit.json
+ --routes-file azure-spring-apps-enterprise/resources/json/routes/catalog-service_rate-limit.json
```
Use the following commands to retrieve the URL for the `/products` route in Spring Cloud Gateway:
spring-apps Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/reference-architecture.md
- Previously updated : 05/31/2022-- Title: Azure Spring Apps reference architecture---
-description: This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps.
--
-# Azure Spring Apps reference architecture
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-**This article applies to:** ✔️ Standard ✔️ Enterprise
-
-This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps. In the design, Azure Spring Apps is deployed in a single spoke that's dependent on shared services hosted in the hub. The architecture is built with components to achieve the tenets in the [Microsoft Azure Well-Architected Framework][16].
-
-There are two flavors of Azure Spring Apps: Standard plan and Enterprise plan.
-
-The Azure Spring Apps Standard plan is composed of the Spring Cloud Config Server, the Spring Cloud Service Registry, and the kpack build service.
-
-The Azure Spring Apps Enterprise plan is composed of the VMware Tanzu® Build Service™, Application Configuration Service for VMware Tanzu®, VMware Tanzu® Service Registry, Spring Cloud Gateway for VMware Tanzu®, and API portal for VMware Tanzu®.
-
-For an implementation of this architecture, see the [Azure Spring Apps Reference Architecture][10] on GitHub.
-
-Deployment options for this architecture include Azure Resource Manager (ARM), Terraform, Azure CLI, and Bicep. The artifacts in this repository provide a foundation that you can customize for your environment. You can group resources such as Azure Firewall or Application Gateway into different resource groups or subscriptions. This grouping helps keep different functions separate, such as IT infrastructure, security, business application teams, and so on.
-
-## Planning the address space
-
-Azure Spring Apps requires two dedicated subnets:
-
-* Service runtime
-* Spring Boot applications
-
-Each of these subnets requires a dedicated Azure Spring Apps cluster. Multiple clusters can't share the same subnets. The minimum size of each subnet is /28. The number of application instances that Azure Spring Apps can support varies based on the size of the subnet. You can find the detailed virtual network requirements in the [Virtual network requirements][11] section of [Deploy Azure Spring Apps in a virtual network][17].
-
-> [!WARNING]
-> The selected subnet size can't overlap with the existing virtual network address space, and shouldn't overlap with any peered or on-premises subnet address ranges.
-
-## Use cases
-
-Typical uses for this architecture include:
-
-* Private applications: Internal applications deployed in hybrid cloud environments
-* Public applications: Externally facing applications
-
-These use cases are similar except for their security and network traffic rules. This architecture is designed to support the nuances of each.
-
-## Private applications
-
-The following list describes the infrastructure requirements for private applications. These requirements are typical in highly regulated environments.
-
-* A subnet must only have one instance of Azure Spring Apps.
-* Adherence to at least one Security Benchmark should be enforced.
-* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS.
-* Azure service dependencies should communicate through Service Endpoints or Private Link.
-* Data at rest should be encrypted.
-* Data in transit should be encrypted.
-* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps.
-* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall).
-* If [Azure Spring Apps Config Server][8] is used to load config properties from a repository, the repository must be private.
-* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault.
-* Name resolution of hosts on-premises and in the Cloud should be bidirectional.
-* No direct egress to the public Internet except for control plane traffic.
-* Resource Groups managed by the Azure Spring Apps deployment must not be modified.
-* Subnets managed by the Azure Spring Apps deployment must not be modified.
-
-The following list shows the components that make up the design:
-
-* On-premises network
- * Domain Name Service (DNS)
- * Gateway
-* Hub subscription
- * Application Gateway Subnet
- * Azure Firewall Subnet
- * Shared Services Subnet
-* Connected subscription
- * Azure Bastion Subnet
- * Virtual Network Peer
-
-The following list describes the Azure services in this reference architecture:
-
-* [Azure Key Vault][2]: a hardware-backed credential management service that has tight integration with Microsoft identity services and compute resources.
-
-* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises.
-
-* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps.
-
-* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure.
-
-* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
-
-The following diagrams represent a well-architected hub and spoke design that addresses the above requirements:
-
-### [Standard plan](#tab/azure-spring-standard)
--
-### [Enterprise plan](#tab/azure-spring-enterprise)
----
-## Public applications
-
-The following list describes the infrastructure requirements for public applications. These requirements are typical in highly regulated environments.
-
-* A subnet must only have one instance of Azure Spring Apps.
-* Adherence to at least one Security Benchmark should be enforced.
-* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS.
-* Azure DDoS Protection should be enabled.
-* Azure service dependencies should communicate through Service Endpoints or Private Link.
-* Data at rest should be encrypted.
-* Data in transit should be encrypted.
-* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps.
-* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall).
-* Ingress traffic should be managed by at least Application Gateway or Azure Front Door.
-* Internet routable addresses should be stored in Azure Public DNS.
-* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault.
-* Name resolution of hosts on-premises and in the Cloud should be bidirectional.
-* No direct egress to the public Internet except for control plane traffic.
-* Resource Groups managed by the Azure Spring Apps deployment must not be modified.
-* Subnets managed by the Azure Spring Apps deployment must not be modified.
-
-The following list shows the components that make up the design:
-
-* On-premises network
- * Domain Name Service (DNS)
- * Gateway
-* Hub subscription
- * Application Gateway Subnet
- * Azure Firewall Subnet
- * Shared Services Subnet
-* Connected subscription
- * Azure Bastion Subnet
- * Virtual Network Peer
-
-The following list describes the Azure services in this reference architecture:
-
-* [Azure Application Firewall][7]: a feature of Azure Application Gateway that provides centralized protection of applications from common exploits and vulnerabilities.
-
-* [Azure Application Gateway][6]: a load balancer responsible for application traffic with Transport Layer Security (TLS) offload operating at layer 7.
-
-* [Azure Key Vault][2]: a hardware-backed credential management service that has tight integration with Microsoft identity services and compute resources.
-
-* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises.
-
-* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps.
-
-* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure.
-
-* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
-
-The following diagrams represent a well-architected hub and spoke design that addresses the above requirements. Only the hub-virtual-network communicates with the internet:
-
-### [Standard plan](#tab/azure-spring-standard)
--
-### [Enterprise plan](#tab/azure-spring-enterprise)
----
-## Azure Spring Apps on-premises connectivity
-
-Applications in Azure Spring Apps can communicate to various Azure, on-premises, and external resources. By using the hub and spoke design, applications can route traffic externally or to the on-premises network using Express Route or Site-to-Site Virtual Private Network (VPN).
-
-## Azure Well-Architected Framework considerations
-
-The [Azure Well-Architected Framework][16] is a set of guiding tenets to follow in establishing a strong infrastructure foundation. The framework contains the following categories: cost optimization, operational excellence, performance efficiency, reliability, and security.
-
-### Cost optimization
-
-Because of the nature of distributed system design, infrastructure sprawl is a reality. This reality results in unexpected and uncontrollable costs. Azure Spring Apps is built using components that scale so that it can meet demand and optimize cost. The core of this architecture is the Azure Kubernetes Service (AKS). The service is designed to reduce the complexity and operational overhead of managing Kubernetes, which includes efficiencies in the operational cost of the cluster.
-
-You can deploy different applications and application types to a single instance of Azure Spring Apps. The service supports autoscaling of applications triggered by metrics or schedules that can improve utilization and cost efficiency.
-
-You can also use Application Insights and Azure Monitor to lower operational cost. With the visibility provided by the comprehensive logging solution, you can implement automation to scale the components of the system in real time. You can also analyze log data to reveal inefficiencies in the application code that you can address to improve the overall cost and performance of the system.
-
-### Operational excellence
-
-Azure Spring Apps addresses multiple aspects of operational excellence. You can combine these aspects to ensure that the service runs efficiently in production environments, as described in the following list:
-
-* You can use Azure Pipelines to ensure that deployments are reliable and consistent while helping you avoid human error.
-* You can use Azure Monitor and Application Insights to store log and telemetry data.
- You can assess collected log and metric data to ensure the health and performance of your applications. Application Performance Monitoring (APM) is fully integrated into the service through a Java agent. This agent provides visibility into all the deployed applications and dependencies without requiring extra code. For more information, see the blog post [Effortlessly monitor applications and dependencies in Azure Spring Apps][15].
-* You can use Microsoft Defender for Cloud to ensure that applications maintain security by providing a platform to analyze and assess the data provided.
-* The service supports various deployment patterns. For more information, see [Set up a staging environment in Azure Spring Apps][14].
-
-### Reliability
-
-Azure Spring Apps is built on AKS. While AKS provides a level of resiliency through clustering, this reference architecture goes even further by incorporating services and architectural considerations to increase availability of the application if there's component failure.
-
-By building on top of a well-defined hub and spoke design, the foundation of this architecture ensures that you can deploy it to multiple regions. For the private application use case, the architecture uses Azure Private DNS to ensure continued availability during a geographic failure. For the public application use case, Azure Front Door and Azure Application Gateway ensure availability.
-
-### Security
-
-The security of this architecture is addressed by its adherence to industry-defined controls and benchmarks. In this context, "control" means a concise and well-defined best practice, such as "Employ the least privilege principle when implementing information system access" (IAM-05). The controls in this architecture are from the [Cloud Control Matrix][19] (CCM) by the [Cloud Security Alliance][18] (CSA) and the [Microsoft Azure Foundations Benchmark][20] (MAFB) by the [Center for Internet Security][21] (CIS). In the applied controls, the focus is on the primary security design principles of governance, networking, and application security. It is your responsibility to handle the design principles of Identity, Access Management, and Storage as they relate to your target infrastructure.
-
-#### Governance
-
-The primary aspect of governance that this architecture addresses is segregation through the isolation of network resources. In the CCM, DCS-08 recommends ingress and egress control for the datacenter. To satisfy the control, the architecture uses a hub and spoke design using Network Security Groups (NSGs) to filter east-west traffic between resources. The architecture also filters traffic between central services in the hub and resources in the spoke. The architecture uses an instance of Azure Firewall to manage traffic between the internet and the resources within the architecture.
-
-The following list shows the control that addresses datacenter security in this reference:
-
-| CSA CCM Control ID | CSA CCM Control Domain |
-|:-|:--|
-| DCS-08 | Datacenter Security Unauthorized Persons Entry |
-
-#### Network
-
-The network design supporting this architecture is derived from the traditional hub and spoke model. This decision ensures that network isolation is a foundational construct. CCM control IVS-06 recommends that traffic between networks and virtual machines is restricted and monitored between trusted and untrusted environments. This architecture adopts the control by implementation of the NSGs for east-west traffic (within the "data center"), and the Azure Firewall for north-south traffic (outside of the "data center"). CCM control IPY-04 recommends that the infrastructure should use secure network protocols for the exchange of data between services. The Azure services supporting this architecture all use standard secure protocols such as TLS for HTTP and SQL.
-
-The following list shows the CCM controls that address network security in this reference:
-
-| CSA CCM Control ID | CSA CCM Control Domain |
-| :-- | :-|
-| IPY-04 | Network Protocols |
-| IVS-06 | Network Security |
-
-The network implementation is further secured by defining controls from the MAFB. The controls ensure that traffic into the environment is restricted from the public Internet.
-
-The following list shows the CIS controls that address network security in this reference:
-
-| CIS Control ID | CIS Control Description |
-|:|:|
-| 6.2 | Ensure that SSH access is restricted from the internet. |
-| 6.3 | Ensure no SQL Databases allow ingress 0.0.0.0/0 (ANY IP). |
-| 6.5 | Ensure that Network Watcher is 'Enabled'. |
-| 6.6 | Ensure that ingress using UDP is restricted from the internet. |
-
-Azure Spring Apps requires management traffic to egress from Azure when deployed in a secured environment. You must allow the network and application rules listed in [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
-
-#### Application security
-
-This design principle covers the fundamental components of identity, data protection, key management, and application configuration. By design, an application deployed in Azure Spring Apps runs with least privilege required to function. The set of authorization controls is directly related to data protection when using the service. Key management strengthens this layered application security approach.
-
-The following list shows the CCM controls that address key management in this reference:
-
-| CSA CCM Control ID | CSA CCM Control Domain |
-|:-|:--|
-| EKM-01 | Encryption and Key Management Entitlement |
-| EKM-02 | Encryption and Key Management Key Generation |
-| EKM-03 | Encryption and Key Management Sensitive Data Protection |
-| EKM-04 | Encryption and Key Management Storage and Access |
-
-From the CCM, EKM-02 and EKM-03 recommend policies and procedures to manage keys and to use encryption protocols to protect sensitive data. EKM-01 recommends that all cryptographic keys have identifiable owners so that they can be managed. EKM-04 recommends the use of standard algorithms.
-
-The following list shows the CIS controls that address key management in this reference:
-
-| CIS Control ID | CIS Control Description |
-|:|:-|
-| 8.1 | Ensure that the expiration date is set on all keys. |
-| 8.2 | Ensure that the expiration date is set on all secrets. |
-| 8.4 | Ensure the key vault is recoverable. |
-
-The CIS controls 8.1 and 8.2 recommend that expiration dates are set for credentials to ensure that rotation is enforced. CIS control 8.4 ensures that the contents of the key vault can be restored to maintain business continuity.
-
-The aspects of application security set a foundation for the use of this reference architecture to support a Spring workload in Azure.
-
-## Next steps
-
-Explore this reference architecture through the ARM, Terraform, and Azure CLI deployments available in the [Azure Spring Apps Reference Architecture][10] repository.
-
-<!-- Reference links in article -->
-[1]: ./index.yml
-[2]: ../key-vault/index.yml
-[3]: ../azure-monitor/index.yml
-[4]: ../security-center/index.yml
-[5]: /azure/devops/pipelines/
-[6]: ../application-gateway/index.yml
-[7]: ../web-application-firewall/index.yml
-[8]: ./how-to-config-server.md
-[9]: https://steeltoe.io/
-[10]: https://github.com/Azure/azure-spring-apps-landing-zone-accelerator/tree/reference-architecture
-[11]: ./how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements
-[12]: ./vnet-customer-responsibilities.md#azure-spring-apps-network-requirements
-[13]: ./vnet-customer-responsibilities.md#azure-spring-apps-fqdn-requirements--application-rules
-[14]: ./how-to-staging-environment.md
-[15]: https://devblogs.microsoft.com/java/monitor-applications-and-dependencies-in-azure-spring-cloud/
-[16]: /azure/architecture/framework/
-[17]: ./how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements
-[18]: https://cloudsecurityalliance.org/
-[19]: https://cloudsecurityalliance.org/research/working-groups/cloud-controls-matrix
-[20]: /azure/security/benchmarks/v2-cis-benchmark
-[21]: https://www.cisecurity.org/
spring-apps Secure Communications End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/secure-communications-end-to-end.md
Azure Spring Apps is jointly built, operated, and supported by Microsoft and VMw
- [Deploy Spring microservices to Azure](/training/modules/azure-spring-cloud-workshop/)
- [Azure Key Vault Certificates Spring Cloud Azure Starter (GitHub.com)](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates/pom.xml)
-- [Azure Spring Apps reference architecture](reference-architecture.md)
+- [Azure Spring Apps architecture design](/azure/architecture/web-apps/spring-apps?toc=/azure/spring-apps/toc.json&bc=/azure/spring-apps/breadcrumb/toc.json)
- Migrate your [Spring Boot](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps), [Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps), and [Tomcat](/azure/developer/java/migration/migrate-tomcat-to-azure-spring-apps) applications to Azure Spring Apps
spring-apps Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md
The following list shows the resource requirements for Azure Spring Apps service
| Destination Endpoint | Port | Use | Note |
|--|--|--|--|
| \*:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Apps Service Management. | Information of service instance "requiredTraffics" could be known in resource payload, under "networkProfile" section. |
-| \*:123 *or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | |
| \*.azurecr.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
| \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
| \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
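As the **Note** column indicates, several of these endpoint rules can be replaced by enabling service endpoints on the subnets that host Azure Spring Apps. The following is a minimal sketch with placeholder resource names (not taken from the article):

```
# Hypothetical names; replace with your resource group, VNet, and the
# subnet delegated to Azure Spring Apps. Enabling these service endpoints
# lets the corresponding FQDN/port rules above be dropped.
az network vnet subnet update \
    --resource-group my-rg \
    --vnet-name my-vnet \
    --name service-runtime-subnet \
    --service-endpoints Microsoft.ContainerRegistry Microsoft.Storage Microsoft.EventHub
```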
Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the
|--|--|--|
| <i>*.azmk8s.io</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
| <i>mcr.microsoft.com</i> | HTTPS:443 | Microsoft Container Registry (MCR). |
-| <i>*.cdn.mscr.io</i> | HTTPS:443 | MCR storage backed by the Azure CDN. |
| <i>*.data.mcr.microsoft.com</i> | HTTPS:443 | MCR storage backed by the Azure CDN. |
| <i>management.azure.com</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |
-| <i>*login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
-| <i>*login.microsoft.com</i> | HTTPS:443 | Azure Active Directory authentication. |
+| <i>login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. |
| <i>packages.microsoft.com</i> | HTTPS:443 | Microsoft packages repository. |
| <i>acs-mirror.azureedge.net</i> | HTTPS:443 | Repository required to install required binaries like kubenet and Azure CNI. |
-| *mscrl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
-| *crl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. |
-| *crl3.digicert.com*<sup>1</sup> | HTTPS:80 | Third-Party TLS/SSL Certificate Chain Paths. |
-
-<sup>1</sup> Please note that these FQDNs aren't included in the FQDN tag.
## Azure Spring Apps optional FQDN for third-party application performance management
storage-mover Agent Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md
The agent displays detailed progress. Once the registration is complete, you're
To accomplish seamless authentication with Azure and authorization to various Azure resources, the agent is registered with the following Azure services:

- Azure Storage Mover (Microsoft.StorageMover)
-- Azure ARC (Microsoft.HybridCompute)
+- Azure Arc (Microsoft.HybridCompute)
### Azure Storage Mover service
Registration to the Azure Storage mover service is visible and manageable throug
You can reference this Azure Resource Manager (ARM) resource when you want to assign migration jobs to the specific agent VM it symbolizes.
-### Azure ARC service
+### Azure Arc service
-The agent is also registered with the [Azure ARC service](../azure-arc/overview.md). ARC is used to assign and maintain an [Azure AD managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent.
+The agent is also registered with the [Azure Arc service](../azure-arc/overview.md). Arc is used to assign and maintain an [Azure AD managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent.
Azure Storage Mover uses a system-assigned managed identity. A managed identity is a service principal of a special type that can only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is also automatically removed. The process of deletion is automatically initiated when you unregister the agent. However, there are other ways to remove this identity. Doing so incapacitates the registered agent and requires the agent to be unregistered. Only the registration process can get an agent to obtain and maintain its Azure identity properly.

> [!NOTE]
-> During public preview, there is a side effect of the registration with the Azure ARC service. A separate resource of the type *Server-Azure Arc* is also deployed in the same resource group as your storage mover resource. You won't be able to manage the agent through this resource.
+> During public preview, there is a side effect of the registration with the Azure Arc service. A separate resource of the type *Server-Azure Arc* is also deployed in the same resource group as your storage mover resource. You won't be able to manage the agent through this resource.
It may appear that you're able to manage aspects of the storage mover agent through the *Server-Azure Arc* resource, but in most cases you can't. It's best to exclusively manage the agent through the *Registered agents* pane in your storage mover resource or through the local administrative shell.

> [!WARNING]
-> Do not delete the Azure ARC server resource that is created for a registered agent in the same resource group as the storage mover resource. The only safe time to delete this resource is when you previously unregistered the agent this resource corresponds to.
+> Do not delete the Azure Arc server resource that is created for a registered agent in the same resource group as the storage mover resource. The only safe time to delete this resource is when you previously unregistered the agent this resource corresponds to.
### Authorization
For a migration job, access to the target endpoint is perhaps the most important
These assignments are made in the admin's sign-in context in the Azure portal. Therefore, the admin must be a member of the role-based access control (RBAC) control plane role "Owner" for the target container. This assignment is made just-in-time when you start a migration job. It is at this point that you've selected an agent to execute a migration job. As part of this start action, the agent is given permissions to the data plane of the target container. The agent isn't authorized to perform any management plane actions, such as deleting the target container or configuring any features on it.

> [!WARNING]
-> Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure ARC services.
+> Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure Arc services.
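To revoke a specific agent's access manually, one option is to delete the role assignment held by the agent's managed identity on the target container. The following is a minimal sketch with placeholder values; the exact role name and scope format are assumptions to verify against your deployment:

```
# Hypothetical values; look up the principal ID of the agent's managed
# identity and the target container's resource ID first.
principalId="<agent-managed-identity-principal-id>"
scope="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/<container>"

# Review what the identity can do against the target, then remove it.
az role assignment list --assignee $principalId --scope $scope --output table
az role assignment delete --assignee $principalId --scope $scope
```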
## Next steps
storage-mover Endpoint Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/endpoint-manage.md
Previously updated : 08/07/2023 Last updated : 08/18/2023
REVIEW Engineering: not reviewed
EDIT PASS: started Initial doc score: 93
-Current doc score: 100 (3269 words and 0 issues)
+Current doc score: 100 (3365 words and 0 issues)
!######################################################## -->
While the term *endpoint* is often used in networking, it's used in the context of the Storage Mover service to describe a storage location with a high level of detail.
-A storage mover endpoint is a resource that contains the path to either a source or destination location and other relevant information. Endpoints are used in the creation of a job definition. Only certain types of endpoints may be used as a source or a target, respectively.
+A storage mover endpoint is a resource that contains the path to either a source or destination location and other relevant information. Endpoints are used in the creation of a job definition to define the source and target locations for a particular copy operation. Only certain types of endpoints may be used as a source or a target, respectively. For example, data contained within an NFS (Network File System) file share endpoint can only be copied to a blob storage container. Similarly, copy operations with an SMB-based (Server Message Block) file share source can only target an Azure file share.
This article guides you through the creation and management of Azure Storage Mover endpoints. To follow these examples, you need a top-level storage mover resource. If you haven't yet created one, follow the steps within the [Create a Storage Mover resource](storage-mover-create.md) article before continuing.
After you complete the steps within this article, you'll be able to create and m
Within the Azure Storage Mover resource hierarchy, a migration project is used to organize migration jobs into logical tasks or components. A migration project in turn contains at least one job definition, which describes both the source and target locations for your migration project. The [Understanding the Storage Mover resource hierarchy](resource-hierarchy.md) article contains more detailed information about the relationships between a Storage Mover, its endpoints, and its projects.
-Because a migration requires both a well-defined source and target, endpoints are parented to the top-level storage mover resource. This placement allows you to reuse endpoints across any number of job definitions. While there's only a single endpoint resource, the properties of each endpoint may vary based on its type. For example, NFS (Network File System) shares, SMB (Server Message Block) shares, and Azure Storage blob container endpoints each require fundamentally different information.
+Because a migration requires both a well-defined source and target, endpoints are parented to the top-level storage mover resource. This placement allows you to reuse endpoints across any number of job definitions. While there's only a single endpoint resource, the properties of each endpoint may vary based on its type. For example, NFS (Network File System) shares, SMB shares, and Azure Storage blob container endpoints each require fundamentally different information.
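Because all endpoints are parented to the storage mover resource, you can enumerate them in one place. The following is a minimal sketch assuming the `storage-mover` Azure CLI extension; the article itself demonstrates the portal and PowerShell, so the command and parameter names here are best-effort assumptions:

```
# Assumes the storage-mover CLI extension is installed:
#   az extension add --name storage-mover
# Resource names are placeholders.
az storage-mover endpoint list \
    --resource-group my-rg \
    --storage-mover-name my-storage-mover \
    --output table
```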
[!INCLUDE [protocol-endpoint-agent](includes/protocol-endpoint-agent.md)]
Agent access to both your Key Vault and target storage resources is controlled t
There are many use cases that require preserving metadata values such as file and folder timestamps, ACLs, and file attributes. Storage Mover supports the same level of file fidelity as the underlying Azure file share. Azure Files in turn [supports a subset](/rest/api/storageservices/set-file-properties) of the [NTFS file properties](/windows/win32/fileio/file-attribute-constants). The following table represents common metadata that is migrated:
-|Metadata property |Outcome |
-|--|--|
+|Metadata property |Outcome |
+|--||
|Directory structure |The original directory structure of the source is preserved on the target share. |
-|Access permissions |Permissions on the source file or directory are preserved on the target share. |
-|Symbolic links |Symbolic links on the source are preserved and mapped on the target share. |
+|Access permissions |Permissions on the source file or directory are preserved on the target share. |
+|Symbolic links |Symbolic links on the source are preserved and mapped on the target share. |
|Create timestamp |The original create timestamp of the source file is preserved on the target share. |
|Change timestamp |The original change timestamp of the source file is preserved on the target share. |
|Modified timestamp |The original modified timestamp of the source file is preserved on the target share. |
Follow the steps in this section to view endpoints accessible to your Storage Mo
1. On the **Storage endpoints** page, the default **Storage endpoints** view displays the names of any provisioned source endpoints and a summary of their associated properties. To view provisioned destination endpoint, select **Target endpoints**. You can filter the results further by selecting the **Protocol** or **Host** filters and the relevant option.
- :::image type="content" source="media/endpoint-manage/endpoint-filter.png" alt-text="Screenshot of the Storage Endpoints page within the Azure portal showing the endpoint details and the location of the target endpoint filters." lightbox="media/endpoint-manage/endpoint-filter-lrg.png":::
+ :::image type="content" source="media/endpoint-manage/endpoint-filter.png" alt-text="Screenshot of the Storage Endpoints page within the Azure portal showing endpoint details and the target endpoint filters location." lightbox="media/endpoint-manage/endpoint-filter-lrg.png":::
- At this time, the Azure Portal doesn't provide the ability to to directly modify provisioned endpoints. An endpoint's description, however, can be modified using Azure PowerShell by following [this example](endpoint-manage.md?tabs=powershell#view-and-edit-an-endpoints-properties). Endpoint resources that require updating within the Azure Portal should be deleted and recreated.
+ At this time, the Azure portal doesn't support the direct modification of provisioned endpoints. An endpoint's description, however, can be modified using Azure PowerShell by following [this example](endpoint-manage.md?tabs=powershell#view-and-edit-an-endpoints-properties). Endpoint resources that require updating within the Azure portal should be deleted and recreated.
### [PowerShell](#tab/powershell)
storage Blob V11 Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-dotnet.md
description: View code samples that use the Azure Blob Storage client library for .NET version 11.x. -+ Last updated 04/03/2023
storage Blob V11 Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-javascript.md
description: View code samples that use the Azure Blob Storage client library for JavaScript version 11.x. -+ Last updated 04/03/2023
storage Blob V2 Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v2-samples-python.md
description: View code samples that use the Azure Blob Storage client library for Python version 2.1. -+ Last updated 04/03/2023
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md
description: The Blob Storage client library supports client-side encryption and
-+ Last updated 12/12/2022
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
To set file and directory level permissions, see any of the following articles:
|REST API |[Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update)| > [!IMPORTANT]
-> If the security principal is a *service* principal, it's important to use the object ID of the service principal and not the object ID of the related app registration. To get the object ID of the service principal open the Azure CLI, and then use this command: `az ad sp show --id <Your App ID> --query objectId`. make sure to replace the `<Your App ID>` placeholder with the App ID of your app registration.
+> If the security principal is a *service* principal, it's important to use the object ID of the service principal and not the object ID of the related app registration. To get the object ID of the service principal open the Azure CLI, and then use this command: `az ad sp show --id <Your App ID> --query objectId`. Make sure to replace the `<Your App ID>` placeholder with the App ID of your app registration. The service principal is treated as a named user. You'll add this ID to the ACL as you would any named user. Named users are described later in this article.
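A minimal sketch that puts the lookup and the ACL change together (resource names are placeholders; on Azure CLI 2.37 and later the output property is `id` rather than `objectId`):

```
# Get the service principal's object ID (not the app registration's ID).
spId=$(az ad sp show --id <Your App ID> --query objectId --output tsv)

# Add the service principal to a directory ACL as a named user with
# read and execute permissions. Names below are placeholders.
az storage fs access update-recursive \
    --acl "user:${spId}:r-x" \
    --path my-directory \
    --file-system my-container \
    --account-name <storage-account> \
    --auth-mode login
```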
## Types of ACLs
storage Network File System Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-known-issues.md
Previously updated : 06/23/2021 Last updated : 08/18/2023 -+ # Known issues with Network File System (NFS) 3.0 protocol support for Azure Blob Storage
This article describes limitations and known issues of Network File System (NFS)
- GRS, GZRS, and RA-GRS redundancy options aren't supported when you create an NFS 3.0 storage account.
+- Access control lists (ACLs) can't be used to authorize an NFS 3.0 request. In fact, if the ACL of a blob or directory contains an entry for a named user or group, that file becomes inaccessible on the client for non-root users. You'll have to remove these entries to restore access to non-root users on the client. For information about how to remove an ACL entry for named users and groups, see [How to set ACLs](data-lake-storage-access-control.md#how-to-set-acls).
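A minimal sketch of removing such an entry with the Azure CLI (the object ID and resource names are placeholders; see the linked article for the supported procedures):

```
# Remove a named-user ACL entry recursively so non-root NFS clients can
# access the files again.
az storage fs access remove-recursive \
    --acl "user:<object-id-of-named-user>" \
    --path my-directory \
    --file-system my-container \
    --account-name <storage-account> \
    --auth-mode login
```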
+
## NFS 3.0 features
The following NFS 3.0 features aren't yet supported.
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
Previously updated : 06/21/2023 Last updated : 08/18/2023 -+ # Mount Blob Storage by using the Network File System (NFS) 3.0 protocol
Your storage account must be contained within a virtual network. A virtual netwo
## Step 2: Configure network security
-Currently, the only way to secure the data in your storage account is by using a virtual network and other network security settings. Any other tools used to secure data, including account key authorization, Azure Active Directory (Azure AD) security, and access control lists (ACLs), are not yet supported in accounts that have the NFS 3.0 protocol support enabled on them.
+Currently, the only way to secure the data in your storage account is by using a virtual network and other network security settings. See [Network security recommendations for Blob storage](security-recommendations.md#networking).
-To secure the data in your account, see these recommendations: [Network security recommendations for Blob storage](security-recommendations.md#networking).
+Any other tools used to secure data, including account key authorization, Azure Active Directory (Azure AD) security, and access control lists (ACLs), can't be used to authorize an NFS 3.0 request. In fact, if you add an entry for a named user or group to the ACL of a blob or directory, that file becomes inaccessible on the client for non-root users. You would have to remove that entry to restore access to non-root users on the client.
> [!IMPORTANT]
> The NFS 3.0 protocol uses ports 111 and 2048. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports.
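With the network rules in place, the mount itself is a standard NFS 3.0 mount from a Linux client. The following is a minimal sketch with placeholder account, container, and mount point names; the package install step assumes an Ubuntu or Debian client:

```
# Install the NFS client package, create a mount point, and mount the
# container over NFS 3.0. Account and container names are placeholders.
sudo apt install nfs-common
sudo mkdir -p /mnt/nfsdata
sudo mount -o sec=sys,vers=3,nolock,proto=tcp \
    <storage-account>.blob.core.windows.net:/<storage-account>/<container> /mnt/nfsdata
```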
storage Network File System Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support.md
Previously updated : 02/14/2023 Last updated : 08/18/2023 -+ # Network File System (NFS) 3.0 protocol support for Azure Blob Storage
For step-by-step guidance, see [Mount Blob storage by using the Network File Sys
## Network security
-Traffic must originate from a VNet. A VNet enables clients to securely connect to your storage account. The only way to secure the data in your account is by using a VNet and other network security settings. Any other tool used to secure data including account key authorization, Azure Active Directory (AD) security, and access control lists (ACLs) are not yet supported in accounts that have the NFS 3.0 protocol support enabled on them.
+Traffic must originate from a VNet. A VNet enables clients to securely connect to your storage account. The only way to secure the data in your account is by using a VNet and other network security settings. Any other tool used to secure data, including account key authorization, Azure Active Directory (AD) security, and access control lists (ACLs), can't be used to authorize an NFS 3.0 request.
To learn more, see [Network security recommendations for Blob storage](security-recommendations.md#networking).
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Object replication isn't supported for blobs in the source account that are encr
Customer-managed failover isn't supported for either the source or the destination account in an object replication policy.
-Object replication is not supported for blobs that are uploaded to the Data Lake Storage endpoint (`dfs.core.windows.net`) by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs.
+Object replication is not supported for blobs that are uploaded by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs.
## How object replication works
storage Sas Service Create Dotnet Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for .NET. -+ Last updated 06/22/2023
storage Sas Service Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for .NET. -+ Last updated 06/22/2023
storage Sas Service Create Java Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Java. -+ Last updated 06/23/2023
storage Sas Service Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Java. -+ Last updated 06/23/2023
storage Sas Service Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-javascript.md
description: Learn how to create a service shared access signature (SAS) for a container or blob using the Azure Blob Storage client library for JavaScript. -+ Last updated 01/19/2023
storage Sas Service Create Python Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Python. -+ Last updated 06/09/2023
storage Sas Service Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Python. -+ Last updated 06/09/2023
storage Simulate Primary Region Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/simulate-primary-region-failure.md
description: Simulate an error in reading data from the primary region when the storage account is configured for read-access geo-zone-redundant storage (RA-GZRS). -+ Last updated 09/06/2022
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
description: Learn how to use the .NET client library to create a read-only snap
-+ Last updated 08/27/2020 ms.devlang: csharp
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
-+ Last updated 03/15/2023
storage Storage Blob Account Delegation Sas Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-account-delegation-sas-create-javascript.md
description: Create and use account SAS tokens in a JavaScript application that
-+ Last updated 11/30/2022
storage Storage Blob Append https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md
Last updated 03/28/2022-+ ms.devlang: csharp, python
storage Storage Blob Client Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-client-management.md
description: Learn how to create and manage clients that interact with data reso
-+ Last updated 02/08/2023
storage Storage Blob Container Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-java.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 08/02/2023
storage Storage Blob Container Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md
description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library. -+ Last updated 11/30/2022
storage Storage Blob Container Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-python.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 08/02/2023
storage Storage Blob Container Create Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-typescript.md
description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library using TypeScript. -+ Last updated 03/21/2023
storage Storage Blob Container Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 07/25/2022
storage Storage Blob Container Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md
description: Learn how to delete and restore a blob container in your Azure Stor
-+ Last updated 08/02/2023
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
-+ Last updated 11/30/2022 ms.devlang: javascript
storage Storage Blob Container Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-python.md
description: Learn how to delete and restore a blob container in your Azure Stor
-+ Last updated 08/02/2023
storage Storage Blob Container Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md
-+ Last updated 03/21/2023 ms.devlang: TypeScript
storage Storage Blob Container Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md
-+ Last updated 03/28/2022
storage Storage Blob Container Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md
-+ Last updated 08/02/2023 ms.devlang: java
storage Storage Blob Container Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-javascript.md
-+ Last updated 05/01/2023 ms.devlang: javascript
storage Storage Blob Container Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md
-+ Last updated 08/02/2023 ms.devlang: python
storage Storage Blob Container Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-typescript.md
-+ Last updated 05/01/2023 ms.devlang: typescript
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
-+ Last updated 04/10/2023 ms.devlang: csharp
storage Storage Blob Container Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md
description: Learn how to set and retrieve system properties and store custom me
-+ Last updated 08/02/2023
storage Storage Blob Container Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md
-+ Last updated 11/30/2022
storage Storage Blob Container Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-python.md
description: Learn how to set and retrieve system properties and store custom me
-+ Last updated 08/02/2023
storage Storage Blob Container Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-typescript.md
-+ Last updated 03/21/2023
storage Storage Blob Container Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md
-+ Last updated 03/28/2022 ms.devlang: csharp
storage Storage Blob Container User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-dotnet.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/22/2023
storage Storage Blob Container User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-java.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/12/2023
storage Storage Blob Container User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-python.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/09/2023
storage Storage Blob Containers List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md
description: Learn how to list blob containers in your Azure Storage account usi
-+ Last updated 08/02/2023
storage Storage Blob Containers List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md
-+ Last updated 11/30/2022
storage Storage Blob Containers List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-python.md
description: Learn how to list blob containers in your Azure Storage account usi
-+ Last updated 08/02/2023
storage Storage Blob Containers List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-typescript.md
-+ Last updated 03/21/2023
storage Storage Blob Containers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md
-+ Last updated 03/28/2022 ms.devlang: csharp
storage Storage Blob Copy Async Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-dotnet.md
Last updated 04/11/2023-+ ms.devlang: csharp
storage Storage Blob Copy Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Copy Async Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Async Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Copy Async Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-java.md
Last updated 04/18/2023-+ ms.devlang: java
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-python.md
Last updated 04/28/2023-+ ms.devlang: python
storage Storage Blob Copy Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy Url Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-dotnet.md
Last updated 04/11/2023-+ ms.devlang: csharp
storage Storage Blob Copy Url Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Copy Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Url Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Copy Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Last updated 04/14/2023-+ ms.devlang: csharp
storage Storage Blob Create User Delegation Sas Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md
-+ Last updated 07/15/2022
storage Storage Blob Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
Last updated 05/11/2023-+ ms.devlang: csharp
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
-+ Last updated 07/12/2023 ms.devlang: csharp
storage Storage Blob Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
Last updated 04/21/2023-+ ms.devlang: javascript
storage Storage Blob Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Download Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-typescript.md
Last updated 06/21/2023-+ ms.devlang: typescript
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
Last updated 05/23/2023-+ ms.devlang: csharp
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
Last updated 09/13/2022-+ ms.devlang: javascript
storage Storage Blob Get Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
-+ Last updated 07/12/2023
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
-+ Last updated 11/30/2022
storage Storage Blob Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md
-+ Last updated 08/02/2023 ms.devlang: java
storage Storage Blob Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-javascript.md
-+ Last updated 05/01/2023 ms.devlang: javascript
storage Storage Blob Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-python.md
-+ Last updated 08/02/2023 ms.devlang: python
storage Storage Blob Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-typescript.md
-+ Last updated 05/01/2023 ms.devlang: typescript
storage Storage Blob Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease.md
-+ Last updated 04/10/2023 ms.devlang: csharp
storage Storage Blob Object Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-object-model.md
-+ Last updated 03/07/2023
storage Storage Blob Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
-+ Last updated 07/12/2023
storage Storage Blob Query Endpoint Srp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-query-endpoint-srp.md
-+ Last updated 06/07/2023
storage Storage Blob Tags Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Tags Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Tags Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Typescript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-typescript-get-started.md
-+ Last updated 03/21/2023
storage Storage Blob Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
Last updated 06/20/2023-+ ms.devlang: javascript
storage Storage Blob Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Upload Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-typescript.md
Last updated 06/21/2023-+ ms.devlang: typescript
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
Last updated 07/07/2023-+ ms.devlang: csharp
storage Storage Blob Use Access Tier Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md
-+ Last updated 07/03/2023 ms.devlang: csharp
storage Storage Blob Use Access Tier Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md
-+ Last updated 08/02/2023 ms.devlang: java
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
-+ Last updated 06/28/2023 ms.devlang: javascript
storage Storage Blob Use Access Tier Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-python.md
-+ Last updated 08/02/2023 ms.devlang: python
storage Storage Blob Use Access Tier Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md
-+ Last updated 06/28/2023 ms.devlang: typescript
storage Storage Blob User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-dotnet.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/22/2023
storage Storage Blob User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-java.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/12/2023
storage Storage Blob User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-python.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/06/2023
storage Storage Blobs List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md
description: Learn how to list blobs in your storage account using the Azure Sto
-+ Previously updated : 08/02/2023 Last updated : 08/16/2023 ms.devlang: java
To list the blobs in a storage account, call one of these methods:
- [listBlobs](/java/api/com.azure.storage.blob.BlobContainerClient) - [listBlobsByHierarchy](/java/api/com.azure.storage.blob.BlobContainerClient)
+### Manage how many results are returned
+
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for Java](/azure/developer/java/sdk/pagination).
+
+### Filter results with a prefix
+
+To filter the list of blobs, pass a string as the `prefix` parameter to [ListBlobsOptions.setPrefix(String prefix)](/java/api/com.azure.storage.blob.models.listblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
+ ### Flat listing versus hierarchical listing Blobs in Azure Storage are organized in a flat paradigm, rather than a hierarchical paradigm (like a classic file system). However, you can organize blobs into *virtual directories* in order to mimic a folder structure. A virtual directory forms part of the name of the blob and is indicated by the delimiter character.
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories aren't virtual. Instead, they're concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs aren't organized by virtual directory.
Page 3
Name: folderA/folderB/file3.txt, Is deleted? false ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br /><br />For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-java.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md
-+ Previously updated : 11/30/2022 Last updated : 08/16/2023 ms.devlang: javascript
Related functionality can be found in the following methods:
### Manage how many results are returned
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for JavaScript](/azure/developer/javascript/core/use-azure-sdk#asynchronous-paging-of-results).
### Filter results with a prefix
-To filter the list of blobs, specify a string for the `prefix` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
+To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
```javascript const listOptions = {
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory.
Flat listing: 5: folder2/sub1/c
Flat listing: 6: folder2/sub1/d ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br /><br />For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-python.md
description: Learn how to list blobs in your storage account using the Azure Sto
-+ Previously updated : 08/02/2023 Last updated : 08/16/2023 ms.devlang: python
To list the blobs in a container using a hierarchical listing, call the followin
- [ContainerClient.walk_blobs](/python/api/azure-storage-blob/azure.storage.blob.containerclient#azure-storage-blob-containerclient-walk-blobs) (along with the name, you can optionally include metadata, tags, and other information associated with each blob)
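As a hedged sketch of a hierarchical listing that recurses into each virtual directory (the connection string and container name are placeholders, and the article's own snippets may structure this differently):

```python
from azure.storage.blob import BlobPrefix, ContainerClient

def walk_container(container_client: ContainerClient, prefix: str = "", depth: int = 0) -> None:
    # delimiter="/" returns one level of virtual directories at a time.
    for item in container_client.walk_blobs(name_starts_with=prefix, delimiter="/"):
        if isinstance(item, BlobPrefix):
            print("  " * depth + f"[dir]  {item.name}")
            walk_container(container_client, prefix=item.name, depth=depth + 1)
        else:
            print("  " * depth + f"[blob] {item.name}")

# Placeholder connection string and container name.
container_client = ContainerClient.from_connection_string(
    "<connection-string>", "sample-container"
)
walk_container(container_client)
```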
+### Filter results with a prefix
+
+To filter the list of blobs, specify a string for the `name_starts_with` keyword argument. The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
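As a minimal sketch of that call (the connection string is a placeholder; `folderA/` matches the sample blob names shown later in this article):

```python
from azure.storage.blob import ContainerClient

# Placeholder connection string and container name.
container_client = ContainerClient.from_connection_string(
    "<connection-string>", "sample-container"
)

# Return only blobs whose names start with the prefix.
for blob in container_client.list_blobs(name_starts_with="folderA/"):
    print(blob.name)
```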
+ ### Flat listing versus hierarchical listing Blobs in Azure Storage are organized in a flat paradigm, rather than a hierarchical paradigm (like a classic file system). However, you can organize blobs into *virtual directories* in order to mimic a folder structure. A virtual directory forms part of the name of the blob and is indicated by the delimiter character.
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories aren't virtual. Instead, they're concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs aren't organized by virtual directory.
Name: folderA/file2.txt
Name: folderA/folderB/file3.txt ```
-You can also specify options to filter list results or show additional information. The following example lists blobs with a specified prefix, and also lists blob tags:
+You can also specify options to filter list results or show additional information. The following example lists blobs and blob tags:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-blobs.py" id="Snippet_list_blobs_flat_options":::
Name: folderA/file2.txt, Tags: None
Name: folderA/folderB/file3.txt, Tags: {'tag1': 'value1', 'tag2': 'value2'} ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br /><br />For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-python.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-typescript.md
-+ Previously updated : 03/21/2023 Last updated : 08/16/2023 ms.devlang: typescript
Related functionality can be found in the following methods:
### Manage how many results are returned
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for JavaScript](/azure/developer/javascript/core/use-azure-sdk#asynchronous-paging-of-results).
### Filter results with a prefix
-To filter the list of blobs, specify a string for the `prefix` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
+To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
```typescript const listOptions: ContainerListBlobsOptions = {
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory.
Flat listing: 5: folder2/sub1/c
Flat listing: 6: folder2/sub1/d ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br /><br />For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md
-+ Previously updated : 02/14/2023 Last updated : 08/16/2023 ms.devlang: csharp
To list the blobs in a storage account, call one of these methods:
### Manage how many results are returned
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for .NET](/dotnet/azure/sdk/pagination).
### Filter results with a prefix
By default, a listing operation returns blobs in a flat listing. In a flat listi
The following example lists the blobs in the specified container using a flat listing, with an optional segment size specified, and writes the blob name to a console window.
-If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ListBlobsFlatListing"::: The sample output is similar to:
Blob name: FolderA/FolderB/FolderC/blob2.txt
Blob name: FolderA/FolderB/FolderC/blob3.txt ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br /><br />For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-dotnet.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs Tune Upload Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-python.md
description: Learn how to tune your uploads and downloads for better performance
-+ Last updated 07/07/2023 ms.devlang: python
storage Storage Blobs Tune Upload Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download.md
description: Learn how to tune your uploads and downloads for better performance
-+ Last updated 12/09/2022 ms.devlang: csharp
storage Storage Create Geo Redundant Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-create-geo-redundant-storage.md
description: Use read-access geo-zone-redundant (RA-GZRS) storage to make your a
-+ Last updated 09/02/2022
storage Storage Encrypt Decrypt Blobs Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md
Title: Encrypt and decrypt blobs using Azure Key Vault
description: Learn how to encrypt and decrypt a blob using client-side encryption with Azure Key Vault. -+ Last updated 11/2/2022
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
description: In this quickstart, you will learn how to use the Azure Blob Storag
Last updated 11/09/2022-+ ms.devlang: csharp
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
description: In this quickstart, you learn how to use the Azure Blob Storage cli
Last updated 02/13/2023-+ ms.devlang: golang
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
Last updated 10/24/2022-+ ms.devlang: java
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
description: In this quickstart, you learn how to use the Azure Blob Storage for
Last updated 10/28/2022-+ ms.devlang: javascript
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Last updated 10/24/2022 -+ ms.devlang: python
storage Storage Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md
description: Learn about retry policies and how to implement them for Blob Storage. This article helps you set up a retry policy for Blob Storage requests using the Azure Storage client library for .NET. -+ Last updated 12/14/2022
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks
-description: Configure layered network security for your storage account by using Azure Storage firewalls and Azure Virtual Network.
+description: Configure layered network security for your storage account by using the Azure Storage firewall.
Previously updated : 08/01/2023 Last updated : 08/15/2023 -+ # Configure Azure Storage firewalls and virtual networks
-Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments demand, based on the type and subset of networks or resources that you use.
+Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments require. In this article, you learn how to configure the Azure Storage firewall to protect the data in your storage account at the network layer.
-When you configure network rules, only applications that request data over the specified set of networks or through the specified set of Azure resources can access a storage account. You can limit access to your storage account to requests that come from specified IP addresses, IP ranges, subnets in an Azure virtual network, or resource instances of some Azure services.
+> [!IMPORTANT]
+> Azure Storage firewall rules only apply to [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. [Control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are not subject to the restrictions specified in firewall rules.
+>
+> Some operations, such as blob container operations, can be performed through both the control plane and the data plane. So if you attempt to perform an operation such as listing containers from the Azure portal, the operation will succeed unless it is blocked by another mechanism. Attempts to access blob data from an application such as Azure Storage Explorer are controlled by the firewall restrictions.
+>
+> For a list of data plane operations, see the [Azure Storage REST API Reference](/rest/api/storageservices/).
+> For a list of control plane operations, see the [Azure Storage Resource Provider REST API Reference](/rest/api/storagerp/).
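To make the distinction concrete, here's a minimal Python sketch under stated assumptions: the subscription, resource group, and account names are placeholders, and the `azure-mgmt-storage` and `azure-storage-blob` packages provide the clients. The first loop lists containers through the control plane, which storage firewall rules don't block; the second lists them through the data plane, which they do.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()

# Control-plane request to the resource provider: not subject to the
# storage account firewall. All resource names are placeholders.
mgmt = StorageManagementClient(credential, "<subscription-id>")
for container in mgmt.blob_containers.list("<resource-group>", "<storage-account>"):
    print(container.name)

# Data-plane request to the blob endpoint: subject to firewall rules, so
# this fails if the client's network isn't an allowed source.
blob_service = BlobServiceClient(
    "https://<storage-account>.blob.core.windows.net", credential=credential
)
for container in blob_service.list_containers():
    print(container.name)
```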
-Storage accounts have a public endpoint that's accessible through the internet. You can also create [private endpoints for your storage account](storage-private-endpoints.md). Creating private endpoints assigns a private IP address from your virtual network to the storage account. It helps secure traffic between your virtual network and the storage account over a private link.
+## Configure network access to Azure Storage
-The Azure Storage firewall provides access control for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when you're using private endpoints. Your firewall configuration also enables trusted Azure platform services to access the storage account.
+You can control access to the data in your storage account over network endpoints, or through trusted services or resources, in any combination of the following:
-An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token. When you configure a blob container for anonymous public access, requests to read data in that container don't need to be authorized. The firewall rules remain in effect and will block anonymous traffic.
+- [Allow access from selected virtual network subnets using private endpoints](storage-private-endpoints.md).
+- [Allow access from selected virtual network subnets using service endpoints](#grant-access-from-a-virtual-network).
+- [Allow access from specific public IP addresses or ranges](#grant-access-from-an-internet-ip-range).
+- [Allow access from selected Azure resource instances](#grant-access-from-azure-resource-instances).
+- [Allow access from trusted Azure services](#grant-access-to-trusted-azure-services) (using [Manage exceptions](#manage-exceptions)).
+- [Configure exceptions for logging and metrics services](#manage-exceptions).
-Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service that operates within an Azure virtual network or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services.
+### About virtual network endpoints
-You can grant access to Azure services that operate from within a virtual network by allowing traffic from the subnet that hosts the service instance. You can also enable a limited number of scenarios through the exceptions mechanism that this article describes. To access data from the storage account through the Azure portal, you need to be on a machine within the trusted boundary (either IP or virtual network) that you set up.
+There are two types of virtual network endpoints for storage accounts:
+- [Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
+- [Private endpoints](storage-private-endpoints.md)
-## Scenarios
+Virtual network service endpoints are public and accessible via the internet. The Azure Storage firewall provides the ability to control access to your storage account over such public endpoints. When you enable public network access to your storage account, all incoming requests for data are blocked by default. Only applications that request data from allowed sources that you configure in your storage account firewall settings will be able to access your data. Sources can include the source IP address or virtual network subnet of a client, or an Azure service or resource instance through which clients or services access your data. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services, unless you explicitly allow access in your firewall configuration.
-To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific virtual networks. You can also configure rules to grant access to traffic from selected public internet IP address ranges, enabling connections from specific internet or on-premises clients. This configuration helps you build a secure network boundary for your applications.
+A private endpoint uses a private IP address from your virtual network to access a storage account over the Microsoft backbone network. With a private endpoint, traffic between your virtual network and the storage account is secured over a private link. Storage firewall rules only apply to the public endpoints of a storage account, not private endpoints. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint. You can use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to control traffic over private endpoints if you want to refine access rules. If you want to use private endpoints exclusively, you can use the firewall to block all access through the public endpoint.
-You can combine firewall rules that allow access from specific virtual networks and from public IP address ranges on the same storage account. You can apply storage firewall rules to existing storage accounts or when you create new storage accounts.
+To help you decide when to use each type of endpoint in your environment, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
-Storage firewall rules apply to the public endpoint of a storage account. You don't need any firewall access rules to allow traffic for private endpoints of a storage account. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint.
+### How to approach network security for your storage account
-> [!IMPORTANT]
-> When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior.
->
-> Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior.
+To secure your storage account and build a secure network boundary for your applications, follow these steps (a scripted sketch of the same configuration appears after the list):
+
+1. Start by disabling all public network access for the storage account under the **Public network access** setting in the storage account firewall.
+1. Where possible, configure private links to your storage account from private endpoints on virtual network subnets where the clients reside that require access to your data.
+1. If client applications require access over the public endpoints, change the **Public network access** setting to **Enabled from selected virtual networks and IP addresses**. Then, as needed:
-Network rules are enforced on all network protocols for Azure Storage, including REST and SMB. To access data by using tools such as the Azure portal, Azure Storage Explorer, and AzCopy, you must configure explicit network rules.
+ 1. Specify the virtual network subnets from which you want to allow access.
+ 1. Specify the public IP address ranges of clients from which you want to allow access, such as those on on-premises networks.
+ 1. Allow access from selected Azure resource instances.
+ 1. Add exceptions to allow access from trusted services required for operations such as backing up data.
+ 1. Add exceptions for logging and metrics.
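Here's the scripted sketch referenced above. It shows one possible way, using the `azure-mgmt-storage` Python library, to apply this kind of configuration; every resource name and the IP range are placeholders rather than values from this article.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Equivalent of "Enabled from selected virtual networks and IP addresses":
# deny by default, then allow specific subnets, public IP ranges, and
# trusted-service exceptions.
client.storage_accounts.update(
    "<resource-group>",
    "<storage-account>",
    {
        "network_rule_set": {
            "default_action": "Deny",
            "virtual_network_rules": [
                {"virtual_network_resource_id": "<subnet-resource-id>"}
            ],
            "ip_rules": [{"ip_address_or_range": "203.0.113.0/24"}],
            "bypass": "AzureServices, Logging, Metrics",
        }
    },
)
```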
After you apply network rules, they're enforced for all requests. SAS tokens that grant access to a specific IP address serve to limit the access of the token holder, but they don't grant new access beyond configured network rules.
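For instance, a SAS token can carry its own IP restriction. In this minimal sketch with the Python client library (the account, key, container, and blob names are placeholders), the token is honored only from the given range, and only where the account's network rules already admit the request:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="<storage-account>",
    container_name="sample-container",
    blob_name="sample-blob.txt",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
    ip="203.0.113.10-203.0.113.20",  # narrows, but never widens, network access
)
```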
-Network rules don't affect virtual machine (VM) disk traffic, including mount and unmount operations and disk I/O. Network rules help protect REST access to page blobs.
+## Restrictions and considerations
+
+Before implementing network security for your storage accounts, review the important restrictions and considerations discussed in this section.
+
+> [!div class="checklist"]
+>
+> - Azure Storage firewall rules only apply to [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. [Control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are not subject to the restrictions specified in firewall rules.
+> - Review the [Restrictions for IP network rules](#restrictions-for-ip-network-rules).
+> - To access data by using tools such as the Azure portal, Azure Storage Explorer, and AzCopy, you must be on a machine within the trusted boundary that you establish when configuring network security rules.
+> - Network rules are enforced on all network protocols for Azure Storage, including REST and SMB.
+> - Network rules don't affect virtual machine (VM) disk traffic, including mount and unmount operations and disk I/O, but they do help protect REST access to page blobs.
+> - You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by [creating an exception](#manage-exceptions). Firewall exceptions aren't applicable to managed disks, because Azure already manages them.
+> - Classic storage accounts don't support firewalls and virtual networks.
+> - If you delete a subnet that's included in a virtual network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+> - When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior. Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior.
+> - By design, access to a storage account from trusted services takes the highest precedence over other network access restrictions. If you set **Public network access** to **Disabled** after previously setting it to **Enabled from selected virtual networks and IP addresses**, any [resource instances](#grant-access-from-azure-resource-instances) and [exceptions](#manage-exceptions) that you previously configured, including [Allow Azure services on the trusted services list to access this storage account](#grant-access-to-trusted-azure-services), will remain in effect. As a result, those resources and services might still have access to the storage account.
+
+### Authorization
-Classic storage accounts don't support firewalls and virtual networks.
+Clients granted access via network rules must continue to meet the authorization requirements of the storage account to access the data. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token.
-You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by creating an exception. The [Manage exceptions](#manage-exceptions) section of this article documents this process. Firewall exceptions aren't applicable with managed disks, because Azure already manages them.
+When you configure a blob container for anonymous public access, requests to read data in that container don't need to be authorized, but the firewall rules remain in effect and will block anonymous traffic.
## Change the default network access rule
By default, storage accounts accept connections from clients on any network. You
You must set the default rule to **deny**, or network rules have no effect. However, changing this setting can affect your application's ability to connect to Azure Storage. Be sure to grant access to any allowed networks or set up access through a private endpoint before you change this setting. + ### [Portal](#tab/azure-portal) 1. Go to the storage account that you want to secure.
You can enable a [service endpoint](../../virtual-network/virtual-network-servic
Each storage account supports up to 200 virtual network rules. You can combine these rules with [IP network rules](#grant-access-from-an-internet-ip-range). > [!IMPORTANT]
-> If you delete a subnet that's included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+> When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior.
+>
+> Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior.
### Required permissions
Cross-region service endpoints for Azure Storage became generally available in A
Configuring service endpoints between virtual networks and service instances in a [paired region](../../best-practices-availability-paired-regions.md) can be an important part of your disaster recovery plan. Service endpoints allow continuity during a regional failover and access to read-only geo-redundant storage (RA-GRS) instances. Network rules that grant access from a virtual network to a storage account also grant access to any RA-GRS instance.
-When you're planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
+When you're planning for disaster recovery during a regional outage, create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
Local and cross-region service endpoints can't coexist on the same subnet. To replace existing service endpoints with cross-region ones, delete the existing `Microsoft.Storage` endpoints and re-create them as cross-region endpoints (`Microsoft.Storage.Global`).
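A minimal sketch of that replacement, with placeholder names: re-applying the subnet configuration with only the cross-region endpoint drops the local `Microsoft.Storage` one.

```azurepowershell
# Sketch with placeholder names: recreate the service endpoint as cross-region.
$Vnet = Get-AzVirtualNetwork -ResourceGroupName "<ResourceGroupName>" -Name "<VnetName>"
$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $Vnet -Name "<SubnetName>"
$Vnet | Set-AzVirtualNetworkSubnetConfig -Name "<SubnetName>" -AddressPrefix $Subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
```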
If you want to enable access to your storage account from a virtual network or s
6. Select **Save** to apply your changes.
+> [!IMPORTANT]
+> If you delete a subnet that's included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+ #### [PowerShell](#tab/azure-powershell) 1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps).
If you want to enable access to your storage account from a virtual network or s
You can create IP network rules to allow access from specific public internet IP address ranges. Each storage account supports up to 200 rules. These rules grant access to specific internet-based services and on-premises networks and block general internet traffic. A one-line sketch follows the restriction list below.
+### Restrictions for IP network rules
+ The following restrictions apply to IP address ranges: - IP network rules are allowed only for *public internet* IP addresses.
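As promised above, adding an IP network rule is a one-line operation; the range here is a placeholder and must be a public IPv4 address or CIDR range:

```azurepowershell
# Sketch with placeholder values: allow a public IPv4 CIDR range.
Add-AzStorageAccountNetworkRule -ResourceGroupName "<ResourceGroupName>" -Name "<StorageAccountName>" -IPAddressOrRange "16.17.18.0/24"
```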
To learn more about working with storage analytics, see [Use Azure Storage analy
## Next steps Learn more about [Azure network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).- Dig deeper into [Azure Storage security](../blobs/security-recommendations.md).
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
description: Learn how to install Azure Container Storage Preview on an Azure Ku
Previously updated : 08/03/2023 Last updated : 08/18/2023
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- Sign up for the public preview by completing the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp).- - This quickstart requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). - You'll need an AKS cluster with an appropriate [virtual machine type](install-container-storage-aks.md#vm-types). If you don't have one, see [Create an AKS cluster](install-container-storage-aks.md#create-aks-cluster). - You'll need the Kubernetes command-line client, `kubectl`. You can install it locally by running the `az aks install-cli` command.
+- Optional: We'd like input on how you plan to use Azure Container Storage. Please complete this [short survey](https://aka.ms/AzureContainerStoragePreviewSignUp).
+ ## Install Azure Container Storage Follow these instructions to install Azure Container Storage on your AKS cluster using an installation script.
Follow these instructions to install Azure Container Storage on your AKS cluster
| -g | --resource-group | The resource group name.| | -c | --cluster-name | The name of the cluster where Azure Container Storage is to be installed.| | -n | --nodepool-name | The name of the nodepool. Defaults to the first nodepool in the cluster.|
- | -r | --release-train | The release train for the installation. Defaults to prod.|
+ | -r | --release-train | The release train for the installation. Defaults to stable.|
For example:
storage Container Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md
description: An overview of Azure Container Storage Preview, a service built nat
Previously updated : 08/02/2023 Last updated : 08/14/2023
Azure Container Storage is a cloud-based volume management, deployment, and orchestration service built natively for containers. It integrates with Kubernetes, allowing you to dynamically and automatically provision persistent volumes to store data for stateful applications running on Kubernetes clusters.
-To sign up for Azure Container Storage Preview, complete the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp). To get started using Azure Container Storage, see [Install Azure Container Storage for use with AKS](container-storage-aks-quickstart.md) or watch the video.
+To get started using Azure Container Storage, see [Use Azure Container Storage Preview with Azure Kubernetes Service](container-storage-aks-quickstart.md) or watch the video.
+
+We'd like input on how you plan to use Azure Container Storage. Please complete this [short survey](https://aka.ms/AzureContainerStoragePreviewSignUp).
:::row::: :::column:::
- <iframe width="560" height="315" src="https://www.youtube.com/embed/I_2nCQ1FKTU" title="Get started with Azure Container Storage" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
+ > [!VIDEO https://www.youtube.com/embed/I_2nCQ1FKTU]
:::column-end::: :::column::: This video provides an introduction to Azure Container Storage, an end-to-end storage management and orchestration service for stateful applications. See how simple it is to create and manage volumes for production-scale stateful container applications. Learn how to optimize the performance of stateful workloads on Azure Kubernetes Service (AKS) to effectively scale across storage services while providing a cost-effective container-native experience.
storage Install Container Storage Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md
description: Learn how to install Azure Container Storage Preview for use with A
Previously updated : 08/02/2023 Last updated : 08/14/2023
- Take note of your Azure subscription ID. We recommend using a subscription on which you have an [Owner](../../role-based-access-control/built-in-roles.md#owner) role. If you don't have access to one, you can still proceed, but you'll need admin assistance to complete the steps in this article. -- Sign up for the public preview by completing the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp).- - This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using the Bash environment in Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. For more information, see [Quickstart for Bash in Azure Cloud Shell](../../cloud-shell/quickstart.md).
+- Optional: We'd like input on how you plan to use Azure Container Storage. Please complete this [short survey](https://aka.ms/AzureContainerStoragePreviewSignUp).
+ > [!NOTE] > Instead of following the steps in this article, you can install Azure Container Storage Preview using a provided installation script. See [Quickstart: Use Azure Container Storage Preview with Azure Kubernetes Service](container-storage-aks-quickstart.md).
storage Use Container Storage With Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-elastic-san.md
description: Configure Azure Container Storage Preview for use with Azure Elasti
Previously updated : 07/03/2023 Last updated : 08/14/2023
## Regional availability
-Azure Container Storage Preview is only available in the following Azure regions:
--- East US-- West Europe-- West US 2-- West US 3
+Azure Container Storage Preview is only available in the following Azure regions: East US, East US 2, West US 2, West US 3, South Central US, Southeast Asia, Australia East, West Europe, North Europe, UK South, Sweden Central, and France Central.
## Create a storage pool
storage Use Container Storage With Local Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md
description: Configure Azure Container Storage Preview for use with Ephemeral Di
Previously updated : 07/03/2023 Last updated : 08/14/2023
## Regional availability
-Azure Container Storage Preview is only available in the following Azure regions:
--- East US-- West Europe-- West US 2-- West US 3
+Azure Container Storage Preview is only available in the following Azure regions: East US, East US 2, West US 2, West US 3, South Central US, Southeast Asia, Australia East, West Europe, North Europe, UK South, Sweden Central, and France Central.
## Create a storage pool
storage Use Container Storage With Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-managed-disks.md
description: Configure Azure Container Storage Preview for use with Azure manage
Previously updated : 07/03/2023 Last updated : 08/14/2023
## Regional availability
-Azure Container Storage Preview is only available in the following Azure regions:
--- East US-- West Europe-- West US 2-- West US 3
+Azure Container Storage Preview is only available in the following Azure regions: East US, East US 2, West US 2, West US 3, South Central US, Southeast Asia, Australia East, West Europe, North Europe, UK South, Sweden Central, and France Central.
## Create a storage pool
storage Elastic San Connect Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-aks.md
description: Learn how to connect to an Azure Elastic SAN Preview volume an Azur
Previously updated : 04/28/2023 Last updated : 07/11/2023
The iSCSI CSI driver for Kubernetes is [licensed under the Apache 2.0 license](h
## Prerequisites -- Have an [Azure Elastic SAN](elastic-san-create.md) with volumes - Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) - Meet the [compatibility requirements](https://github.com/kubernetes-csi/csi-driver-iscsi/blob/master/README.md#container-images--kubernetes-compatibility) for the iSCSI CSI driver
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
## Limitations
After deployment, check the pods status to verify that the driver installed.
```bash kubectl -n kube-system get pod -o wide -l app=csi-iscsi-node ```
-### Configure Elastic SAN Volume Group
-
-To connect an Elastic SAN volume to an AKS cluster, you need to configure Elastic SAN Volume Group to allow access from AKS node pool subnets, follow [Configure Elastic SAN networking Preview](elastic-san-networking.md)
### Get volume information
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 04/24/2023 Last updated : 07/11/2023
In this article, you'll add the Storage service endpoint to an Azure virtual net
## Prerequisites -- Complete [Deploy an Elastic SAN Preview](elastic-san-create.md)-- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.
+- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)]
-## Networking configuration
-
-To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets.
-
-### Enable Storage service endpoint
-
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
-
-> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
--
-# [PowerShell](#tab/azure-powershell)
-
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
-
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
-
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
-```
--
-### Configure volume group networking
-
-Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints).
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your SAN and select **Volume groups**.
-1. Select a volume group and select **Create**.
-1. Add an existing virtual network and subnet and select **Save**.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
-
-Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
-
-```
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
-virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)'
-
-az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
-```
-- ## Connect to a volume You can create either single or multiple sessions to every Elastic SAN volume based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows.
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 04/24/2023 Last updated : 07/11/2023
In this article, you'll add the Storage service endpoint to an Azure virtual net
## Prerequisites -- Complete [Deploy an Elastic SAN Preview](elastic-san-create.md)-- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.
+- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)]
-## Configure networking
-
-To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets.
-
-### Enable Storage service endpoint
-
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
-
-> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
--
-# [PowerShell](#tab/azure-powershell)
-
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
-
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
-
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
-```
--
-### Configure volume group networking
-
-Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints).
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your SAN and select **Volume groups**.
-1. Select a volume group and select **Create**.
-1. Add an existing virtual network and subnet and select **Save**.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
-
-Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
-
-```
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
-virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)'
-
-az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
-```
-- ## Connect to a volume You can create either single or multiple sessions to every Elastic SAN volume based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows.
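As a rough sketch only (not the connect script the article provides), the generic Windows iSCSI initiator cmdlets below establish a single multipath-enabled session to a volume. The IQN and portal address are placeholders, and session-count and queue-depth tuning are out of scope here:

```azurepowershell
# Sketch with placeholder values: one iSCSI session to an Elastic SAN volume.
$TargetIqn = "<VolumeIqn>"        # The volume's iSCSI Qualified Name
$PortalIp  = "<TargetPortalIp>"   # The volume's target portal IP address

New-IscsiTargetPortal -TargetPortalAddress $PortalIp
Connect-IscsiTarget -NodeAddress $TargetIqn -TargetPortalAddress $PortalIp -IsMultipathEnabled $true
```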
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
description: Learn how to deploy an Azure Elastic SAN (preview) with the Azure p
Previously updated : 08/14/2023 Last updated : 08/16/2023
This article explains how to deploy and configure an elastic storage area networ
# [PowerShell](#tab/azure-powershell)
+Replace all placeholder text with your own values when assigning values to variables, and use the same variables in all of the examples in this article:
+
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are to be deployed. |
+| `<ElasticSanName>` | The name of the Elastic SAN to be created. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. |
+| `<VolumeName>` | The name of the Elastic SAN Volume to be created. |
+| `<Location>` | The region where new resources will be created. |
+ The following command creates an Elastic SAN that uses locally redundant storage. To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`. ```azurepowershell ## Variables
-$rgName = "yourResourceGroupName"
+$RgName = "<ResourceGroupName>"
## Select the same availability zone as where you plan to host your workload
-$zone = 1
+$Zone = 1
## Select the same region as your Azure virtual network
-$region = "yourRegion"
-$sanName = "desiredSANName"
-$volGroupName = "desiredVolumeGroupName"
-$volName = "desiredVolumeName"
+$Location = "<Location>"
+$EsanName = "<ElasticSanName>"
+$EsanVgName = "<ElasticSanVolumeGroupName>"
+$VolumeName = "<VolumeName>"
## Create the SAN, itself
-New-AzElasticSAN -ResourceGroupName $rgName -Name $sanName -AvailabilityZone $zone -Location $region -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS
+New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -AvailabilityZone $Zone -Location $Location -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS
``` # [Azure CLI](#tab/azure-cli)
+Replace all placeholder text with your own values when assigning values to variables, and use the same variables in all of the examples in this article:
+
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are to be deployed. |
+| `<ElasticSanName>` | The name of the Elastic SAN to be created. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. |
+| `<VolumeName>` | The name of the Elastic SAN Volume to be created. |
+| `<Location>` | The region where new resources will be created. |
+ The following command creates an Elastic SAN that uses locally redundant storage. To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`. ```azurecli ## Variables
-sanName="yourSANNameHere"
-resourceGroupName="yourResourceGroupNameHere"
-sanLocation="desiredRegion"
-volumeGroupName="desiredVolumeGroupName"
+RgName="<ResourceGroupName>"
+EsanName="<ElasticSanName>"
+EsanVgName="<ElasticSanVolumeGroupName>"
+Location="<Location>"
-az elastic-san create -n $sanName -g $resourceGroupName -l $sanLocation --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}"
+az elastic-san create -n $EsanName -g $RgName -l $Location --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}"
```
Now that you've configured the basic settings and provisioned your storage, you
# [PowerShell](#tab/azure-powershell)
+The following sample command creates an Elastic SAN volume group in the Elastic SAN you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
```azurepowershell ## Create the volume group, this script only creates one.
-New-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSANName $sanName -Name $volGroupName
+New-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSANName $EsanName -Name $EsanVgName
``` # [Azure CLI](#tab/azure-cli)
+The following sample command creates an Elastic SAN volume group in the Elastic SAN you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
+ ```azurecli
-az elastic-san volume-group create --elastic-san-name $sanName -g $resourceGroupName -n $volumeGroupName
+az elastic-san volume-group create --elastic-san-name $EsanName -g $RgName -n $EsanVgName
```
Volumes are usable partitions of the SAN's total capacity, you must allocate a p
# [PowerShell](#tab/azure-powershell)
-In this article, we provide you the command to create a single volume. To create a batch of volumes, see [Create multiple Elastic SAN volumes](elastic-san-batch-create-sample.md).
+The following sample command creates a single volume in the Elastic SAN volume group you created previously. To create a batch of volumes, see [Create multiple Elastic SAN volumes](elastic-san-batch-create-sample.md). Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
> [!IMPORTANT] > The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created.
-Replace `volumeName` with the name you'd like the volume to use, then run the following script:
+Then run the following script:
```azurepowershell ## Create the volume, this command only creates one.
-New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -VolumeGroupName $volGroupName -Name $volName -sizeGiB 2000
+New-AzElasticSanVolume -ResourceGroupName $RgName -ElasticSanName $EsanName -VolumeGroupName $EsanVgName -Name $VolumeName -sizeGiB 2000
``` # [Azure CLI](#tab/azure-cli)
New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -Volu
> [!IMPORTANT] > The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created.
-Replace `$volumeName` with the name you'd like the volume to use, then run the following script:
+The following sample command creates an Elastic SAN volume in the Elastic SAN volume group you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
```azurecli
-az elastic-san volume create --elastic-san-name $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --size-gib 2000
+az elastic-san volume create --elastic-san-name $EsanName -g $RgName -v $EsanVgName -n $VolumeName --size-gib 2000
```
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
description: An overview of Azure Elastic SAN Preview, a service that enables yo
Previously updated : 05/02/2023 Last updated : 08/15/2023
The status of items in this table may change over time.
| Encryption at rest| ✔️ | | Encryption in transit| ⛔ | | [LRS or ZRS redundancy types](elastic-san-planning.md#redundancy)| ✔️ |
-| Private endpoints | ⛔ |
+| Private endpoints | ✔️ |
| Grant network access to specific Azure virtual networks| ✔️ | | Soft delete | ⛔ | | Snapshots | ⛔ |
storage Elastic San Networking Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md
+
+ Title: Azure Elastic SAN networking Preview concepts
+description: An overview of Azure Elastic SAN Preview networking options, including storage service endpoints, private endpoints, and iSCSI.
+++ Last updated : 08/16/2023++++
+# Elastic SAN Preview networking
+
+Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require. This article describes the options for allowing users and applications access to Elastic SAN volumes from an [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
+
+You can configure Elastic SAN volume groups to only allow access over specific endpoints on specific virtual network subnets. The allowed subnets can belong to a virtual network in the same subscription or in a different subscription, including subscriptions that belong to a different Azure Active Directory tenant. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
+
+Depending on your configuration, applications on peered virtual networks or on-premises networks can also access volumes in the group. On-premises networks must be connected to the virtual network by a VPN or ExpressRoute. For more details about virtual network configurations, see [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
+
+There are two types of virtual network endpoints you can configure to allow access to an Elastic SAN volume group:
+
+- [Storage service endpoints](#storage-service-endpoints)
+- [Private endpoints](#private-endpoints)
+
+To decide which option is best for you, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints). Generally, you should use private endpoints instead of service endpoints since Private Link offers better capabilities. For more information, see [Azure Private Link](../../private-link/private-endpoint-overview.md).
+
+After configuring endpoints, you can configure network rules to further control access to your Elastic SAN volume group. Once the endpoints and network rules have been configured, clients can connect to volumes in the group to process their workloads.
+
+## Storage service endpoints
+
+[Azure Virtual Network (VNet) service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) provide secure and direct connectivity to Azure services using an optimized route over the Azure backbone network. Service endpoints allow you to secure your critical Azure service resources so only specific virtual networks can access them.
+
+[Cross-region service endpoints for Azure Storage](../common/storage-network-security.md#azure-storage-cross-region-service-endpoints) work between virtual networks and storage service instances in any region. With cross-region service endpoints, subnets no longer use a public IP address to communicate with any storage account, including those in another region. Instead, all the traffic from a subnet to a storage account uses a private IP address as a source IP.
+
+> [!TIP]
+> The original local service endpoints, identified as **Microsoft.Storage**, are still supported for backward compatibility, but you should create cross-region endpoints, identified as **Microsoft.Storage.Global**, for new deployments.
+>
+> Cross-region service endpoints and local ones can't coexist on the same subnet. To use cross-region service endpoints, you might have to delete existing **Microsoft.Storage** endpoints and recreate them as **Microsoft.Storage.Global**.
+
+## Private endpoints
+
+> [!IMPORTANT]
+> Private endpoints for Elastic SAN Preview are currently only supported in France Central.
+
+Azure [Private Link](../../private-link/private-link-overview.md) enables you to access an Elastic SAN volume group securely over a [private endpoint](../../private-link/private-endpoint-overview.md) from a virtual network subnet. Traffic between your virtual network and the service traverses the Microsoft backbone network, eliminating the risk of exposing your service to the public internet. An Elastic SAN private endpoint uses a set of IP addresses from the subnet address space for each volume group. The maximum number used per endpoint is 20.
+
+Private endpoints have several advantages over service endpoints. For a complete comparison of private endpoints to service endpoints, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
+
+Traffic between the virtual network and the Elastic SAN is routed over an optimal path on the Azure backbone network. Unlike service endpoints, you don't need to configure network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints.
+
+For details on how to configure private endpoints, see [Enable private endpoint](elastic-san-networking.md#configure-a-private-endpoint).
+
+## Virtual network rules
+
+To further secure access to your Elastic SAN volumes, you can create virtual network rules for volume groups configured with service endpoints to allow access from specific subnets. You don't need network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints.
+
+Each volume group supports up to 200 virtual network rules. If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
+
+Clients granted access via these network rules must also be granted the appropriate permissions to the Elastic SAN volume group.
+
+To learn how to define network rules, see [Managing virtual network rules](elastic-san-networking.md#configure-virtual-network-rules).
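For orientation, here's a minimal PowerShell sketch of defining such a rule, using the same cmdlets covered in that article; all names are placeholders:

```azurepowershell
# Sketch with placeholder names: allow a subnet to reach a volume group.
$Vnet = Get-AzVirtualNetwork -ResourceGroupName "<ResourceGroupName>" -Name "<VnetName>"
$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $Vnet -Name "<SubnetName>"
$Rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $Subnet.Id -Action Allow
Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName "<ResourceGroupName>" -ElasticSanName "<ElasticSanName>" -VolumeGroupName "<VolumeGroupName>" -NetworkAclsVirtualNetworkRule $Rule
```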
+
+## Client connections
+
+After you have enabled the desired endpoints and granted access in your network rules, you can connect to the appropriate Elastic SAN volumes using the iSCSI protocol. For more details on how to configure client connections, see [Configure access to Elastic SAN volumes from clients](elastic-san-networking.md#configure-client-connections).
+
+> [!NOTE]
+> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection will retry for 90 seconds until terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
+
+## Next steps
+
+[Configure Elastic SAN networking Preview](elastic-san-networking.md)
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
Title: Azure Elastic SAN networking Preview
-description: An overview of Azure Elastic SAN Preview, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols.
+ Title: How to configure Azure Elastic SAN Preview networking
+description: How to configure networking for Azure Elastic SAN Preview, a service that enables you to create and use storage area networks (SANs) in the cloud that you connect to over the iSCSI protocol.
Previously updated : 05/04/2023 Last updated : 08/17/2023 -+
-# Configure Elastic SAN networking Preview
+# Configure networking for an Elastic SAN Preview
-Azure Elastic storage area network (SAN) allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments demand, based on the type and subset of networks or resources used. When network rules are configured, only applications requesting data over the specified set of networks or through the specified set of Azure resources that can access an Elastic SAN Preview. Access to your SAN's volumes are limited to resources in subnets in the same Azure Virtual Network that your SAN's volume group is configured with.
+Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require.
-Volume groups are configured to allow access only from specific subnets. The allowed subnets may belong to a virtual network in the same subscription, or those in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
+This article describes how to configure your Elastic SAN to allow access from your Azure virtual network infrastructure.
-You must enable a [Service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Storage within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the SAN that allow requests to be received from specific subnets in a virtual network. Clients granted access via these network rules must continue to meet the authorization requirements of the Elastic SAN to access the data.
+You can configure your Elastic SAN volume groups to allow access only from endpoints on specific virtual network subnets. The allowed subnets may belong to virtual networks in the same subscription, or those in a different subscription, including a subscription belonging to a different Azure Active Directory tenant.
-Each volume group supports up to 200 virtual network rules.
+To configure network access to your Elastic SAN:
-> [!IMPORTANT]
-> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
+> [!div class="checklist"]
+> - [Configure a virtual network endpoint](#configure-a-virtual-network-endpoint).
+> - [Configure virtual network rules](#configure-virtual-network-rules) to control the source and type of traffic to your Elastic SAN.
+> - [Configure client connections](#configure-client-connections).
+
+## Configure a virtual network endpoint
+
+You can allow access to your Elastic SAN volume groups from two types of Azure virtual network endpoints:
+
+- [Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
+- [Private endpoints](../../private-link/private-endpoint-overview.md)
+
+To decide which type of endpoint works best for you, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
+
+Each volume group can be configured to allow access from either public storage service endpoints or private endpoints, but not both at the same time. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
-## Enable Storage service endpoint
+The process for enabling each type of endpoint follows:
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+- [Configure an Azure Storage service endpoint](#configure-an-azure-storage-service-endpoint)
+- [Configure a private endpoint](#configure-a-private-endpoint)
+
+### Configure an Azure Storage service endpoint
+
+You can configure an Azure Storage service endpoint from the virtual network where access is required. You must have permission to the `Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role to configure a service endpoint.
> [!NOTE] > Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal. # [Portal](#tab/azure-portal)+ 1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
+1. Select **+ Add**.
+1. On the **Add service endpoints** screen:
+ 1. For **Service** select **Microsoft.Storage.Global** to add a [cross-region service endpoint](../common/storage-network-security.md#azure-storage-cross-region-service-endpoints).
+
+ > [!NOTE]
+ > You might see **Microsoft.Storage** listed as an available storage service endpoint. That option is for intra-region endpoints which exist for backward compatibility only. Always use cross-region endpoints unless you have a specific reason for using intra-region ones.
+
+1. For **Subnets** select all the subnets where you want to allow access.
+1. Select **Add**.
:::image type="content" source="media/elastic-san-create/elastic-san-service-endpoint.png" alt-text="Screenshot of the virtual network service endpoint page, adding the storage service endpoint." lightbox="media/elastic-san-create/elastic-san-service-endpoint.png"::: # [PowerShell](#tab/azure-powershell)
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
+Use this sample code to create a storage service endpoint for your Elastic SAN volume group with PowerShell.
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
+```powershell
+# Define some variables
+$RgName = "<ResourceGroupName>"
+$VnetName = "<VnetName>"
+$SubnetName = "<SubnetName>"
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
+# Get the virtual network and subnet
+$Vnet = Get-AzVirtualNetwork -ResourceGroupName $RgName -Name $VnetName
+$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $Vnet -Name $SubnetName
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
+# Enable the storage service endpoint
+$Vnet | Set-AzVirtualNetworkSubnetConfig -Name $SubnetName -AddressPrefix $Subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
``` # [Azure CLI](#tab/azure-cli)
+Use this sample code to create a storage service endpoint for your Elastic SAN volume group with the Azure CLI.
+ ```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
+# Define some variables
+RgName="<ResourceGroupName>"
+VnetName="<VnetName>"
+SubnetName="<SubnetName>"
+
+# Enable the storage service endpoint
+az network vnet subnet update --resource-group $RgName --vnet-name $VnetName --name $SubnetName --service-endpoints "Microsoft.Storage.Global"
```+
-### Available virtual network regions
+### Configure a private endpoint
+
+> [!IMPORTANT]
+> - Private endpoints for Elastic SAN Preview are currently only supported in France Central.
+>
+> - Before you can create a private endpoint connection to a volume group, it must contain at least one volume.
+
+There are two steps involved in configuring a private endpoint connection:
+
+> [!div class="checklist"]
+> - Creating the endpoint and the associated connection.
+> - Approving the connection.
-Service endpoints for Azure Storage work between virtual networks and service instances in any region. They also work between virtual networks and service instances in [paired regions](../../availability-zones/cross-region-replication-azure.md) to allow continuity during a regional failover. When planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your zone-redundant SANs.
+To create a private endpoint for an Elastic SAN volume group, you must have the [Elastic SAN Volume Group Owner](../../role-based-access-control/built-in-roles.md#elastic-san-volume-group-owner) role. To approve a new private endpoint connection, you must have permission to the [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftelasticsan) `Microsoft.ElasticSan/elasticSans/PrivateEndpointConnectionsApproval/action`. Permission for this operation is included in the [Elastic SAN Network Admin](../../role-based-access-control/built-in-roles.md#elastic-san-owner) role, but it can also be granted via a custom Azure role.
+
+If you create the endpoint from a user account that has all of the necessary roles and permissions required for creation and approval, the process can be completed in one step. If not, it will require two separate steps by two different users.
+
+The Elastic SAN and the virtual network may be in different resource groups, regions and subscriptions, including subscriptions that belong to different Azure AD tenants. In these examples, we are creating the private endpoint in the same resource group as the virtual network.
+
+# [Portal](#tab/azure-portal)
+
+Currently, you can only configure a private endpoint using PowerShell or the Azure CLI.
+
+# [PowerShell](#tab/azure-powershell)
-#### Azure Storage cross-region service endpoints
+Deploying a private endpoint for an Elastic SAN Volume group using PowerShell involves these steps:
-Cross-region service endpoints for Azure became generally available in April of 2023. With cross-region service endpoints, subnets will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect.
+1. Get the subnet from which applications will connect.
+1. Get the Elastic SAN Volume Group.
+1. Create a private link service connection using the volume group as input.
+1. Create the private endpoint using the subnet and the private link service connection as input.
+1. **Optional** (*if you are using the two-step process of creation, then approval*): The Elastic SAN Network Admin approves the connection.
-To use cross-region service endpoints, it might be necessary to delete existing **Microsoft.Storage** endpoints and recreate them as cross-region (**Microsoft.Storage.Global**).
+Use this sample code to create a private endpoint for your Elastic SAN volume group with PowerShell. Replace all placeholder text with your own values:
-## Managing virtual network rules
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. |
+| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. |
+| `<VnetName>` | The name of the virtual network that includes the subnet. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. |
+| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. |
+| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. |
+| `<PrivateEndpointName>` | The name of the new private endpoint. |
+| `<Location>` | The region where the new private endpoint will be created. |
+| `<ApprovalDesc>` | The description provided for the approval of the private endpoint connection. |
+
+```powershell
+# Set the resource group name.
+$RgName = "<ResourceGroupName>"
+
+# Get the virtual network and subnet, which is input to creating the private endpoint.
+$VnetName = "<VnetName>"
+$SubnetName = "<SubnetName>"
+
+$Vnet = Get-AzVirtualNetwork -Name $VnetName -ResourceGroupName $RgName
+$Subnet = $Vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq $SubnetName}
+
+# Get the Elastic SAN, which is input to creating the private endpoint service connection.
+$EsanName = "<ElasticSanName>"
+$EsanVgName = "<ElasticSanVolumeGroupName>"
+
+$Esan = Get-AzElasticSan -Name $EsanName -ResourceGroupName $RgName
+
+# Create the private link service connection, which is input to creating the private endpoint.
+$PLSvcConnectionName = "<PrivateLinkSvcConnectionName>"
+$EsanPlSvcConn = New-AzPrivateLinkServiceConnection -Name $PLSvcConnectionName -PrivateLinkServiceId $Esan.Id -GroupId $EsanVgName
+
+# Create the private endpoint.
+$EndpointName = '<PrivateEndpointName>'
+$Location = '<Location>'
+$PeArguments = @{
+ Name = $EndpointName
+ ResourceGroupName = $RgName
+ Location = $Location
+ Subnet = $Subnet
+ PrivateLinkServiceConnection = $EsanPlSvcConn
+}
+New-AzPrivateEndpoint @PeArguments # -ByManualRequest # (Uncomment the `-ByManualRequest` parameter if you are using the two-step process).
+```
+
+Use this sample code to approve the private link service connection if you are using the two-step process. Use the same variables from the previous code sample:
+
+```powershell
+# Get the private endpoint and associated connection.
+$PrivateEndpoint = Get-AzPrivateEndpoint -Name $EndpointName -ResourceGroupName $RgName
+$PeConnArguments = @{
+ ServiceName = $EsanName
+ ResourceGroupName = $RgName
+ PrivateLinkResourceType = "Microsoft.ElasticSan/elasticSans"
+}
+$EndpointConnection = Get-AzPrivateEndpointConnection @PeConnArguments |
+Where-Object {($_.PrivateEndpoint.Id -eq $PrivateEndpoint.Id)}
+
+# Approve the private link service connection.
+$ApprovalDesc="<ApprovalDesc>"
+Approve-AzPrivateEndpointConnection @PeConnArguments -Name $EndpointConnection.Name -Description $ApprovalDesc
+
+# Get the private endpoint connection anew and verify the connection status.
+$EndpointConnection = Get-AzPrivateEndpointConnection @PeConnArguments |
+Where-Object {($_.PrivateEndpoint.Id -eq $PrivateEndpoint.Id)}
+$EndpointConnection.PrivateLinkServiceConnectionState
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Deploying a private endpoint for an Elastic SAN Volume group using the Azure CLI involves three steps:
+
+1. Get the private connection resource ID of the Elastic SAN.
+1. Create the private endpoint using inputs:
+ 1. Private connection resource ID
+ 1. Volume group name
+ 1. Resource group name
+ 1. Subnet name
+ 1. Vnet name
+1. **Optional** (*if you are using the two-step process of creation, then approval*): The Elastic SAN Network Admin approves the connection.
+
+Use this sample code to create a private endpoint for your Elastic SAN volume group with the Azure CLI. Uncomment the `--manual-request` parameter if you are using the two-step process. Replace all placeholder text with your own values:
+
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. |
+| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. |
+| `<VnetName>` | The name of the virtual network that includes the subnet. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. |
+| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. |
+| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. |
+| `<PrivateEndpointName>` | The name of the new private endpoint. |
+| `<Location>` | The region where the new private endpoint will be created. |
+| `<ApprovalDesc>` | The description provided for the approval of the private endpoint connection. |
+
+```azurecli
+# Define some variables.
+RgName="<ResourceGroupName>"
+VnetName="<VnetName>"
+SubnetName="<SubnetName>"
+EsanName="<ElasticSanName>"
+EsanVgName="<ElasticSanVolumeGroupName>"
+EndpointName="<PrivateEndpointName>"
+PLSvcConnectionName="<PrivateLinkSvcConnectionName>"
+Location="<Location>"
+ApprovalDesc="<ApprovalDesc>"
+
+# Get the id of the Elastic SAN.
+id=$(az elastic-san show \
+ --elastic-san-name $EsanName \
+ --resource-group $RgName \
+ --query 'id' \
+ --output tsv)
+
+# Create the private endpoint.
+az network private-endpoint create \
+ --connection-name $PLSvcConnectionName \
+ --name $EndpointName \
+ --private-connection-resource-id $id \
+ --resource-group $RgName \
+ --vnet-name $VnetName \
+ --subnet $SubnetName \
+ --location $Location \
+ --group-id $EsanVgName # --manual-request
+
+# Verify the status of the private endpoint connection.
+PLConnectionName=$(az network private-endpoint-connection list \
+ --name $EsanName \
+ --resource-group $RgName \
+ --type Microsoft.ElasticSan/elasticSans \
+ --query "[?properties.groupIds[0]=='$EsanVgName'].name" -o tsv)
+
+az network private-endpoint-connection show \
+ --resource-name $EsanName \
+ --resource-group $RgName \
+ --type Microsoft.ElasticSan/elasticSans \
+ --name $PLConnectionName
+```
+
+Use this sample code to approve the private link service connection if you are using the two-step process. Use the same variables from the previous code sample:
+
+```azurecli
+az network private-endpoint-connection approve \
+ --resource-name $EsanName \
+ --resource-group $RgName \
+ --name $PLConnectionName \
+ --type Microsoft.ElasticSan/elasticSans \
+ --description $ApprovalDesc
+```
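+
+To confirm the result of the approval, you can query the connection state again. This sketch reuses the variables and the `show` command from the previous sample:
+
+```azurecli
+# Print only the connection status; it should now read "Approved".
+az network private-endpoint-connection show \
+    --resource-name $EsanName \
+    --resource-group $RgName \
+    --type Microsoft.ElasticSan/elasticSans \
+    --name $PLConnectionName \
+    --query 'properties.privateLinkServiceConnectionState.status' \
+    --output tsv
+```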
+++
+## Configure virtual network rules
You can manage virtual network rules for volume groups through the Azure portal, PowerShell, or CLI.
-> [!NOTE]
+> [!IMPORTANT]
> If you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants.
+>
+> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
### [Portal](#tab/azure-portal)
You can manage virtual network rules for volume groups through the Azure portal,
- List virtual network rules. ```azurepowershell
- $Rules = Get-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSanName $sanName -Name $volGroupName
+ $Rules = Get-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSanName $sanName -Name $volGroupName
$Rules.NetworkAclsVirtualNetworkRule ```
You can manage virtual network rules for volume groups through the Azure portal,
- Add a network rule for a virtual network and subnet. ```azurepowershell
- $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
+ $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $Subnet.Id -Action Allow
- Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
+ Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $RgName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
``` > [!TIP]
You can manage virtual network rules for volume groups through the Azure portal,
- List information from a particular volume group, including their virtual network rules. ```azurecli
- az elastic-san volume-group show -e $sanName -g $resourceGroupName -n $volumeGroupName
+ az elastic-san volume-group show -e $sanName -g $RgName -n $volumeGroupName
``` - Enable service endpoint for Azure Storage on an existing virtual network and subnet.
You can manage virtual network rules for volume groups through the Azure portal,
```azurecli # First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
- virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)'
+ virtualNetworkListLength=$(az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $RgName --query 'length(networkAcls.virtualNetworkRules)')
- az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+ az elastic-san volume-group update -e $sanName -g $RgName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/default, action:Allow}]}"
``` - Remove a network rule. The following command removes the first network rule; modify it to remove the network rule you'd like. ```azurecli
- az elastic-san volume-group update -e $sanName -g $resourceGroupName -n $volumeGroupName --network-acls virtual-network-rules[1]=null
+ az elastic-san volume-group update -e $sanName -g $RgName -n $volumeGroupName --network-acls virtual-network-rules[1]=null
```
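+
+To confirm the result of an add or remove, you can list just the rules for the volume group. A small sketch reusing the same variables:
+
+```azurecli
+# Show only the virtual network rules currently applied to the volume group.
+az elastic-san volume-group show \
+    -e $sanName \
+    -g $RgName \
+    -n $volumeGroupName \
+    --query 'networkAcls.virtualNetworkRules'
+```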
+## Configure client connections
+
+After you have enabled the desired endpoints and granted access in your network rules, you are ready to configure your clients to connect to the appropriate Elastic SAN volumes.
+
+> [!NOTE]
+> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection retries for 90 seconds before terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
+ ## Next steps
-[Plan for deploying an Elastic SAN Preview](elastic-san-planning.md)
+- [Connect Azure Elastic SAN Preview volumes to an Azure Kubernetes Service cluster](elastic-san-connect-aks.md)
+- [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md)
+- [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md)
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
description: Understand planning for an Azure Elastic SAN deployment. Learn abou
Previously updated : 05/02/2023 Last updated : 06/09/2023
Using the same example of a 100 TiB SAN that has 250,000 IOPS and 4,000 MB/s. Sa
## Networking
-In Preview, Elastic SAN supports public access from selected virtual networks, restricting access to specified virtual networks. You configure volume groups to allow network access only from specific vnet subnets. Once a volume group is configured to allow access from a subnet, this configuration is inherited by all volumes belonging to the volume group. You can then mount volumes from any clients in the subnet, with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. You must enable [service endpoint for Azure Storage](../../virtual-network/virtual-network-service-endpoints-overview.md) in your virtual network before setting up the network rule on volume group.
+In the Elastic SAN Preview, you can configure access to volume groups over both public [Azure Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) and [private endpoints](../../private-link/private-endpoint-overview.md) from selected virtual network subnets. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
-If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection will retry for 90 seconds until terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
+To allow network access, you must [enable a service endpoint for Azure Storage](elastic-san-networking.md#configure-an-azure-storage-service-endpoint) or a [private endpoint](elastic-san-networking.md#configure-a-private-endpoint) in your virtual network, then [set up a network rule](elastic-san-networking.md#configure-virtual-network-rules) on the volume group for any service endpoints. You don't need a network rule to allow traffic from a private endpoint because the storage firewall only controls access through public endpoints. You can then mount volumes from [AKS](elastic-san-connect-aks.md), [Linux](elastic-san-connect-linux.md), or [Windows](elastic-san-connect-windows.md) clients in the subnet with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.
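+
+As a quick illustration, enabling the service endpoint on an existing subnet is a single CLI call. The following is a minimal sketch with placeholder names (the names aren't taken from this article):
+
+```azurecli
+# Enable the Azure Storage service endpoint on the subnet that will reach the volume group.
+az network vnet subnet update \
+    --resource-group <ResourceGroupName> \
+    --vnet-name <VnetName> \
+    --name <SubnetName> \
+    --service-endpoints Microsoft.Storage
+```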
## Redundancy
The following iSCSI features aren't currently supported:
For a video that goes over the general planning and deployment with a few example scenarios, see [Getting started with Azure Elastic SAN](/shows/inside-azure-for-it/getting-started-with-azure-elastic-san).
+[Networking options for Elastic SAN Preview](elastic-san-networking-concepts.md)
[Deploy an Elastic SAN Preview](elastic-san-create.md)
storage Elastic San Shared Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-shared-volumes.md
+
+ Title: Use clustered applications on Azure Elastic SAN
+description: Learn more about using clustered applications on an Elastic SAN volume and sharing volumes between compute clients.
+++ Last updated : 08/15/2023++++
+# Use clustered applications on Azure Elastic SAN
+
+Azure Elastic SAN volumes can be simultaneously attached to multiple compute clients, allowing you to deploy or migrate cluster applications to Azure. To share an Elastic SAN volume, you need to use a cluster manager, such as Windows Server Failover Cluster (WSFC) or Pacemaker. The cluster manager handles cluster node communications and write locking. Elastic SAN doesn't natively offer a fully managed file system that can be accessed over SMB or NFS.
+
+When used as shared volumes, Elastic SAN volumes can be shared across availability zones or regions. If you share a volume across availability zones, you should select [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy) when deploying your SAN. Sharing a volume in a locally redundant storage (LRS) SAN across zones reduces your performance due to increased latency between the volume and clients.
+
+## Limitations
+
+- Volumes in an Elastic SAN using [ZRS](elastic-san-planning.md#redundancy) can't be used as shared volumes.
+- Elastic SAN connection scripts can be used to attach shared volumes to virtual machines in Virtual Machine Scale Sets or virtual machines in Availability Sets. Fault domain alignment isn't supported.
+- The maximum number of sessions a shared volume supports is 128.
+ - An individual client can create multiple sessions to an individual volume for increased performance. For example, if you create 32 sessions on each of your clients, only four clients could connect to a single volume.
+
+See [Support for Azure Storage features](elastic-san-introduction.md#support-for-azure-storage-features) for other limitations of Elastic SAN.
+
+## Regional availability
+
+Currently, only Elastic SAN volumes in France Central can be used as shared volumes.
+
+## How it works
+
+Elastic SAN shared volumes use [SCSI-3 Persistent Reservations](https://www.t10.org/members/w_spc3.htm) to allow initiators (clients) to control access to a shared elastic SAN volume. This protocol enables an initiator to reserve access to an elastic SAN volume, limit write (or read) access by other initiators, and persist the reservation on a volume beyond the lifetime of a session by default.
+
+SCSI-3 PR has a pivotal role in maintaining data consistency and integrity within shared volumes in cluster scenarios. Compute nodes in a cluster can read or write to their attached elastic SAN volumes based on the reservation chosen by their cluster applications.
+
+## Persistent reservation flow
+
+The following diagram illustrates a sample 2-node clustered database application that uses SCSI-3 PR to enable failover from one node to the other.
++
+The flow is as follows:
+
+1. The clustered application running on both Azure VM1 and VM2 registers its intent to read or write to the elastic SAN volume.
+1. The application instance on VM1 then takes an exclusive reservation to write to the volume.
+1. This reservation is enforced on your volume and the database can now exclusively write to the volume. Any writes from the application instance on VM2 fail.
+1. If the application instance on VM1 goes down, the instance on VM2 can initiate a database failover and take over control of the volume.
+1. This reservation is now enforced on the volume, and it won't accept writes from VM1. It only accepts writes from VM2.
+1. The clustered application can complete the database failover and serve requests from VM2.
+
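+On Linux clients, you can exercise the same flow with a generic SCSI persistent reservation tool such as `sg_persist` from the sg3_utils package. The following sketch is illustrative only: the device name `/dev/sdX` and the reservation keys are hypothetical, and the mapping to Elastic SAN's PR_* actions is an assumption based on the SCSI-3 PR standard:
+
+```bash
+# On VM1 and VM2: register a reservation key for each initiator (PR_REGISTER_KEY).
+sudo sg_persist --out --register --param-sark=0x1111 /dev/sdX   # run on VM1
+sudo sg_persist --out --register --param-sark=0x2222 /dev/sdX   # run on VM2
+
+# On VM1: take a WRITE EXCLUSIVE reservation (PR_RESERVE, SPC-3 type 1).
+sudo sg_persist --out --reserve --param-rk=0x1111 --prout-type=1 /dev/sdX
+
+# On VM2, during failover: preempt VM1's reservation (PR_PREEMPT_RESERVATION).
+sudo sg_persist --out --preempt --param-rk=0x2222 --param-sark=0x1111 --prout-type=1 /dev/sdX
+
+# On either VM: inspect registered keys and the current reservation holder.
+sudo sg_persist --in --read-keys /dev/sdX
+sudo sg_persist --in --read-reservation /dev/sdX
+```
+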
+The following diagram illustrates another common clustered workload consisting of multiple nodes reading data from an elastic SAN volume for running parallel processes, such as training of machine learning models.
++
+The flow is as follows:
+1. The clustered application running on all VMs registers its intent to read or write to the elastic SAN volume.
+1. The application instance on VM1 takes an exclusive reservation to write to the volume while opening up reads to the volume from other VMs.
+1. This reservation is enforced on the volume.
+1. All nodes in the cluster can now read from the volume. Only one node writes back results to the volume, on behalf of all nodes in the cluster.
+
+## Supported SCSI PR commands
+
+The following commands are supported with Elastic SAN volumes:
+
+To interact with the volume, start with the appropriate persistent reservation action:
+- PR_REGISTER_KEY
+- PR_REGISTER_AND_IGNORE
+- PR_GET_CONFIGURATION
+- PR_RESERVE
+- PR_PREEMPT_RESERVATION
+- PR_CLEAR_RESERVATION
+- PR_RELEASE_RESERVATION
+
+When using PR_RESERVE, PR_PREEMPT_RESERVATION, or PR_RELEASE_RESERVATION, provide one of the following persistent reservation types:
+- PR_NONE
+- PR_WRITE_EXCLUSIVE
+- PR_EXCLUSIVE_ACCESS
+- PR_WRITE_EXCLUSIVE_REGISTRANTS_ONLY
+- PR_EXCLUSIVE_ACCESS_REGISTRANTS_ONLY
+- PR_WRITE_EXCLUSIVE_ALL_REGISTRANTS
+- PR_EXCLUSIVE_ACCESS_ALL_REGISTRANTS
+
+Persistent reservation type determines access to the volume from each node in the cluster.
+
+|Persistent Reservation Type |Reservation Holder |Registered |Others |
+|||||
+|NO RESERVATION |N/A |Read-Write |Read-Write |
+|WRITE EXCLUSIVE |Read-Write |Read-Only |Read-Only |
+|EXCLUSIVE ACCESS |Read-Write |No Access |No Access |
+|WRITE EXCLUSIVE - REGISTRANTS ONLY |Read-Write |Read-Write |Read-Only |
+|EXCLUSIVE ACCESS - REGISTRANTS ONLY |Read-Write |Read-Write |No Access |
+|WRITE EXCLUSIVE - ALL REGISTRANTS |Read-Write |Read-Write |Read-Only |
+|EXCLUSIVE ACCESS - ALL REGISTRANTS |Read-Write |Read-Write |No Access |
+
+You also need to provide a persistent-reservation-key when using:
+- PR_RESERVE
+- PR_REGISTER_AND_IGNORE
+- PR_REGISTER_KEY
+- PR_PREEMPT_RESERVATION
+- PR_CLEAR_RESERVATION
+- PR_RELEASE_RESERVATION
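+
+When driving these actions from a Linux client with a tool such as `sg_persist`, the reservation types above correspond to the standard SPC-3 type codes passed via `--prout-type`. This mapping is an assumption based on the SCSI-3 specification, not on Elastic SAN documentation:
+
+```bash
+# SPC-3 persistent reservation type codes, as accepted by sg_persist --prout-type:
+#   1  WRITE EXCLUSIVE                      (PR_WRITE_EXCLUSIVE)
+#   3  EXCLUSIVE ACCESS                     (PR_EXCLUSIVE_ACCESS)
+#   5  WRITE EXCLUSIVE - REGISTRANTS ONLY   (PR_WRITE_EXCLUSIVE_REGISTRANTS_ONLY)
+#   6  EXCLUSIVE ACCESS - REGISTRANTS ONLY  (PR_EXCLUSIVE_ACCESS_REGISTRANTS_ONLY)
+#   7  WRITE EXCLUSIVE - ALL REGISTRANTS    (PR_WRITE_EXCLUSIVE_ALL_REGISTRANTS)
+#   8  EXCLUSIVE ACCESS - ALL REGISTRANTS   (PR_EXCLUSIVE_ACCESS_ALL_REGISTRANTS)
+
+# Example: release a WRITE EXCLUSIVE - REGISTRANTS ONLY reservation (PR_RELEASE_RESERVATION)
+# held with the hypothetical key 0x1111, then unregister the key.
+sudo sg_persist --out --release --param-rk=0x1111 --prout-type=5 /dev/sdX
+sudo sg_persist --out --register --param-rk=0x1111 --param-sark=0 /dev/sdX
+```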
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md
description: Learn how to monitor the performance and availability of Azure File
-+ Last updated 08/07/2023 ms.devlang: csharp
stream-analytics Capture Event Hub Data Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-parquet.md
Previously updated : 05/24/2022 Last updated : 08/15/2023 # Capture data from Event Hubs in Parquet format-
-This article explains how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in Parquet format. You have the flexibility of specifying a time or size interval.
+This article explains how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in the Parquet format.
## Prerequisites -- Your Azure Event Hubs and Azure Data Lake Storage Gen2 resources must be publicly accessible and can't be behind a firewall or secured in an Azure Virtual Network.-- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+- An Azure Event Hubs namespace with an event hub and an Azure Data Lake Storage Gen2 account with a container to store the captured data. These resources must be publicly accessible and can't be behind a firewall or secured in an Azure virtual network.
+
+ If you don't have an event hub, create one by following instructions from [Quickstart: Create an event hub](../event-hubs/event-hubs-create.md).
+
+  If you don't have a Data Lake Storage Gen2 account, create one by following instructions from [Create a storage account](../storage/blobs/create-data-lake-storage-account.md).
+- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format. For testing purposes, select **Generate data (preview)** on the left menu, select **Stocks data** for dataset, and then select **Send**.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/stocks-data.png" alt-text="Screenshot showing the Generate data page to generate sample stocks data." lightbox="./media/capture-event-hub-data-parquet/stocks-data.png":::
## Configure a job to capture data Use the following steps to configure a Stream Analytics job to capture data in Azure Data Lake Storage Gen2. 1. In the Azure portal, navigate to your event hub.
-1. Select **Features** > **Process Data**, and select **Start** on the **Capture data to ADLS Gen2 in Parquet format** card.
+1. On the left menu, select **Process Data** under **Features**. Then, select **Start** on the **Capture data to ADLS Gen2 in Parquet format** card.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" alt-text="Screenshot showing the Process Event Hubs data start cards." lightbox="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" :::
-1. Enter a **name** to identify your Stream Analytics job. Select **Create**.
- :::image type="content" source="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." lightbox="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" :::
-1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job will use to connect to Event Hubs. Then select **Connect**.
+1. Enter a **name** for your Stream Analytics job, and then select **Create**.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." :::
+1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job uses to connect to Event Hubs. Then select **Connect**.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/capture-event-hub-data-parquet/event-hub-configuration.png" :::
-1. When the connection is established successfully, you'll see:
+1. When the connection is established successfully, you see:
- Fields that are present in the input data. You can choose **Add field** or you can select the three dot symbol next to a field to optionally remove, rename, or change its name. - A live sample of incoming data in the **Data preview** table under the diagram view. It refreshes periodically. You can select **Pause streaming preview** to view a static view of the sample input.
+
:::image type="content" source="./media/capture-event-hub-data-parquet/edit-fields.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/capture-event-hub-data-parquet/edit-fields.png" ::: 1. Select the **Azure Data Lake Storage Gen2** tile to edit the configuration. 1. On the **Azure Data Lake Storage Gen2** configuration page, follow these steps: 1. Select the subscription, storage account name and container from the drop-down menu. 1. Once the subscription is selected, the authentication method and storage account key should be automatically filled in.
+ 1. Select **Parquet** for **Serialization** format.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/job-top-settings.png" alt-text="Screenshot showing the Data Lake Storage Gen2 configuration page." lightbox="./media/capture-event-hub-data-parquet/job-top-settings.png":::
1. For streaming blobs, the directory path pattern is expected to be a dynamic value. It's required for the date to be a part of the file path for the blob, referenced as `{date}`. To learn about custom path patterns, see [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md).
+
:::image type="content" source="./media/capture-event-hub-data-parquet/blob-configuration.png" alt-text="First screenshot showing the Blob window where you edit a blob's connection configuration." lightbox="./media/capture-event-hub-data-parquet/blob-configuration.png" ::: 1. Select **Connect**
-1. When the connection is established, you'll see fields that are present in the output data.
+1. When the connection is established, you see fields that are present in the output data.
1. Select **Save** on the command bar to save your configuration.+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/save-configuration.png" alt-text="Screenshot showing the Save button selected on the command bar." :::
1. Select **Start** on the command bar to start the streaming flow to capture data. Then in the Start Stream Analytics job window: 1. Choose the output start time.
+ 1. Select the pricing plan.
1. Select the number of Streaming Units (SU) that the job runs with. SU represents the computing resources that are allocated to execute a Stream Analytics job. For more information, see [Streaming Units in Azure Stream Analytics](stream-analytics-streaming-unit-consumption.md).
- 1. In the **Choose Output data error handling** list, select the behavior you want when the output of the job fails due to data error. Select **Retry** to have the job retry until it writes successfully or select another option.
+
:::image type="content" source="./media/capture-event-hub-data-parquet/start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window where you set the output start time, streaming units, and error handling." lightbox="./media/capture-event-hub-data-parquet/start-job.png" :::
+1. You should see the Stream Analytics job in the **Stream Analytics jobs** tab of the **Process data** page for your event hub.
-## Verify output
-Verify that the Parquet files are generated in the Azure Data Lake Storage container.
-
+ :::image type="content" source="./media/capture-event-hub-data-parquet/process-data-page-jobs.png" alt-text="Screenshot showing the Stream Analytics job on the Process data page." lightbox="./media/capture-event-hub-data-parquet/process-data-page-jobs.png" :::
+
+## Verify output
-The new job is shown on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it.
+1. On the Event Hubs instance page for your event hub, select **Generate data**, select **Stocks data** for dataset, and then select **Send** to send some sample data to the event hub.
+1. Verify that the Parquet files are generated in the Azure Data Lake Storage container.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/verify-captured-data.png" alt-text="Screenshot showing the generated Parquet files in the ADLS container." lightbox="./media/capture-event-hub-data-parquet/verify-captured-data.png" :::
+1. Select **Process data** on the left menu. Switch to the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it.
-Here's an example screenshot of metrics showing input and output events.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/open-metrics-link.png" alt-text="Screenshot showing Open Metrics link selected." lightbox="./media/capture-event-hub-data-parquet/open-metrics-link.png" :::
+
+ Here's an example screenshot of metrics showing input and output events.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/job-metrics.png" alt-text="Screenshot showing metrics of the Stream Analytics job." lightbox="./media/capture-event-hub-data-parquet/job-metrics.png" :::
## Next steps
stream-analytics No Code Transform Filter Ingest Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-transform-filter-ingest-sql.md
Previously updated : 06/07/2022 Last updated : 06/13/2023 # Use Azure Stream Analytics no-code editor to transform and store data in Azure SQL database
stream-analytics Powerbi Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/powerbi-output-managed-identity.md
Previously updated : 05/30/2021 Last updated : 08/16/2023 # Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI
-[Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for output to Power BI gives Stream Analytics jobs direct access to a workspace within your Power BI account. This feature allows for deployments of Stream Analytics jobs to be fully automated, since it is no longer required for a user to interactively log in to Power BI via the Azure portal. Additionally, long running jobs that write to Power BI are now better supported, since you will not need to periodically reauthorize the job.
+[Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for output to Power BI gives Stream Analytics jobs direct access to a workspace within your Power BI account. This feature allows for deployments of Stream Analytics jobs to be fully automated, since it's no longer required for a user to interactively sign in to Power BI via the Azure portal. Additionally, long running jobs that write to Power BI are now better supported, since you won't need to periodically reauthorize the job.
This article shows you how to enable Managed Identity for the Power BI output(s) of a Stream Analytics job through the Azure portal and through an Azure Resource Manager deployment.
+> [!NOTE]
+> Only **system-assigned** managed identities are supported with the Power BI output. Currently, using user-assigned managed identities with the Power BI output isn't supported.
+ ## Prerequisites
-The following are required for using this feature:
+You must have the following prerequisites before you use this feature:
- A Power BI account with a [Pro license](/power-bi/service-admin-purchasing-power-bi-pro).--- An upgraded workspace within your Power BI account. See [Power BI's announcement](https://powerbi.microsoft.com/blog/announcing-new-workspace-experience-general-availability-ga/) of this feature for more details.
+- An upgraded workspace within your Power BI account. For more information, see [Power BI's announcement](https://powerbi.microsoft.com/blog/announcing-new-workspace-experience-general-availability-ga/).
## Create a Stream Analytics job using the Azure portal
-1. Create a new Stream Analytics job or open an existing job in the Azure portal. From the menu bar located on the left side of the screen, select **Managed Identity** located under **Configure**. Ensure that "Use System-assigned Managed Identity" is selected and then select the **Save** button on the bottom of the screen.
+1. Create a new Stream Analytics job or open an existing job in the Azure portal.
+1. From the menu bar located on the left side of the screen, select **Managed Identity** located under **Settings**.
- ![Configure Stream Analytics managed identity](./media/common/stream-analytics-enable-managed-identity.png)
+ :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/managed-identity-select-button.png" alt-text="Screenshot showing the Managed Identity page with Select identity button selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/managed-identity-select-button.png":::
+1. On the **Select identity** page, select **System assigned identity**, and then select **Save**.
+ :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/system-assigned-identity.png" alt-text="Screenshot showing the Select identity page with System assigned identity selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/system-assigned-identity.png":::
+1. On the **Managed identity** page, confirm that you see the **Principal ID** and **Principal name** assigned to your Stream Analytics job. The principal name should be the same as your Stream Analytics job name.
2. Before configuring the output, give the Stream Analytics job access to your Power BI workspace by following the directions in the [Give the Stream Analytics job access to your Power BI workspace](#give-the-stream-analytics-job-access-to-your-power-bi-workspace) section of this article.
+3. Navigate to the **Outputs** section of your Stream Analytic's job, select **+ Add**, and then choose **Power BI**. Then, select the **Authorize** button and sign in with your Power BI account.
-3. Navigate to the **Outputs** section of your Stream Analytic's job, select **+ Add**, and then choose **Power BI**. Then, select the **Authorize** button and log in with your Power BI account.
-
- ![Authorize with Power BI account](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png)
+ [ ![Authorize with Power BI account](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png) ](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png#lightbox)
4. Once authorized, a dropdown list will be populated with all of the workspaces you have access to. Select the workspace that you authorized in the previous step. Then select **Managed Identity** as the "Authentication mode". Finally, select the **Save** button.
- ![Configure Power BI output with Managed Identity](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png)
+ :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png" alt-text="Screenshot showing the Power BI output configuration with Managed identity authentication mode selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png":::
## Azure Resource Manager deployment
Azure Resource Manager allows you to fully automate the deployment of your Strea
} ```
- If you plan to use the Power BI REST API to add the Stream Analytics job to your Power BI workspace, make note of the returned "principalId".
+ If you plan to use the Power BI REST API to add the Stream Analytics job to your Power BI workspace, make note of the returned `principalId`.
3. Now that the job is created, continue to the [Give the Stream Analytics job access to your Power BI workspace](#give-the-stream-analytics-job-access-to-your-power-bi-workspace) section of this article.
Now that the Stream Analytics job has been created, it can be given access to a
### Use the Power BI UI > [!Note]
- > In order to add the Stream Analytics job to your Power BI workspace using the UI, you also have to enable service principal access in the **Developer settings** in the Power BI admin portal. See [Get started with a service principal](/power-bi/developer/embed-service-principal) for more details.
+ > In order to add the Stream Analytics job to your Power BI workspace using the UI, you also have to enable service principal access in the **Developer settings** in the Power BI admin portal. For more information, see [Get started with a service principal](/power-bi/developer/embed-service-principal).
-1. Navigate to the workspace's access settings. See this article for more details: [Give access to your workspace](/power-bi/service-create-the-new-workspaces#give-access-to-your-workspace).
+1. Navigate to the workspace's access settings. For more information, see [Give access to your workspace](/power-bi/service-create-the-new-workspaces#give-access-to-your-workspace).
2. Type the name of your Stream Analytics job in the text box and select **Contributor** as the access level. 3. Select **Add** and close the pane.
- ![Add Stream Analytics job to Power BI workspace](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png)
+ [ ![Add Stream Analytics job to Power BI workspace](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png) ](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png#lightbox)
### Use the Power BI PowerShell cmdlets
Now that the Stream Analytics job has been created, it can be given access to a
> [!Important] > Please ensure you are using version 1.0.821 or later of the cmdlets.
-```powershell
-Install-Module -Name MicrosoftPowerBIMgmt
-```
-
-2. Log in to Power BI.
-
-```powershell
-Login-PowerBI
-```
+ ```powershell
+ Install-Module -Name MicrosoftPowerBIMgmt
+ ```
+2. Sign in to Power BI.
+ ```powershell
+ Login-PowerBI
+ ```
3. Add your Stream Analytics job as a Contributor to the workspace.
-```powershell
-Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -PrincipalType App -AccessRight Contributor
-```
+ ```powershell
+ Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -PrincipalType App -AccessRight Contributor
+ ```
### Use the Power BI REST API
Request Body
### Use a Service Principal to grant permission for an ASA job's Managed Identity
-For automated deployments, using an interactive login to give an ASA job access to a Power BI workspace is not possible. This can be done be using service principal to grant permission for an ASA job's managed identity. This is possible using PowerShell:
+For automated deployments, using an interactive sign-in to give an ASA job access to a Power BI workspace isn't possible. Instead, use a service principal to grant permission to the ASA job's managed identity, as shown in the following PowerShell example:
```powershell Connect-PowerBIServiceAccount -ServicePrincipal -TenantId "<tenant-id>" -CertificateThumbprint "<thumbprint>" -ApplicationId "<app-id>"
Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -Pr
## Remove Managed Identity
-The Managed Identity created for a Stream Analytics job is deleted only when the job is deleted. There is no way to delete the Managed Identity without deleting the job. If you no longer want to use the Managed Identity, you can change the authentication method for the output. The Managed Identity will continue to exist until the job is deleted, and will be used if you decide to used Managed Identity authentication again.
+The Managed Identity created for a Stream Analytics job is deleted only when the job is deleted. There's no way to delete the Managed Identity without deleting the job. If you no longer want to use the Managed Identity, you can change the authentication method for the output. The Managed Identity continues to exist until the job is deleted, and is used if you decide to use Managed Identity authentication again.
## Limitations Below are the limitations of this feature: -- Classic Power BI workspaces are not supported.
+- Classic Power BI workspaces aren't supported.
- Azure accounts without Azure Active Directory aren't supported. -- Multi-tenant access is not supported. The Service principal created for a given Stream Analytics job must reside in the same Azure Active Directory tenant in which the job was created, and cannot be used with a resource that resides in a different Azure Active Directory tenant.
+- Multi-tenant access isn't supported. The Service principal created for a given Stream Analytics job must reside in the same Azure Active Directory tenant in which the job was created, and can't be used with a resource that resides in a different Azure Active Directory tenant.
-- [User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md) is not supported. This means you are not able to enter your own service principal to be used by their Stream Analytics job. The service principal must be generated by Azure Stream Analytics.
+- [User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md) isn't supported. This means you can't bring your own service principal to be used by your Stream Analytics job. The service principal must be generated by Azure Stream Analytics.
## Next steps
stream-analytics Sql Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-reference-data.md
Use the following steps to add Azure SQL Database as a reference input source us
1. Create a Stream Analytics job.
-2. Create a storage account to be used by the Stream Analytics job.
+2. Create a storage account to be used by the Stream Analytics job.
+ > [!IMPORTANT]
+ > Azure Stream Analytics retains snapshots in this storage account. When you configure the retention policy, make sure the chosen timespan covers the recovery duration you need for your Stream Analytics job.
-3. Create your Azure SQL Database with a data set to be used as reference data by the Stream Analytics job.
+4. Create your Azure SQL Database with a data set to be used as reference data by the Stream Analytics job.
### Define SQL Database reference data input
Use the following steps to add Azure SQL Database as a reference input source us
2. Become familiar with the [Stream Analytics tools for Visual Studio](stream-analytics-quick-create-vs.md) quickstart. 3. Create a storage account.
+ > [!IMPORTANT]
+ > Azure Stream Analytics retains snapshots in this storage account. When you configure the retention policy, make sure the chosen timespan covers the recovery duration you need for your Stream Analytics job.
### Create a SQL Database table
stream-analytics Stream Analytics Get Started With Azure Stream Analytics To Process Data From Iot Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-get-started-with-azure-stream-analytics-to-process-data-from-iot-devices.md
Previously updated : 03/23/2022 Last updated : 08/15/2023 # Process real-time IoT data streams with Azure Stream Analytics
In this article, you learn how to create stream-processing logic to gather data
## Scenario
-Contoso, which is a company in the industrial automation space, has completely automated its manufacturing process. The machinery in this plant has sensors that are capable of emitting streams of data in real time. In this scenario, a production floor manager wants to have real-time insights from the sensor data to look for patterns and take actions on them. You can use Stream Analytics Query Language (SAQL) over the sensor data to find interesting patterns from the incoming stream of data.
+Contoso, a company in the industrial automation space, has automated its manufacturing process. The machinery in this plant has sensors that are capable of emitting streams of data in real time. In this scenario, a production floor manager wants to have real-time insights from the sensor data to look for patterns and take actions on them. You can use Stream Analytics Query Language (SAQL) over the sensor data to find interesting patterns from the incoming stream of data.
-In this example, the data is generated from a Texas Instruments sensor tag device. The payload of the data is in JSON format and looks like the following:
+In this example, the data is generated from a Texas Instruments sensor tag device. The payload of the data is in JSON format as shown in the following sample snippet:
```json {
In this example, the data is generated from a Texas Instruments sensor tag devic
} ```
-In a real-world scenario, you could have hundreds of these sensors generating events as a stream. Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hubs](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs or Iot Hubs and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md).
+In a real-world scenario, you could have hundreds of these sensors generating events as a stream. Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs or IoT Hub and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md).
-For ease of use, this getting started guide provides a sample data file, which was captured from real sensor tag devices. You can run queries on the sample data and see results. In subsequent tutorials, you will learn how to connect your job to inputs and outputs and deploy them to the Azure service.
+For ease of use, this getting started guide provides a sample data file, which was captured from real sensor tag devices. You can run queries on the sample data and see results. In subsequent tutorials, you learn how to connect your job to inputs and outputs and deploy them to the Azure service.
## Create a Stream Analytics job
-1. In the [Azure portal](https://portal.azure.com), select **+ Create a resource** from the left navigation menu. Then, select **Stream Analytics job** from **Analytics**.
+1. Navigate to the [Azure portal](https://portal.azure.com).
+1. On the left navigation menu, select **All services**, select **Analytics**, hover the mouse over **Stream Analytics jobs**, and then select **Create**.
- ![Create a new Stream Analytics job](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png)
-
-1. Enter a unique job name and verify the subscription is the correct one for your job. Create a new resource group or select an existing one from your subscription.
-
-1. Select a location for your job. Use the same location for your resource group and all resources to increased processing speed and reduced of costs. After you've made the configurations, select **Create**.
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png" alt-text="Screenshot that shows the selection of Create button for a Stream Analytics job." lightbox="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png":::
+1. On the **New Stream Analytics job** page, follow these steps:
+ 1. For **Subscription**, select your **Azure subscription**.
+ 1. For **Resource group**, select an existing resource group or create a resource group.
+ 1. For **Name**, enter a unique name for the Stream Analytics job.
+ 1. Select the **Region** in which you want to deploy the Stream Analytics job. Use the same location for your resource group and all resources to increase the processing speed and reduce costs.
+ 1. Select **Review + create**.
- ![Create a new Stream Analytics job details](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-03.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-03.png" alt-text="Screenshot that shows the New Stream Analytics job page.":::
+1. On the **Review + create** page, review settings, and select **Create**.
+1. After the deployment succeeds, select **Go to resource** to navigate to the **Stream Analytics job** page for your Stream Analytics job.
## Create an Azure Stream Analytics query
-The next step after your job is created is to write a query. You can test queries against sample data without connecting an input or output to your job.
-
-Download the [HelloWorldASA-InputStream.json](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/GettingStarted/HelloWorldASA-InputStream.json
-) from GitHub. Then, navigate to your Azure Stream Analytics job in the Azure portal.
-
-Select **Query** under **Job topology** from the left menu. Then select **Upload sample input**. Upload the `HelloWorldASA-InputStream.json` file, and select **Ok**.
+After your job is created, write a query. You can test queries against sample data without connecting an input or output to your job.
-![Stream Analytics dashboard query tile](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png)
+1. Download the [HelloWorldASA-InputStream.json](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/GettingStarted/HelloWorldASA-InputStream.json
+) from GitHub.
+1. On the **Azure Stream Analytics job** page in the Azure portal, select **Query** under **Job topology** from the left menu.
+1. Select **Upload sample input**, select the `HelloWorldASA-InputStream.json` file you downloaded, and select **OK**.
-Notice that a preview of the data is automatically populated in the **Input preview** table.
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png" alt-text="Screenshot that shows the **Query** page with **Upload sample input** selected." lightbox="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png":::
+1. Notice that a preview of the data is automatically populated in the **Input preview** table.
-![Preview of sample input data](./media/stream-analytics-get-started-with-iot-devices/input-preview.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/input-preview.png" alt-text="Screenshot that shows sample input data in the Input preview tab.":::
### Query: Archive your raw data The simplest form of query is a pass-through query that archives all input data to its designated output. This query is the default query populated in a new Azure Stream Analytics job.
-```sql
-SELECT
- *
-INTO
- Output
-FROM
- InputStream
-```
+1. In the **Query** window, enter the following query, and then select **Test query** on the toolbar.
-Select **Test query** and view the results in the **Test results** table.
+ ```sql
+ SELECT
+ *
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias
+ ```
+2. View the results in the **Test results** tab in the bottom pane.
-![Test results for Stream Analytics query](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-07.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-07.png" alt-text="Screenshot that shows the sample query and its results.":::
### Query: Filter the data based on a condition
-Let's try to filter the results based on a condition. We would like to show results for only those events that come from "sensorA."
-
-```sql
-SELECT
- time,
- dspl AS SensorName,
- temp AS Temperature,
- hmdt AS Humidity
-INTO
- Output
-FROM
- InputStream
-WHERE dspl='sensorA'
-```
+Let's update the query to filter the results based on a condition. For example, the following query shows events that come from `sensorA`.
+
+1. Update the query with the following sample:
-Paste the query in the editor and select **Test query** to review the results.
+ ```sql
+ SELECT
+ time,
+ dspl AS SensorName,
+ temp AS Temperature,
+ hmdt AS Humidity
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias
+ WHERE dspl='sensorA'
+ ```
+2. Select **Test query** to see the results of the query.
-![Filtering a data stream](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-08.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-08.png" alt-text="Screenshot that shows the query results with the filter.":::
### Query: Alert to trigger a business workflow Let's make our query more detailed. For every type of sensor, we want to monitor average temperature per 30-second window and display results only if the average temperature is above 100 degrees.
-```sql
-SELECT
- System.Timestamp AS OutputTime,
- dspl AS SensorName,
- Avg(temp) AS AvgTemperature
-INTO
- Output
-FROM
- InputStream TIMESTAMP BY time
-GROUP BY TumblingWindow(second,30),dspl
-HAVING Avg(temp)>100
-```
+1. Update the query to:
+
+ ```sql
+ SELECT
+ System.Timestamp AS OutputTime,
+ dspl AS SensorName,
+ Avg(temp) AS AvgTemperature
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias TIMESTAMP BY time
+ GROUP BY TumblingWindow(second,30),dspl
+ HAVING Avg(temp)>100
+ ```
+1. Select **Test query** to see the results of the query.
-![30-second filter query](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-10.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-10.png" alt-text="Screenshot that shows the query with a tumbling window.":::
-You should see results that contain only 245 rows and names of sensors where the average temperate is greater than 100. This query groups the stream of events by **dspl**, which is the sensor name, over a **Tumbling Window** of 30 seconds. Temporal queries must state how you want time to progress. By using the **TIMESTAMP BY** clause, you have specified the **OUTPUTTIME** column to associate times with all temporal calculations. For detailed information, read about [Time Management](/stream-analytics-query/time-management-azure-stream-analytics) and [Windowing functions](/stream-analytics-query/windowing-azure-stream-analytics).
+   You should see results that contain only 245 rows and names of sensors where the average temperature is greater than 100. This query groups the stream of events by **dspl**, which is the sensor name, over a **Tumbling Window** of 30 seconds. Temporal queries must state how you want time to progress. By using the **TIMESTAMP BY** clause, you specify the **time** column as the event time to associate with all temporal calculations. For detailed information, read about [Time Management](/stream-analytics-query/time-management-azure-stream-analytics) and [Windowing functions](/stream-analytics-query/windowing-azure-stream-analytics).
### Query: Detect absence of events
-How can we write a query to find a lack of input events? Let's find the last time that a sensor sent data and then did not send events for the next 5 seconds.
-
-```sql
-SELECT
- t1.time,
- t1.dspl AS SensorName
-INTO
- Output
-FROM
- InputStream t1 TIMESTAMP BY time
-LEFT OUTER JOIN InputStream t2 TIMESTAMP BY time
-ON
- t1.dspl=t2.dspl AND
- DATEDIFF(second,t1,t2) BETWEEN 1 and 5
-WHERE t2.dspl IS NULL
-```
+How can we write a query to find a lack of input events? Let's find the last time that a sensor sent data and then didn't send events for the next 5 seconds.
+
+1. Update the query to:
+
+ ```sql
+ SELECT
+ t1.time,
+ t1.dspl AS SensorName
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias t1 TIMESTAMP BY time
+ LEFT OUTER JOIN yourinputalias t2 TIMESTAMP BY time
+ ON
+ t1.dspl=t2.dspl AND
+ DATEDIFF(second,t1,t2) BETWEEN 1 and 5
+ WHERE t2.dspl IS NULL
+ ```
+2. Select **Test query** to see the results of the query.
+
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-11.png" alt-text="Screenshot that shows the query that detects absence of events.":::
-![Detect absence of events](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-11.png)
-Here we use a **LEFT OUTER** join to the same data stream (self-join). For an **INNER** join, a result is returned only when a match is found. For a **LEFT OUTER** join, if an event from the left side of the join is unmatched, a row that has NULL for all the columns of the right side is returned. This technique is very useful to find an absence of events. For more information, see [JOIN](/stream-analytics-query/join-azure-stream-analytics).
+ Here we use a **LEFT OUTER** join to the same data stream (self-join). For an **INNER** join, a result is returned only when a match is found. For a **LEFT OUTER** join, if an event from the left side of the join is unmatched, a row that has NULL for all the columns of the right side is returned. This technique is useful to find an absence of events. For more information, see [JOIN](/stream-analytics-query/join-azure-stream-analytics).
## Conclusion
-The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. However, this is just to get you started. Stream Analytics supports a variety of inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md).
+The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. However, this article is just to get you started. Stream Analytics supports various inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md).
stream-analytics Stream Analytics User Assigned Managed Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-user-assigned-managed-identity-overview.md
Previously updated : 09/29/2022 Last updated : 08/15/2023 # User-assigned managed identities for Azure Stream Analytics
With support for both system-assigned identity and user-assigned identity, here
2. You can switch from an existing user-assigned identity to a newly created user-assigned identity. The previous identity is not removed from the storage access control list. 3. You cannot add multiple identities to your stream analytics job. 4. Currently we do not support deleting an identity from a stream analytics job. You can replace it with another user-assigned or system-assigned identity.
+5. You can't use a user-assigned identity to authenticate by using the allow-trusted-services option.
## Next steps
synapse-analytics Quickstart Apache Spark Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-apache-spark-notebook.md
To ensure the Spark instance is shut down, end any connected sessions(notebooks)
In this quickstart, you learned how to create a serverless Apache Spark pool and run a basic Spark SQL query. - [Azure Synapse Analytics](overview-what-is.md)-- [.NET for Apache Spark documentation](/dotnet/spark)
+- [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet)
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
The following table describes the built-in roles and the scopes at which they ca
|Synapse Administrator |Full Synapse access to SQL pools, Data Explorer pools, Apache Spark pools, and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts. Includes Compute Operator, Linked Data Manager, and Credential User permissions on the workspace system identity credential. Includes assigning Synapse RBAC roles. In addition to Synapse Administrator, Azure Owners can also assign Synapse RBAC roles. Azure permissions are required to create, delete, and manage compute resources. </br></br>_Can read and write artifacts</br> Can do all actions on Spark activities.</br> Can view Spark pool logs</br> Can view saved notebook and pipeline output </br> Can use the secrets stored by linked services or credentials</br>Can assign and revoke Synapse RBAC roles at current scope_|Workspace </br> Spark pool<br/>Integration runtime </br>Linked service</br>Credential | |Synapse Apache Spark Administrator</br>|Full Synapse access to Apache Spark Pools. Create, read, update, and delete access to published Spark job definitions, notebooks and their outputs, and to libraries, linked services, and credentials.  Includes read access to all other published code artifacts. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can do all actions on Spark artifacts</br>Can do all actions on Spark activities_|Workspace</br>Spark pool| |Synapse SQL Administrator|Full Synapse access to serverless SQL pools. Create, read, update, and delete access to published SQL scripts, credentials, and linked services.  Includes read access to all other published code artifacts.  Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>*Can do all actions on SQL scripts<br/>Can connect to SQL serverless endpoints with SQL `db_datareader`, `db_datawriter`, `connect`, and `grant` permissions*|Workspace|
-|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime|
-|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace
+|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including scheduled pipelines, credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime|
+|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs, including scheduled pipelines. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace
|Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace|
|Synapse Compute Operator |Submit Spark jobs and notebooks and view logs.  Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime|
|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for pipeline runs and completed notebooks. Includes ability to list and view details of Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace |
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
You can pause or scale a dedicated SQL pool, configure a Spark pool, or an integ
With access to Synapse Studio, you can create new code artifacts, such as SQL scripts, KQL scripts, notebooks, Spark jobs, linked services, pipelines, dataflows, triggers, and credentials. These artifacts can be published or saved with additional permissions.
-If you're a Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor, or Synapse Administrator you can list, open, and edit already published code artifacts.
+If you're a Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor, or Synapse Administrator, you can list, open, and edit already published code artifacts, including scheduled pipelines.
### Execute your code
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
We provide rich operations to develop notebooks:
+ [Collapse a cell output](#collapse-a-cell-output)
+ [Notebook outline](#notebook-outline)
+> [!NOTE]
+>
+> In notebooks, a SparkSession is automatically created for you and stored in a variable called `spark`. There's also a variable for the SparkContext, called `sc`. You can access these variables directly, but you shouldn't change their values.
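For instance, a cell like the following minimal sketch uses the built-in variables directly (the sample DataFrame is illustrative):

```python
# Use the pre-created SparkSession (`spark`) and SparkContext (`sc`);
# don't create or reassign them yourself.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.show()

print(sc.applicationId)  # the SparkContext is exposed as `sc`
```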
++ <h3 id="add-a-cell">Add a cell</h3>

There are multiple ways to add a new cell to your notebook.
Select the **Undo** / **Redo** button or press **Z** / **Shift+Z** to revoke the
![Screenshot of Synapse undo cells of aznb](./media/apache-spark-development-using-notebooks/synapse-undo-cells-aznb.png)

Supported undo cell operations:
-+ Insert/Delete cell: You could revoke the delete operations by selecting **Undo**, the text content will be kept along with the cell.
++ Insert/Delete cell: You can revoke the delete operations by selecting **Undo**; the text content is kept along with the cell.
+ Reorder cell.
+ Toggle parameter.
+ Convert between Code cell and Markdown cell.
Select the **Cancel All** button to cancel the running cells or cells waiting in
### Notebook reference
-You can use ```%run <notebook path>``` magic command to reference another notebook within current notebook's context. All the variables defined in the reference notebook are available in the current notebook. ```%run``` magic command supports nested calls but not support recursive calls. You will receive an exception if the statement depth is larger than **five**.
+You can use the ```%run <notebook path>``` magic command to reference another notebook within the current notebook's context. All the variables defined in the referenced notebook are available in the current notebook. The ```%run``` magic command supports nested calls, but not recursive calls. You receive an exception if the statement depth is larger than **five**.
Example:
```
%run /<path>/Notebook1 { "parameterInt": 1, "parameterFloat": 2.5, "parameterBool": true, "parameterString": "abc" }
```
Notebook reference works in both interactive mode and Synapse pipeline.
### Variable explorer
-Synapse notebook provides a built-in variables explorer for you to see the list of the variables name, type, length, and value in the current Spark session for PySpark (Python) cells. More variables will show up automatically as they are defined in the code cells. Clicking on each column header will sort the variables in the table.
+Synapse notebooks provide a built-in variable explorer for you to see the name, type, length, and value of the variables in the current Spark session for PySpark (Python) cells. More variables show up automatically as they're defined in the code cells. Selecting a column header sorts the variables in the table.
You can select the **Variables** button on the notebook command bar to open or hide the variable explorer.
Parameterized session configuration allows you to replace the value in %%configu
} ```
-Notebook will use default value if run a notebook in interactive mode directly or no parameter that match "activityParameterName" is given from Pipeline Notebook activity.
+The notebook uses the default value if you run the notebook in interactive mode directly, or if the pipeline Notebook activity doesn't pass a parameter that matches "activityParameterName".
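As a minimal sketch of that pattern (the property and parameter names here are illustrative), a session property can declare an `activityParameterName` and a default value:

```
%%configure
{
    "driverCores": {
        "activityParameterName": "driverCoresFromNotebookActivity",
        "defaultValue": 4
    }
}
```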
During the pipeline run mode, you can configure pipeline Notebook activity settings as below:

![Screenshot of parameterized session configuration](./media/apache-spark-development-using-notebooks/parameterized-session-config.png)
You can access data in the primary storage account directly. There's no need to
## IPython Widgets
-Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider, textbox etc. IPython Widgets only works in Python environment, it's not supported in other languages (e.g. Scala, SQL, C#) yet.
+Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider or textbox. IPython widgets only work in the Python environment; they're not supported in other languages (for example, Scala, SQL, C#) yet.
### To use IPython Widget

1. You need to import the `ipywidgets` module first to use the Jupyter Widget framework.
Widgets are eventful Python objects that have a representation in the browser, o
slider ```
-3. Run the cell, the widget will display at the output area.
+3. Run the cell; the widget is displayed in the output area.
![Screenshot of ipython widgets slider](./media/apache-spark-development-using-notebooks/ipython-widgets-slider.png)
-4. You can use multiple `display()` calls to render the same widget instance multiple times, but they will remain in sync with each other.
+4. You can use multiple `display()` calls to render the same widget instance multiple times, but they remain in sync with each other.
```python
slider = widgets.IntSlider()
Widgets are eventful Python objects that have a representation in the browser, o
|`widgets.jslink()`|You can use the `widgets.link()` function to link two similar widgets.|
|`FileUpload` widget| Not supported yet.|
-2. Global `display` function provided by Synapse does not support displaying multiple widgets in 1 call (i.e. `display(a, b)`), which is different from IPython `display` function.
+2. The global `display` function provided by Synapse doesn't support displaying multiple widgets in one call (that is, `display(a, b)`), which is different from the IPython `display` function; see the sketch after this list.
3. If you close a notebook that contains an IPython widget, you can't see or interact with it until you execute the corresponding cell again.
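A minimal sketch of the single-widget-per-call limitation (the widget choices are illustrative):

```python
import ipywidgets as widgets

a = widgets.IntSlider()
b = widgets.Checkbox()

# display(a, b) isn't supported by Synapse's global display function;
# call display once per widget instead.
display(a)
display(b)
```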
Available cell magics:
<h2 id="reference-unpublished-notebook">Reference unpublished notebook</h2>
-Reference unpublished notebook is helpful when you want to debug "locally", when enabling this feature, notebook run will fetch the current content in web cache, if you run a cell including a reference notebooks statement, you will reference the presenting notebooks in the current notebook browser instead of a saved versions in cluster, that means the changes in your notebook editor can be referenced immediately by other notebooks without having to be published(Live mode) or committed(Git mode), by leveraging this approach you can easily avoid common libraries getting polluted during developing or debugging process.
+Referencing an unpublished notebook is helpful when you want to debug "locally". When you enable this feature, a notebook run fetches the current content from the web cache. If you run a cell that includes a reference notebook statement, you reference the notebooks presented in the current notebook browser instead of the saved versions in the cluster. That means the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). By using this approach, you can easily avoid common libraries getting polluted during the developing or debugging process.
You can enable Reference unpublished notebook from the Properties panel:
You can reuse your notebook sessions conveniently now without having to start ne
![Screenshot of notebook-manage-sessions](./media/apache-spark-development-using-notebooks/synapse-notebook-manage-sessions.png)
-In the **Active sessions** list you can see the session information and the corresponding notebook that is currently attached to the session. You can operate Detach with notebook, Stop the session, and View in monitoring from here. Moreover, you can easily connect your selected notebook to an active session in the list started from another notebook, the session will be detached from the previous notebook (if it's not idle) then attach to the current one.
+In the **Active sessions** list, you can see the session information and the corresponding notebook that is currently attached to the session. From here, you can perform **Detach with notebook**, **Stop the session**, and **View in monitoring**. Moreover, you can easily connect your selected notebook to an active session in the list that was started from another notebook; the session is detached from the previous notebook (if it's not idle) and then attached to the current one.
![Screenshot of notebook-sessions-list](./media/apache-spark-development-using-notebooks/synapse-notebook-sessions-list.png)
To parameterize your notebook, select the ellipses (...) to access the **more co
-Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine will add a new cell beneath the parameters cell with input parameters in order to overwrite the default values.
+Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine adds a new cell beneath the parameters cell with input parameters in order to overwrite the default values.
### Assign parameter values from a pipeline
synapse-analytics Apache Spark Secure Credentials With Tokenlibrary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md
While Azure Synapse Analytics supports a variety of linked service connections (
- Azure SQL Data Warehouse (Dedicated and Serverless)
- Azure Storage
- #### mssparkutils.credenials.getToken()
+ #### mssparkutils.credentials.getToken()
When you need an OAuth bearer token to access services directly, you can use the `getToken` method. The following resources are supported:

| Service Name | String literal to be used in API call |
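As a minimal PySpark sketch (the `'Storage'` audience string is an illustrative value; use the literal from the table for your target service):

```python
from notebookutils import mssparkutils

# Request an OAuth bearer token for the chosen audience.
token = mssparkutils.credentials.getToken('Storage')
```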
synapse-analytics Develop Storage Files Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md
To query a file located in Azure Storage, your serverless SQL pool endpoint need
To grant the ability to manage credentials:

-- To allow a user to create or drop a server-level credential, an administrator must grant the `ALTER ANY CREDENTIAL` permission to the user. For example:
+- To allow a user to create or drop a server-level credential, an administrator must grant the `ALTER ANY CREDENTIAL` permission to its login in the master database. For example:
```sql
- GRANT ALTER ANY CREDENTIAL TO [user_name];
+ GRANT ALTER ANY CREDENTIAL TO [login_name];
```

-- To allow a user to create or drop a database scoped credential, an administrator must grant the `CONTROL` permission on the database to the user. For example:
+- To allow a user to create or drop a database scoped credential, an administrator must grant the `CONTROL` permission on the database to the database user in the user database. For example:
```sql
GRANT CONTROL ON DATABASE::[database_name] TO [user_name];
To grant the ability to manage credentials:
Database users who access external storage must have permission to use credentials. To use the credential, a user must have the `REFERENCES` permission on a specific credential.
-To grant the `REFERENCES` permission on a server-level credential for a user, use the following T-SQL query:
+To grant the `REFERENCES` permission on a server-level credential for a login, use the following T-SQL query in the master database:
```sql
-GRANT REFERENCES ON CREDENTIAL::[server-level_credential] TO [user];
+GRANT REFERENCES ON CREDENTIAL::[server-level_credential] TO [login_name];
```
-To grant a `REFERENCES` permission on a database-scoped credential for a user, use the following T-SQL query:
+To grant a `REFERENCES` permission on a database-scoped credential for a database user, use the following T-SQL query in the user database:
```sql
-GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[database-scoped_credential] TO [user];
+GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[database-scoped_credential] TO [user_name];
```

## Server-level credential
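As a hedged sketch of what such a credential can look like (created in the `master` database; the storage URL is illustrative):

```sql
-- A server-level credential named after the storage URL it protects.
CREATE CREDENTIAL [https://<storage_account>.dfs.core.windows.net/<container>]
WITH IDENTITY = 'Managed Identity';
```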
These articles help you learn how query different folder types, file types, and
- [Query Parquet files](query-parquet-files.md)
- [Create and use views](create-use-views.md)
- [Query JSON files](query-json-files.md)
-- [Query Parquet nested types](query-parquet-nested-types.md)
+- [Query Parquet nested types](query-parquet-nested-types.md)
update-center Assessment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/assessment-options.md
Title: Assessment options in update management center (preview).
-description: The article describes the assessment options available in Update management center (preview).
-
+ Title: Assessment options in Update Manager (preview).
+description: The article describes the assessment options available in Update Manager (preview).
+ Last updated 05/23/2023
-# Assessment options in update management center (preview)
+# Assessment options in Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article provides an overview of the assessment options available by update management center (preview).
+This article provides an overview of the assessment options available in Update Manager (preview).
-Update management center (preview) provides you the flexibility to assess the status of available updates and manage the process of installing required updates for your machines.
+Update Manager (preview) provides you with the flexibility to assess the status of available updates and manage the process of installing required updates for your machines.
## Periodic assessment
- Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by update management center (preview). We recommend that you enable this property on your machines as it allows update management center (preview) to fetch latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md).
+ Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager (preview). We recommend that you enable this property on your machines, as it allows Update Manager (preview) to fetch the latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using the update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-single-vm), or enable it at scale by using [Policy](periodic-assessment-at-scale.md).
:::image type="content" source="media/updates-maintenance/periodic-assessment-inline.png" alt-text="Screenshot showing periodic assessment option." lightbox="media/updates-maintenance/periodic-assessment-expanded.png":::

## Check for updates now/On-demand assessment
-Update management center (preview) allows you to check for latest updates on your machines at any time, on-demand. You can view the latest update status and act accordingly. Go to **Updates** blade on any VM and select **Check for updates** or select multiple machines from update management center (preview) and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md).
+Update Manager (preview) allows you to check for the latest updates on your machines at any time, on demand. You can view the latest update status and act accordingly. Go to the **Updates** blade on any VM and select **Check for updates**, or select multiple machines from Update Manager (preview) and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md).
## Update assessment scan

You can initiate a software updates compliance scan on a machine to get a current list of operating system updates available.
In the **Scheduling** section, you can either **create a maintenance configurati
## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot Update Manager (preview)](troubleshoot.md).
update-center Configure Wu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/configure-wu-agent.md
Title: Configure Windows Update settings in Update management center (Preview)
-description: This article tells how to configure Windows update settings to work with Update management center (Preview).
-
+ Title: Configure Windows Update settings in Azure Update Manager (preview)
+description: This article tells how to configure Windows update settings to work with Azure Update Manager (preview).
+ Last updated 05/02/2023
-# Configure Windows update settings for update management center (preview)
+# Configure Windows update settings for Azure Update Manager (preview)
-Update management center (Preview) relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. Many of these settings can be managed by:
+Azure Update Manager (preview) relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. Many of these settings can be managed by:
- Local Group Policy Editor - Group Policy - PowerShell - Directly editing the Registry
-The Update management center (preview) respects many of the settings specified to control the Windows Update client. If you use settings to enable non-Windows updates, the Update management center (preview) will also manage those updates. If you want to enable downloading of updates before an update deployment occurs, update deployment can be faster, more efficient, and less likely to exceed the maintenance window.
+Update Manager (preview) respects many of the settings specified to control the Windows Update client. If you use settings to enable non-Windows updates, Update Manager (preview) also manages those updates. If you enable downloading of updates before an update deployment occurs, update deployment can be faster, more efficient, and less likely to exceed the maintenance window.
For additional recommendations on setting up WSUS in your Azure subscription and keeping your Windows virtual machines up to date, review [Plan your deployment for updating Windows virtual machines in Azure using WSUS](/azure/architecture/example-scenario/wsus).

## Pre-download updates
-To configure the automatic downloading of updates without automatically installing them, you can use Group Policy to [configure the Automatic Updates setting](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates) to 3. This setting enables downloads of the required updates in the background, and notifies you that the updates are ready to install. In this way, update management center (Preview) remains in control of schedules, but allows downloading of updates outside the maintenance window. This behavior prevents `Maintenance window exceeded` errors in update management center (preview).
+To configure the automatic downloading of updates without automatically installing them, you can use Group Policy to [configure the Automatic Updates setting](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates) to 3. This setting enables downloads of the required updates in the background, and notifies you that the updates are ready to install. In this way, Update Manager (preview) remains in control of schedules, but allows downloading of updates outside the maintenance window. This behavior prevents `Maintenance window exceeded` errors in Update Manager (preview).
You can enable this setting in PowerShell:
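One possible sketch, assuming the Automatic Updates policy registry key (value 3 corresponds to auto download and notify to install):

```powershell
# Set Automatic Updates to option 3 (auto download, notify to install)
# via the Windows Update policy registry key.
$au = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'
if (-not (Test-Path $au)) { New-Item -Path $au -Force | Out-Null }
Set-ItemProperty -Path $au -Name 'AUOptions' -Value 3 -Type DWord
```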
By default, the Windows Update client is configured to provide updates only for
Use one of the following options to perform the settings change at scale:

-- For Servers configured to patch on a schedule from Update management center (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than server 2016, Run the following PowerShell script on the server you want to change.
+- For servers configured to patch on a schedule from Update Manager (preview) (that is, with the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running an operating system earlier than Server 2016, run the following PowerShell script on the server you want to change.
```powershell
$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
Use one of the following options to perform the settings change at scale:
$ServiceManager.AddService2($ServiceId,7,"")
```

-- For servers running Server 2016 or later which are not using Update management center scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
+- For servers running Server 2016 or later that aren't using Update Manager (preview) scheduled patching (that is, with the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated), you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
## Make WSUS configuration settings
-Update management center (Preview) supports WSUS settings. You can specify sources for scanning and downloading updates using instructions in [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings#specify-intranet-microsoft-update-service-location). By default, the Windows Update client is configured to download updates from Windows Update. When you specify a WSUS server as a source for your machines, the update deployment fails, if the updates aren't approved in WSUS.
+Update Manager (preview) supports WSUS settings. You can specify sources for scanning and downloading updates using instructions in [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings#specify-intranet-microsoft-update-service-location). By default, the Windows Update client is configured to download updates from Windows Update. When you specify a WSUS server as a source for your machines, the update deployment fails if the updates aren't approved in WSUS.
To restrict machines to the internal update service, see [do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#do-not-connect-to-any-windows-update-internet-locations).
update-center Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/deploy-updates.md
Title: Deploy updates and track results in update management center (preview).
-description: The article details how to use update management center (preview) in the Azure portal to deploy updates and view results for supported machines.
-
+ Title: Deploy updates and track results in Azure Update Manager (preview).
+description: The article details how to use Azure Update Manager (preview) in the Azure portal to deploy updates and view results for supported machines.
+ Last updated 08/08/2023
-# Deploy updates now and track results with update management center (preview)
+# Deploy updates now and track results with Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-The article describes how to perform an on-demand update on a single VM or multiple VMs using update management center (preview).
+The article describes how to perform an on-demand update on a single VM or multiple VMs using Update Manager (preview).
See the following sections for detailed information:

- [Install updates on a single VM](#install-updates-on-single-vm)
See the following sections for detailed information:
## Supported regions
-Update management center (preview) is available in all [Azure public regions](support-matrix.md#supported-regions).
+Update Manager (preview) is available in all [Azure public regions](support-matrix.md#supported-regions).
+
+## Configure reboot settings
+
+The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment.
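For example, a hedged sketch to inspect two of those values (the key and value names are taken from the linked Windows documentation; the key may not exist if no policy is set):

```powershell
# Inspect reboot-related Automatic Updates policy values, if present.
$au = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'
if (Test-Path $au) {
    Get-ItemProperty -Path $au |
        Select-Object AUOptions, NoAutoRebootWithLoggedOnUsers
}
```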
## Install updates on single VM

>[!NOTE]
-> You can install the updates from the Overview or Machines blade in update management center (preview) page or from the selected VM.
+> You can install the updates from the Overview or Machines blade in the Update Manager (preview) page, or from the selected VM.
# [From Overview blade](#tab/install-single-overview)
To install one-time updates on a single VM, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (preview)**, **Overview**, choose your **Subscription** and select **One-time update** to install updates.
+1. In **Update Manager (preview)**, **Overview**, choose your **Subscription** and select **One-time update** to install updates.
:::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png":::
To install one time updates on a single VM, follow these steps:
- In **Select resources**, choose the machine and select **Add**.
-1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. If your deployment is meant to apply only for a select set of updates, its necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine.
+1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. If your deployment is meant to apply only for a select set of updates, it's necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine.
> [!NOTE]
- > - Selected Updates shows a preview of OS updates which may be installed based on the last OS update assessment information available. If the OS update assessment information in update center management (preview) is obsolete, the actual updates installed would vary. Especially if you have chosen to install a specific update category, where the OS updates applicable may vary as new packages or KB Ids may be available for the category.
- > - Update management center (preview) doesn't support driver updates.
+ > - Selected Updates shows a preview of OS updates that might be installed, based on the last OS update assessment information available. If the OS update assessment information in Update Manager (preview) is obsolete, the actual updates installed can vary, especially if you've chosen to install a specific update category, where the applicable OS updates can vary as new packages or KB IDs become available for the category.
+ > - Update Manager (preview) doesn't support driver updates.
- Select **+Include update classification**. In **Include update classification**, select the appropriate classification(s) that must be installed on your machines.

:::image type="content" source="./media/deploy-updates/include-update-classification-inline.png" alt-text="Screenshot on including update classification." lightbox="./media/deploy-updates/include-update-classification-expanded.png":::
- - Select **Include KB ID/package** to include in the updates. Enter a comma-separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, `3103696, 3134815`. For Windows, you can refer to the [MSRC link](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base released. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, `kernel*, glibc, libc=1.0.1`. Based on the options specified, update management center (preview) shows a preview of OS updates under the **Selected Updates** section.
+ - Select **Include KB ID/package** to include in the updates. Enter a comma separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, `3103696, 3134815`. For Windows, you can refer to the [MSRC link](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base released. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, `kernel*, glibc, libc=1.0.1`. Based on the options specified, Update Manager (preview) shows a preview of OS updates under the **Selected Updates** section.
- To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend checking this option because updates that are not displayed here might be installed, as newer updates might be available.
- - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date** and in the Include by maximum patch publish date , choose the date and select **Add** and **Next**.
+ - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date**, choose the date, and select **Add** and **Next**.
:::image type="content" source="./media/deploy-updates/include-patch-publish-date-inline.png" alt-text="Screenshot on including patch publish date." lightbox="./media/deploy-updates/include-patch-publish-date-expanded.png":::
To install one-time updates on a single VM, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Machine**, choose your **Subscription**, choose your machine and select **One-time update** to install updates.
+1. In **Update Manager (Preview)**, **Machine**, choose your **Subscription**, choose your machine and select **One-time update** to install updates.
1. Select **Install now** to proceed with installing updates.
To install one-time updates on a single VM, follow these steps:
1. Select your virtual machine and the **virtual machines | Updates** page opens.
1. Under **Operations**, select **Updates**.
-1. In **Updates**, select **Go to Updates using Update Center**.
+1. In **Updates**, select **Go to Updates using Azure Update Manager**.
1. In **Updates (Preview)**, select **One-time update** to install the updates.
1. In the **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next**, and follow the procedure from step 4 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm).
You can schedule updates
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Overview**, choose your **Subscription**, select **One-time update**, and **Install now** to install updates.
+1. In **Update Manager (Preview)**, **Overview**, choose your **Subscription**, select **One-time update**, and **Install now** to install updates.
:::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png":::
A notification appears to inform you the activity has started and another is cre
You can browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions. For more information, see [Update deployment history](manage-multiple-machines.md#update-deployment-history).
-After your scheduled deployment starts, you can see it's status on the **History** tab. It displays the total number of deployments including the successful and failed deployments.
+After your scheduled deployment starts, you can see its status on the **History** tab. It displays the total number of deployments including the successful and failed deployments.
:::image type="content" source="./media/deploy-updates/updates-history-inline.png" alt-text="Screenshot showing updates history." lightbox="./media/deploy-updates/updates-history-expanded.png":::

> [!NOTE]
-> The **Windows update history** currently doesn't show the updates summary that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update management center (preview)** > **Manage** > **History**.
+> The **Windows update history** currently doesn't show the summary of updates that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update Manager (preview)** > **Manage** > **History**.
A list of the deployments created is shown in the update deployment grid, including relevant information about each deployment. Every update deployment has a unique GUID, represented as **Operation ID**, which is listed along with **Status**, **Updates Installed**, and **Time** details. You can filter the results listed in the grid.
Select any one of the update deployments from the list to open the **Update depl
## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot Update Manager (preview)](troubleshoot.md).
update-center Dynamic Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/dynamic-scope-overview.md
Title: An overview of dynamic scoping (preview) description: This article provides information about dynamic scoping (preview), its purpose and advantages.-+ Last updated 07/05/2023
The criteria will be evaluated at the scheduled run time, which will be the fina
> [!NOTE]
> You can associate one dynamic scope with one schedule.
-## Prerequisites
[!INCLUDE [dynamic-scope-prerequisites.md](includes/dynamic-scope-prerequisites.md)]
update-center Manage Arc Enabled Servers Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-arc-enabled-servers-programmatically.md
Title: Programmatically manage updates for Azure Arc-enabled servers in Update management center (preview)
-description: This article tells how to use Update management center (preview) using REST API with Azure Arc-enabled servers.
-
+ Title: Programmatically manage updates for Azure Arc-enabled servers in Azure Update Manager (preview)
+description: This article tells how to use Azure Update Manager (preview) using REST API with Azure Arc-enabled servers.
+ Last updated 06/15/2023
# How to programmatically manage updates for Azure Arc-enabled servers
-This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with update management (preview) in Azure. If you're new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md).
+This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with Azure Update Manager (preview) in Azure. If you're new to Azure Update Manager (preview) and you want to learn more, see [overview of Update Manager (preview)](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md).
-Update management center (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure) for access programmatically. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure).
+Update Manager (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure) for access programmatically. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure).
-Support for Azure REST API to manage Azure Arc-enabled servers is available through the update management center (preview) virtual machine extension.
+Support for Azure REST API to manage Azure Arc-enabled servers is available through the Update Manager (preview) virtual machine extension.
## Update assessment
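As a hedged sketch (the resource path and API version are illustrative; substitute your own values), an on-demand assessment can be triggered with Azure PowerShell:

```powershell
# Trigger an on-demand patch assessment on an Arc-enabled server.
Invoke-AzRestMethod `
    -Path "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.HybridCompute/machines/<machineName>/assessPatches?api-version=2020-08-15-preview" `
    -Method POST
```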
DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configur
## Next steps
-* To view update assessment and deployment logs generated by Update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot Update Manager (preview)](troubleshoot.md).
update-center Manage Dynamic Scoping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-dynamic-scoping.md
Title: Manage various operations of dynamic scoping (preview). description: This article describes how to manage dynamic scoping (preview) operations -+ Last updated 07/05/2023
This article describes how to view, add, edit and delete a dynamic scope (preview).
-## Prerequisites
- [!INCLUDE [dynamic-scope-prerequisites.md](includes/dynamic-scope-prerequisites.md)]

## Add a Dynamic scope (preview)

To add a Dynamic scope to an existing configuration, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to add a Dynamic scope.
1. In the given maintenance configuration page, select **Dynamic scopes** > **Add a dynamic scope**.
To add a Dynamic scope to an existing configuration, follow these steps:
To view the list of Dynamic scopes (preview) associated with a given maintenance configuration, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Update management center (preview)**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Update Manager (preview)**.
1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to view the Dynamic scope.
1. In the given maintenance configuration page, select **Dynamic scopes** to view all the Dynamic scopes that are associated with the maintenance configuration.

## Edit a Dynamic scope (preview)
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to edit an existing Dynamic scope.
1. In the given maintenance configuration page, select **Dynamic scopes** and select the scope you want to edit. Under the **Actions** column, select the edit icon.
To view the list of Dynamic scopes (preview) associated to a given maintenance c
## Delete a Dynamic scope (preview)
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**.
1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to delete an existing Dynamic scope.
1. In the given maintenance configuration page, select **Dynamic scopes** and select the scope you want to delete. Select **Remove dynamic scope** and then select **Ok**.

## View patch history of a Dynamic scope (preview)
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **History** > **Browse maintenance configurations** > **Maintenance configurations** to view the patch history of a dynamic scope.
Obtaining consent to apply updates is an important step in the workflow of dynam
#### [From Update Settings](#tab/us)
-1. In **Update management center**, go to **Overview** > **Update settings**.
+1. In **Update Manager**, go to **Overview** > **Update settings**.
1. In **Change Update settings**, select **+Add machine** to add the machines.
1. In the list of machines sorted as per the operating system, go to the **Patch orchestration** option and select **Azure-orchestrated with user managed schedules (Preview)** to confirm that:
Obtaining consent to apply updates is an important step in the workflow of dynam
* [Deploy updates now (on-demand) for single machine](deploy-updates.md)
* [Schedule recurring updates](scheduled-patching.md)
* [Manage update settings via Portal](manage-update-settings.md)
-* [Manage multiple machines using update management center](manage-multiple-machines.md)
+* [Manage multiple machines using Update Manager](manage-multiple-machines.md)
update-center Manage Multiple Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-multiple-machines.md
Title: Manage multiple machines in update management center (preview)
-description: The article details how to use Update management center (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal.
-
+ Title: Manage multiple machines in Azure Update Manager (preview)
+description: The article details how to use Azure Update Manager (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal.
+ Last updated 05/02/2023
-# Manage multiple machines with update management center (Preview)
+# Manage multiple machines with Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules (Preview)**. If you fail to update the patch orchestration, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
-This article describes the various features that update management center (Preview) offers to manage the system updates on your machines. Using the update management center (preview), you can:
+This article describes the various features that Update Manager (preview) offers to manage the system updates on your machines. Using Update Manager (preview), you can:
- Quickly assess the status of available operating system updates.
- Deploy updates.
This article describes the various features that update management center (Previ
Instead of performing these actions from a selected Azure VM or Arc-enabled server, you can manage all your machines in the Azure subscription.
-## View update management center (Preview) status
+## View Update Manager (preview) status
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. To view update assessment across all machines, including Azure Arc-enabled servers navigate to **Update management center(Preview)**.
+1. To view update assessment across all machines, including Azure Arc-enabled servers, navigate to **Update Manager (preview)**.
- :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot of update management center overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png":::
+ :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot of update manager overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png":::
In the **Overview** page, the summary tiles show the following status:
Instead of performing these actions from a selected Azure VM or Arc-enabled serv
- **Update status of machines**: shows the update status information for assessed machines that had applicable or needed updates. You can filter the results based on classification types. By default, all [classifications](../automation/update-management/overview.md#update-classifications) are selected, and the tile is updated as per the classification selection.
- The graph provides a snapshot for all your machines in your subscription, regardless of whether you have used update management center (preview) for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days.
+ The graph provides a snapshot for all your machines in your subscription, regardless of whether you have used Update Manager (preview) for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days.
From the assessment data available, machines are classified into the following categories:
Instead of performing these actions from a selected Azure VM or Arc-enabled serv
## Summary of machine status
-Update management center (preview) in Azure enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to update management center (preview). The section shows how you can filter information to understand the update status of your machine resources, and for multiple machines, initiate an update assessment, update deployment, and manage their update settings.
+Update Manager (preview) in Azure enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). The section shows how you can filter information to understand the update status of your machine resources, and for multiple machines, initiate an update assessment, update deployment, and manage their update settings.
- In the update management center (preview) page, select **Machines** from the left menu.
+ In the Update Manager (preview) page, select **Machines** from the left menu.
- :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot of update management center(preview) Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png":::
+ :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot of Update Manager(preview) Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png":::
On the page, the table lists all the machines in the specified subscription, and for each machine it helps you understand the following details that show up based on the latest assessment.

- **Update status**: the total number of updates available identified as applicable to the machine's OS.
For machines that haven't had a compliance assessment scan for the first time, y
:::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-complete-banner-inline.png" alt-text="Screenshot of assessment banner on Manage Machines page." lightbox="./media/manage-multiple-machines/update-center-assess-now-complete-banner-expanded.png":::
-Select a machine from the list to open update management center (Preview) scoped to that machine. Here, you can view its detailed assessment status, update history, configure its patch orchestration options, and initiate an update deployment.
+Select a machine from the list to open Update Manager (preview) scoped to that machine. Here, you can view its detailed assessment status, update history, configure its patch orchestration options, and initiate an update deployment.
### Deploy the updates
You can create a recurring update deployment for your machines. Select your mach
## Update deployment history
-Update management center (preview) enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update management center (preview). You can filter information to understand the update assessment and deployment history for multiple machines. In Update management center (preview), select **History** from the left menu.
+Update Manager (preview) enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). You can filter information to understand the update assessment and deployment history for multiple machines. In Update Manager (preview), select **History** from the left menu.
## Update deployment history by machines
When you select any one maintenance run ID record, you can view an expanded stat
The update assessment and deployment data are available for querying in Azure Resource Graph. You can apply this data to scenarios that include security compliance, security operations, and troubleshooting. Select **Go to resource graph** to go to the Azure Resource Graph Explorer. It enables running Resource Graph queries directly in the Azure portal. Resource Graph supports Azure CLI, Azure PowerShell, Azure SDK for Python, and more. For more information, see [First query with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md).
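For example, a hedged sketch with the `Az.ResourceGraph` PowerShell module (the table and resource type names are assumptions based on the patch assessment data described above):

```powershell
# Query Update Manager patch assessment data in Azure Resource Graph.
Search-AzGraph -Query @"
patchassessmentresources
| where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults'
| project id, properties
| take 10
"@
```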
-When the Resource Graph Explorer opens, it is automatically populated with the same query used to generate the results presented in the table on the **History** page in Update management center (preview). Ensure that you review the [query Update logs](query-logs.md) article to learn about the log records and their properties, and the sample queries included.
+When the Resource Graph Explorer opens, it is automatically populated with the same query used to generate the results presented in the table on the **History** page in Update Manager (preview). Ensure that you review the [query Update logs](query-logs.md) article to learn about the log records and their properties, and the sample queries included.
## Next steps * To set up and manage recurring deployment schedules, see [Schedule recurring updates](scheduled-patching.md)
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
update-center Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md
Title: Manage update configuration settings in Update management center (preview)
-description: The article describes how to manage the update settings for your Windows and Linux machines managed by Update management center (preview).
-
+ Title: Manage update configuration settings in Azure Update Manager (preview)
+description: The article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview).
+ Last updated 05/30/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-The article describes how to configure update settings from Update management center (preview) in Azure, to control the update settings on your Azure VMs and Arc-enabled servers for one or more machines.
+This article describes how to use Azure Update Manager (preview) to configure update settings on your Azure VMs and Arc-enabled servers, for one or more machines.
## Configure settings on single VM
The article describes how to configure update settings from Update management ce
To configure update settings on a single VM, follow these steps: >[!NOTE]
-> You can schedule updates from the Overview blade or Machines blade in update management center (preview) page or from the selected VM.
+> You can schedule updates from the **Overview** or **Machines** blade on the Update Manager (preview) page, or from the selected VM.
# [From Overview blade](#tab/manage-single-overview) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center**, select **Overview**, select your **Subscription**, and select **Update settings**.
+1. In **Update Manager**, select **Overview**, select your **Subscription**, and select **Update settings**.
1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings. 1. In **Select resources**, select the machine and select **Add**. 1. In the **Change update settings** page, you'll see the machine classified by operating system, with the list of update settings that you can select and apply.
To configure update settings on your machines on a single VM, follow these steps
 - **Periodic assessment** - Periodic assessment is set to run every 24 hours. You can either enable or disable this setting.
- - **Hot patch** - You can enable [hot patching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition Virtual Machines (VMs). Hot patching is a new way to install updates on supported *Windows Server Azure Edition* virtual machines that doesn't require a reboot after installation. You can use update management center (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable or reset this setting.
+ - **Hot patch** - You can enable [hot patching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition Virtual Machines (VMs). Hot patching is a new way to install updates on supported *Windows Server Azure Edition* virtual machines that doesn't require a reboot after installation. You can use Update Manager (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable or reset this setting.
- **Patch orchestration** option provides the following:
To configure update settings on your machines on a single VM, follow these steps
# [From Machines blade](#tab/manage-single-machines) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center**, select **Machines** > your **subscription**.
+1. In **Update Manager**, select **Machines** > your **subscription**.
1. Select the checkbox of your machine from the list and select **Update settings**. 1. Select **Update Settings** to proceed with the type of update for your machine. 1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings.
To configure update settings on your machines at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center**, select **Overview**, select your **Subscription** and select **Update settings**.
+1. In **Update Manager**, select **Overview**, select your **Subscription** and select **Update settings**.
1. In **Change update settings**, select the update settings that you want to change for your machines. Follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm). # [From Machines blade](#tab/manage-scale-machines) 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center**, select **Machines** > your **subscription**, and select the checkbox for all your machines from the list.
+1. In **Update Manager**, select **Machines** > your **subscription**, and select the checkbox for all your machines from the list.
1. Select **Update Settings** to proceed with the type of update for your machines. 1. In **Change update settings**, you can select the update settings that you want to change for your machine and follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm).
A notification appears to confirm that the update settings are successfully chan
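If you manage these settings across many machines from scripts, the following is a minimal sketch with Azure PowerShell; it assumes your Az.Compute version exposes the `-PatchMode` and `-AssessmentMode` parameters on `Set-AzVMOperatingSystem`, and the resource names are placeholders.

```powershell
# Sketch: set patch orchestration and periodic assessment on an existing Azure VM.
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"

# AutomaticByPlatform turns on platform-orchestrated patching; the same value
# for the assessment mode enables the periodic (every 24 hours) assessment.
Set-AzVMOperatingSystem -VM $vm -PatchMode "AutomaticByPlatform" -AssessmentMode "AutomaticByPlatform"

Update-AzVM -VM $vm -ResourceGroupName "myResourceGroup"
```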
## Next steps * [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot Update Manager (preview)](troubleshoot.md).
update-center Manage Updates Customized Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-updates-customized-images.md
Title: Overview of customized images in Update management center (preview).
+ Title: Overview of customized images in Azure Update Manager (preview)
description: The article describes customized images, how to register and validate them for public preview, and their limitations.-+ Last updated 05/02/2023
This article describes the customized image support, how to enable the subscript
## Asynchronous check to validate customized image support
-If you're using the Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update management Center (preview) operations such as Check for updates, One-time update, Schedule updates, or Periodic assessment to validate if the virtual machines are supported for guest patching and then initiate patching if the VMs are supported.
+If you're using the Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update Manager (preview) operations such as Check for updates, One-time update, Schedule updates, or Periodic assessment to validate whether the virtual machines are supported for guest patching, and then initiate patching on the supported VMs.
-Unlike marketplace images where support is validated even before Update management center operation is triggered. Here, there are no pre-existing validations in place and the Update management center operations are triggered and only their success or failure determines support.
+Unlike marketplace images, where support is validated even before an Update Manager operation is triggered, custom images have no pre-existing validations in place; the Update Manager operations are triggered, and only their success or failure determines support.
For instance, an assessment call attempts to fetch the latest patch available from the image's OS family to check support. It stores this support-related data in an Azure Resource Graph (ARG) table, which you can query to see the support status for your Azure Compute Gallery image.
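To inspect what that check recorded for one of your machines, you can query the stored data directly; the sketch below assumes the `patchassessmentresources` Resource Graph table described in the query logs documentation, and `myCustomImageVM` is a hypothetical name.

```powershell
# Sketch: view the latest assessment outcome for a VM built from a custom image.
$query = @"
patchassessmentresources
| where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults'
| where id contains 'myCustomImageVM'   // hypothetical VM name
| project id, status = properties.status, lastAssessed = properties.lastModifiedDateTime
"@
Search-AzGraph -Query $query
```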
update-center Manage Vms Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-vms-programmatically.md
Title: Programmatically manage updates for Azure VMs
-description: This article tells how to use update management center (preview) in Azure using REST API with Azure virtual machines.
-
+description: This article describes how to use the Azure REST API to manage updates for Azure virtual machines with Azure Update Manager (preview).
+ Last updated 06/15/2023
# How to programmatically manage updates for Azure VMs
-This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with update management center (preview) in Azure. If you're new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md).
+This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with Azure Update Manager (preview). If you're new to Update Manager (preview) and you want to learn more, see [overview of Azure Update Manager (preview)](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md).
-Update management center (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure/) for access programmatically. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure/) and [Azure CLI](/cli/azure/).
+Azure Update Manager (preview) supports programmatic access through the [Azure REST API](/rest/api/azure/). Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure/) and [Azure CLI](/cli/azure/).
-Support for Azure REST API to manage Azure VMs is available through the update management center (preview) virtual machine extension.
+Support for Azure REST API to manage Azure VMs is available through the Update Manager (preview) virtual machine extension.
## Update assessment
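For example, the following sketch triggers an on-demand assessment with `Invoke-AzRestMethod`; `assessPatches` is the documented Microsoft.Compute action for Azure VMs, but treat the API version as one to confirm against the current REST reference.

```powershell
# Sketch: trigger an on-demand patch assessment on an Azure VM via the REST API.
$vmId = "/subscriptions/<subscriptionId>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"

# POST <vmId>/assessPatches starts the assessment; poll the returned operation for completion.
Invoke-AzRestMethod -Path "$vmId/assessPatches?api-version=2023-03-01" -Method POST
```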
DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configur
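A minimal sketch of that removal call follows; the assignment name is hypothetical, and the Maintenance API version shown should be checked against the current REST reference.

```powershell
# Sketch: remove a machine from a schedule by deleting its configuration assignment.
$resourceId = "/subscriptions/<subscriptionId>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"
$assignmentName = "myConfigAssignment"   # hypothetical assignment name

Invoke-AzRestMethod -Path "$resourceId/providers/Microsoft.Maintenance/configurationAssignments/$assignmentName?api-version=2023-04-01" -Method DELETE
```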
## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot Update Manager (preview)](troubleshoot.md).
update-center Manage Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-workbooks.md
Title: Create reports using workbooks in update management center (preview)..
+ Title: Create reports using workbooks in Azure Update Manager (preview)
description: This article describes how to create and manage workbooks for VM insights.-+ Last updated 05/23/2023
-# Create reports in update management center (preview)
+# Create reports in Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
This article describes how to create a workbook and how to edit a workbook to cr
## Create a workbook
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
-1. Under **Monitoring**, selectΓÇ»**Workbooks** to view the Update management center (Preview)| Workbooks|Gallery.
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
+1. Under **Monitoring**, select **Workbooks** to view the Update Manager (preview) | Workbooks | Gallery page.
1. Select the **Quick start** tile > **Empty**, or select **+New** to create a workbook. 1. Select **+Add** to select any [elements](../azure-monitor/visualize/workbooks-create-workbook.md#create-a-new-azure-workbook) to add to the workbook.
This article describes how to create a workbook and how to edit a workbook to cr
1. Select **Done Editing**. ## Edit a workbook
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
-1. Under **Monitoring**, selectΓÇ»**Workbooks** to view the Update management center (Preview)| Workbooks|Gallery.
-1. Select **Update management center** tile > **Overview** to view the Update management center (Preview)|Workbooks|Overview page.
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
+1. Under **Monitoring**, select **Workbooks** to view the Update Manager (preview) | Workbooks | Gallery page.
+1. Select the **Update Manager** tile > **Overview** to view the Update Manager (preview) | Workbooks | Overview page.
1. Select your subscription, and select **Edit** to enable the edit mode for all four options. - Machines overall status & configuration
This article describes how to create a workbook and how to edit a workbook to cr
* [Deploy updates now (on-demand) for single machine](deploy-updates.md) * [Schedule recurring updates](scheduled-patching.md) * [Manage update settings via Portal](manage-update-settings.md)
-* [Manage multiple machines using update management center](manage-multiple-machines.md)
+* [Manage multiple machines using Update Manager](manage-multiple-machines.md)
update-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md
Title: Update management center (preview) overview
-description: The article tells what update management center (preview) in Azure is and the system updates for your Windows and Linux machines in Azure, on-premises, and other cloud environments.
-
+ Title: Azure Update Manager (preview) overview
+description: This article describes Azure Update Manager (preview) and how it manages system updates for your Windows and Linux machines in Azure, on-premises, and in other cloud environments.
+ Last updated 07/05/2023
-# About Update management center (preview)
+# About Azure Update Manager (preview)
> [!Important]
-> - [Automation Update management](../automation/update-management/overview.md) relies on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and wonΓÇÖt be supported after **August 31, 2024**. Update management center (Preview) is the v2 version of Automation Update management and the future of Update management in Azure. UMC is a native service in Azure and does not rely on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../azure-monitor/agents/agents-overview.md).
-> - Guidance for migrating from Automation Update management to Update management center will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrate to Azure Monitoring agent until migration guidance is provided for Update management or else Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to UMC.
+> - [Automation Update management](../automation/update-management/overview.md) relies on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (also known as the MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. Update Manager (preview) is the v2 version of Automation Update management and the future of update management in Azure. Update Manager (preview) is a native service in Azure and does not rely on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or the [Azure Monitor agent](../azure-monitor/agents/agents-overview.md).
+> - Guidance for migrating from Automation Update management to Update Manager (preview) will be provided to customers once the latter is generally available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrating to the Azure Monitor agent until migration guidance is provided, or else Automation Update management will not work. Also, the Log Analytics agent won't be deprecated before all Automation Update management customers are moved to Update Manager.
-Update management center (preview) is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. In addition, you can use the Update management center (preview) to make real-time updates or schedule them within a defined maintenance window.
+Update Manager (preview) is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. You can also use Update Manager (preview) to apply updates in real time or schedule them within a defined maintenance window.
-You can use the update management center (preview) in Azure to:
+You can use Update Manager (preview) in Azure to:
- Oversee update compliance for your entire fleet of machines in Azure, on-premises, and other cloud environments. - Instantly deploy critical updates to help secure your machines.
You can use the update management center (preview) in Azure to:
We also offer other capabilities to help you manage updates for your Azure Virtual Machines (VM) that you should consider as part of your overall update management strategy. Review the Azure VM [Update options](../virtual-machines/updates-maintenance-overview.md) to learn more about the options available.
-Before you enable your machines for update management center (preview), make sure that you understand the information in the following sections.
+Before you enable your machines for Update Manager (preview), make sure that you understand the information in the following sections.
> [!IMPORTANT]
-> - Update management center (preview) doesnΓÇÖt store any customer data.
-> - Update management center (preview) can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA).
-> - While update management center is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - Update Manager (preview) doesn't store any customer data.
+> - Update Manager (preview) can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA).
+> - While Update Manager is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Key benefits
-Update management center (preview) has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update management center (preview) offers many new features and provides enhanced functionality over the original version available with Azure Automation and some of those benefits are listed below:
+Update Manager (preview) has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update Manager (preview) offers many new features and provides enhanced functionality over the original version available with Azure Automation and some of those benefits are listed below:
- Provides native experience with zero on-boarding. - Built as native functionality on Azure Compute and Azure Arc for Servers platform for ease of use.
Update management center (preview) has been redesigned and doesn't depend on Azu
 - Global availability in all Azure Compute and Azure Arc regions. - Works with Azure roles and identity. - Granular access control at the per-resource level instead of access control at the Automation account and Log Analytics workspace level.
- - Update management center now as Azure Resource Manager based operations. It allows RBAC and roles based of ARM in Azure.
+ - Azure Update Manager now offers Azure Resource Manager (ARM)-based operations, which enable role-based access control (RBAC) through ARM roles in Azure.
- Enhanced flexibility - Ability to take immediate action, either by installing updates right away or scheduling them for a later date. - Check updates automatically or on demand. - Helps secure machines with new ways of patching such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hotpatching](../automanage/automanage-hotpatch.md) or custom maintenance schedules. - Sync patch cycles in relation to Patch Tuesday, the unofficial term for Microsoft's scheduled security fix release on every second Tuesday of each month.
-The following diagram illustrates how update management center (preview) assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux.
+The following diagram illustrates how Update Manager (preview) assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux.
-![Update center workflow](./media/overview/update-management-center-overview.png)
+![Update Manager workflow](./media/overview/update-management-center-overview.png)
-To support management of your Azure VM or non-Azure machine, update management center (preview) relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any update management center operations such as **check for updates**, **install one time update**, **periodic assessment** on your machine. The extension supports deployment to Azure VMs or Arc-enabled servers using the extension framework. The update management center (preview) extension is installed and managed using the following:
+To support management of your Azure VM or non-Azure machine, Update Manager (preview) relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any Update Manager (preview) operation, such as **check for updates**, **install one-time update**, or **periodic assessment**, on your machine. The extension supports deployment to Azure VMs or Arc-enabled servers using the extension framework. The Update Manager (preview) extension is installed and managed using the following:
- [Azure virtual machine Windows agent](../virtual-machines/extensions/agent-windows.md) or [Azure virtual machine Linux agent](../virtual-machines/extensions/agent-linux.md) for Azure VMs. - [Azure arc-enabled servers agent](../azure-arc/servers/agent-overview.md) for non-Azure Linux and Windows machines or physical servers.
- The extension agent installation and configuration are managed by the update management center (preview). There's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The update management center (preview) extension runs code locally on the machine to interact with the operating system, and it includes:
+ The extension agent installation and configuration are managed by Update Manager (preview). There's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The Update Manager (preview) extension runs code locally on the machine to interact with the operating system, and it includes:
 - Retrieving assessment information about the status of system updates, as reported by the Windows Update client or Linux package manager. - Initiating the download and installation of approved updates with the Windows Update client or Linux package manager.
-All assessment information and update installation results are reported to update management center (preview) from the extension and is available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results.
+All assessment information and update installation results are reported to Update Manager (preview) from the extension and are available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results.
-The machines assigned to update management center (preview) report how up to date they're based on what source they're configured to synchronize with. [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines can be configured to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or Microsoft Update which is by default, and Linux machines can be configured to report to a local or public YUM or APT package repository. If the Windows Update Agent is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft update, the results in update management center (preview) might differ from what Microsoft update shows. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository.
+The machines assigned to Update Manager (preview) report how up to date they are, based on the source they're configured to synchronize with. [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines can be configured to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or to Microsoft Update (the default), and Linux machines can be configured to report to a local or public YUM or APT package repository. If the Windows Update Agent is configured to report to WSUS, then depending on when WSUS last synchronized with Microsoft Update, the results in Update Manager (preview) might differ from what Microsoft Update shows. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository.
>[!NOTE]
-> You can manage your Azure VMs or Arc-enabled servers directly, or at-scale with update management center (preview).
+> You can manage your Azure VMs or Arc-enabled servers directly, or at-scale with Update Manager (preview).
## Prerequisites
-Along with the prerequisites listed below, see [support matrix](support-matrix.md) for update management center (preview).
+Along with the prerequisites listed below, see [support matrix](support-matrix.md) for Update Manager (preview).
### Role
Arc enabled server | [Azure Connected Machine Resource Administrator](../azure-a
### Permissions
-You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using the update management center (preview).
+You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using the Update Manager (preview).
**Actions** |**Permission** |**Scope** | | | |
You need the following permissions to create and manage update deployments. The
For more information, see the [list of supported operating systems and VM images](support-matrix.md#supported-operating-systems). > [!NOTE]
-> Currently, update management center (preview) has the following limitations regarding the operating system support:
+> Currently, Update Manager (preview) has the following limitations regarding the operating system support:
> - Marketplace images other than the [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images) are currently not supported.
-> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in update management center (preview).
+> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager (preview).
>
-> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) till the support is available in Update management center (preview). [Learn more](support-matrix.md#supported-operating-systems).
+> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) until the support is available in Update Manager (preview). [Learn more](support-matrix.md#supported-operating-systems).
## VM Extensions
To view the available extensions for a VM in the Azure portal, follow these step
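You can also list a machine's extensions from PowerShell; the following is a minimal sketch with Az.Compute, and because the update-related extension's exact name varies by OS, the listing is left unfiltered.

```powershell
# Sketch: list the extensions installed on a VM, including the update extension.
Get-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVM" |
    Select-Object Name, Publisher, ExtensionType, ProvisioningState
```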
### Network planning
-To prepare your network to support update management center (preview), you may need to configure some infrastructure components.
+To prepare your network to support Update Manager (preview), you may need to configure some infrastructure components.
For Windows machines, you must allow traffic to any endpoints required by the Windows Update agent. You can find an updated list of required endpoints in [Issues related to HTTP/Proxy](/windows/deployment/update/windows-update-troubleshooting#issues-related-to-httpproxy). If you have a local [WSUS](/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) deployment, you must also allow traffic to the server specified in your [WSUS key](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry).
For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../v
- [Deploy updates now (on-demand) for single machine](deploy-updates.md) - [Schedule recurring updates](scheduled-patching.md) - [Manage update settings via Portal](manage-update-settings.md)-- [Manage multiple machines using update management center](manage-multiple-machines.md)
+- [Manage multiple machines using Update Manager](manage-multiple-machines.md)
update-center Periodic Assessment At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/periodic-assessment-at-scale.md
Title: Enable periodic assessment using policy
-description: This article describes how to manage the update settings for your Windows and Linux machines managed by update management center (preview).
-
+description: This article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview).
+ Last updated 04/21/2022
# Automate assessment at scale using Policy to see latest update status
-This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, update management center (preview) fetches updates on your machine once every 24 hours.
+This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, Update Manager (preview) fetches updates on your machine once every 24 hours.
## Enable Periodic assessment for your Azure machines using Policy
You can monitor the compliance of resources under **Compliance** and remediation
## Enable Periodic assessment for your Arc machines using Policy 1. Go to **Policy** from the Azure portal and under **Authoring**, **Definitions**.
-1. From the **Category** dropdown, select **Update management center**. Select *[Preview]: Configure periodic checking for missing system updates on Azure Arc-enabled servers* for Arc-enabled machines.
+1. From the **Category** dropdown, select **Update Manager**. Select *[Preview]: Configure periodic checking for missing system updates on Azure Arc-enabled servers* for Arc-enabled machines.
1. When the Policy Definition opens, select **Assign**. 1. In **Basics**, select your subscription as your scope. You can also specify a resource group within subscription as the scope and select **Next**. 1. In **Parameters**, uncheck **Only show parameters that need input or review** so that you can see the values of parameters. In **Assessment** mode, select *AutomaticByPlatform*, select *Operating system* and select **Next**. You need to create separate policies for Windows and Linux.
You can monitor compliance of resources under **Compliance** and remediation sta
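If you script the assignment instead of using the portal, the following sketch outlines the shape with the Az.Resources module; the display-name filter is an assumption about the built-in definition's name, and DeployIfNotExists policies need a managed identity and a location, so verify both against the definition you pick.

```powershell
# Sketch: assign a periodic-assessment built-in policy at subscription scope.
# The display-name match is an assumption; confirm the exact definition first.
$definition = Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -like "*periodic checking for missing system updates*" } |
    Select-Object -First 1

New-AzPolicyAssignment -Name "enable-periodic-assessment" `
    -Scope "/subscriptions/<subscriptionId>" `
    -PolicyDefinition $definition `
    -IdentityType "SystemAssigned" `
    -Location "eastus"   # identity location, required for remediation
```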
## Monitor if Periodic Assessment is enabled for your machines (both Azure and Arc-enabled machines) 1. Go to **Policy** from the Azure portal and under **Authoring**, go to **Definitions**.
-1. From the Category dropdown above, select **Update management center**. Select *[Preview]: Machines should be configured to periodically check for missing system updates*.
+1. From the Category dropdown above, select **Update Manager**. Select *[Preview]: Machines should be configured to periodically check for missing system updates*.
1. When the Policy Definition opens, select **Assign**. 1. In **Basics**, select your subscription as your scope. You can also specify a resource group within subscription as the scope. Select **Next.** 1. In **Parameters** and **Remediation**, select **Next.**
You can monitor compliance of resources under **Compliance** and remediation sta
## Next steps * [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot Update Manager (preview)](troubleshoot.md).
update-center Prerequsite For Schedule Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/prerequsite-for-schedule-patching.md
Title: Configure schedule patching on Azure VMs to ensure business continuity in update management center (preview).
-description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Update management center (preview).
-
+ Title: Configure schedule patching on Azure VMs to ensure business continuity in Azure Update Manager (preview).
+description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager (preview).
+ Last updated 05/09/2023
Additionally, in some instances, when you remove the schedule from a VM, there i
To identify the list of VMs with associated schedules for which you have to enable the new VM property, follow these steps:
-1. Go to **Update management center (Preview)** home page and select **Machines** tab.
+1. Go to **Update Manager (preview)** home page and select **Machines** tab.
1. In **Patch orchestration** filter, select **Azure Managed - Safe Deployment**. 1. Use the **Select all** option to select the machines and then select **Export to CSV**. 1. Open the CSV file and in the column **Associated schedules**, select the rows that have an entry.
You can update the patch orchestration option for existing VMs that either alrea
To update the patch mode, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to **Update management center (Preview)**, select **Update Settings**.
+1. Go to **Update Manager (preview)**, select **Update Settings**.
1. In **Change update settings**, select **+Add machine**. 1. In **Select resources**, select your VMs and then select **Add**. 1. In **Change update settings**, under **Patch orchestration**, select *Customer Managed Schedules* and then select **Save**.
To update the patch mode, follow these steps:
To update the patch mode, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to **Update management center (Preview)**, select **Update Settings**.
+1. Go to **Update Manager (preview)**, select **Update Settings**.
1. In **Change update settings**, select **+Add machine**. 1. In **Select resources**, select your VMs and then select **Add**. 1. In **Change update settings**, under **Patch orchestration**, select ***Azure Managed - Safe Deployment*** and then select **Save**.
Scenario 8 | No | False | No | Neither the autopatch nor the schedule patch will
## Next steps
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To troubleshoot issues, see [Troubleshoot Update Manager (preview)](troubleshoot.md).
update-center Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/query-logs.md
Title: Query logs and results from Update management center (preview)
-description: The article provides details on how you can review logs and search results from update management center (preview) in Azure using Azure Resource Graph
+ Title: Query logs and results from Update Manager (preview)
+description: The article provides details on how you can review logs and search results from Update Manager (preview) in Azure using Azure Resource Graph
Last updated 04/21/2022
-# Overview of query logs in update management center (Preview)
+# Overview of query logs in Azure Update Manager (preview)
-Logs created from operations like update assessments and installations are stored by Update management center (preview) in an [Azure Resource Graph](../governance/resource-graph/overview.md). The Azure Resource Graph is a service in Azure designed to be the store for Azure service details without any cost or deployment requirements. Update management center (preview) uses the Azure Resource Graph to store its results, and you can view the update history of the last 30 days from the resources.
+Logs created from operations like update assessments and installations are stored by Update Manager (preview) in [Azure Resource Graph](../governance/resource-graph/overview.md). Azure Resource Graph is a service in Azure designed to be the store for Azure service details at no cost and with no deployment requirements. Update Manager (preview) uses Azure Resource Graph to store its results, and you can view the update history of the last 30 days from the resources.
Azure Resource Graph's query language is based on the [Kusto query language](../governance/resource-graph/concepts/query-language.md) used by Azure Data Explorer.
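For a quick look before the schema details, the sketch below runs one such query from PowerShell with the Az.ResourceGraph module; the table and property names follow the preview schema described in this article and in [Sample query logs](sample-query-logs.md).

```powershell
# Sketch: retrieve recent assessment records stored by Update Manager (preview).
$query = @"
patchassessmentresources
| where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults'
| project id, lastModified = properties.lastModifiedDateTime, availablePatches = properties.availablePatchCountByClassification
"@
Search-AzGraph -Query $query -First 10
```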
-The article describes the structure of the logs from Update management center (Preview) and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs.
+The article describes the structure of the logs from Update Manager (preview) and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs.
## Log structure
-Update management center (preview) sends the results of all its operation into Azure Resource Graph as logs, which are available for 30 days. Listed below are the structure of logs being sent to Azure Resource Graph.
+Update Manager (preview) sends the results of all its operations into Azure Resource Graph as logs, which are available for 30 days. The following sections describe the structure of the logs sent to Azure Resource Graph.
### Patch assessment results
If the `PROPERTIES` property for the resource type is `patchassessmentresults/so
|`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of when an update was provided by the OS vendor, then the value is null.| |`classifications` |Category to which the specific update belongs, as reported by the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the category, then the value is `Others` (for Linux) or `Updates` (for Windows Server). | |`rebootRequired` |Value indicating whether the specific update requires the OS to reboot to complete the installation. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't indicate that a reboot is required, then the value is `false`.|
-|`rebootBehavior` |Behavior set in the OS update installation runs job when configuring the update deployment if update management center (preview) can reboot the target machine. |
+|`rebootBehavior` |Behavior set in the OS update installation run job when configuring the update deployment, indicating whether Update Manager (preview) can reboot the target machine. |
|`patchName` |Name or label for the specific update generated by the machine's OS package manager or update service.| |`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service.| |`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by Linux package manager. For example, `1.0.1.el7.3`.|
If the `PROPERTIES` property for the resource type is `patchinstallationresults/
|`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of when an update was provided by the OS vendor, then the value is null. | |`classifications` |Category to which the specific update belongs, as reported by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the category, the value of the field is `Others` (for Linux) or `Updates` (for Windows Server). | |`rebootRequired` |Flag that specifies whether the specific update requires the OS to reboot to complete installation, as provided by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide information regarding the need for an OS reboot, the value of the field is set to `false`. |
-|`rebootBehavior` |Behavior set in the OS update installation runs job by user, regarding allowing update management center (preview) to reboot the OS. |
+|`rebootBehavior` |Behavior set by the user in the OS update installation run job, regarding whether Update Manager (preview) is allowed to reboot the OS. |
|`patchName` |Name or Label for the specific update as provided by the machine's OS package manager or update service. | |`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service. | |`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by Linux package manager. For example, `1.0.1.el7.3`. |
If the `PROPERTIES` property for the resource type is `configurationassignments`
## Next steps - For details of sample queries, see [Sample query logs](sample-query-logs.md).-- To troubleshoot issues, see [Troubleshoot](troubleshoot.md) update management center (preview).
+- To troubleshoot issues, see [Troubleshoot Update Manager (preview)](troubleshoot.md).
update-center Quickstart On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/quickstart-on-demand.md
Title: Quickstart - deploy updates in using update management center in the Azure portal
-description: This quickstart helps you to deploy updates immediately and view results for supported machines in update management center (preview) using the Azure portal.
+ Title: Quickstart - deploy updates using Azure Update Manager in the Azure portal
+description: This quickstart helps you to deploy updates immediately and view results for supported machines in Azure Update Manager (preview) using the Azure portal.
Last updated 04/21/2022
# Quickstart: Check and install on-demand updates
-Using the Update management center (preview) you can update automatically at scale with the help of built-in policies and schedule updates on a recurring basis or you can also take control by checking and installing updates manually.
+Using Update Manager (preview), you can update machines automatically at scale with the help of built-in policies, schedule updates on a recurring basis, or take control by checking and installing updates manually.
This quickstart explains how to perform a manual assessment and apply updates to selected Azure virtual machines or an Arc-enabled server on-premises or in cloud environments.
This quickstart details you how to perform manual assessment and apply updates o
## Check updates
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Getting started**, **On-demand assessment and updates**, and select **Check for updates**.
For the assessed machines that are reporting updates, you can configure [hotpatc
To configure the settings on your machines, follow these steps:
-1. In **Update management center (Preview)|Getting started**, in **On-demand assessment and updates**, selectΓÇ»**Update settings**.
+1. In **Update Manager (preview) | Getting started**, under **On-demand assessment and updates**, select **Update settings**.
In the **Change update settings** page, by default **Properties** is selected. 1. Select from the list of update settings to apply them to the selected machines.
To configure the settings on your machines, follow these steps:
Based on the last assessment performed on the selected machines, you can now select resources and machines to install the updates.
-1. In the **Update management center(Preview)|Getting started** page, in **On-demand assessment and updates**, selectΓÇ»**Install updates by machines**.
+1. In the **Update Manager (preview) | Getting started** page, under **On-demand assessment and updates**, select **Install updates by machines**.
1. In the **Install one-time updates** page, select one or more machines from the list in the **Machines** tab and click **Next**.
As per the last assessment performed on the selected machines, you can now selec
1. In **Review + install**, verify the update deployment options and select **Install**.
-A notification confirms that the installation of updates is in progress and after completion, you can view the results in the **Update management center**, **History** page.
+A notification confirms that the installation of updates is in progress, and after completion, you can view the results on the **Update Manager** > **History** page.
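The same check-and-install flow can be run from Azure PowerShell for an Azure VM; the following is a minimal sketch with the Az.Compute patch cmdlets (Arc-enabled servers take a different path), using placeholder names and values you should adapt.

```powershell
# Sketch: on-demand assessment followed by a one-time update installation.
# Check for available updates.
Invoke-AzVMPatchAssessment -ResourceGroupName "myResourceGroup" -VMName "myVM"

# Install critical and security updates, allowing up to two hours and a reboot if required.
Invoke-AzVMInstallPatch -ResourceGroupName "myResourceGroup" -VMName "myVM" `
    -Windows -ClassificationToIncludeForWindows @("Critical", "Security") `
    -RebootSetting "IfRequired" -MaximumDuration "PT2H"
```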
## Next steps
update-center Sample Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/sample-query-logs.md
Title: Sample query logs and results from Update management center (preview)
-description: The article provides details of sample query logs from update management center (preview) in Azure using Azure Resource Graph
-
+ Title: Sample query logs and results from Azure Update Manager (preview)
+description: The article provides details of sample query logs from Azure Update Manager (preview) in Azure using Azure Resource Graph
+ Last updated 04/21/2022
maintenanceresources
``` ## Next steps-- Review logs and search results from update management center (preview) in Azure using [Azure Resource Graph](query-logs.md).-- Troubleshoot issues in update management center (preview), see the [Troubleshoot](troubleshoot.md).
+- Review logs and search results from Update Manager (preview) in Azure using [Azure Resource Graph](query-logs.md).
+- To troubleshoot issues in Update Manager (preview), see [Troubleshoot](troubleshoot.md).
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
Title: Scheduling recurring updates in Update management center (preview)
-description: The article details how to use update management center (preview) in Azure to set update schedules that install recurring updates on your machines.
-
+ Title: Scheduling recurring updates in Azure Update Manager (preview)
+description: The article details how to use Azure Update Manager (preview) in Azure to set update schedules that install recurring updates on your machines.
+ Last updated 05/30/2023
> - For a seamless scheduled patching experience, we recommend that for all Azure VMs, you update the patch orchestration to **Customer Managed Schedules (Preview)** by **30th June 2023**. If you fail to update the patch orchestration by **30th June 2023**, you can experience a disruption in business continuity because the schedules will fail to patch the VMs. [Learn more](prerequsite-for-schedule-patching.md).
-You can use update management center (preview) in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. This schedule will then automatically install the updates as per the created schedule for single VM and at scale.
+You can use Update Manager (preview) in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly, or hourly cadence, and specify the machines that must be updated as part of the schedule and the updates to be installed. The schedule then automatically installs the updates at the defined cadence, for a single VM and at scale.
-Update management center (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control).
+Update Manager (preview) uses the maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control).
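If you automate schedule creation, the Az.Maintenance module can create the same kind of configuration; the sketch below is a minimal in-guest patching schedule, with parameter names and values to verify against your module version.

```powershell
# Sketch: create an InGuestPatch maintenance configuration that recurs daily.
New-AzMaintenanceConfiguration `
    -ResourceGroupName "myResourceGroup" `
    -Name "myPatchSchedule" `
    -Location "eastus" `
    -MaintenanceScope "InGuestPatch" `
    -StartDateTime "2023-09-01 00:00" `
    -TimeZone "UTC" `
    -Duration "03:55" `
    -RecurEvery "Day" `
    -InstallPatchRebootSetting "IfRequired" `
    -WindowParameterClassificationToInclude @("Critical", "Security") `
    -ExtensionProperty @{ "InGuestPatchMode" = "User" }
```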
## Prerequisites for scheduled patching
-1. See [Prerequisites for Update management center (preview)](./overview.md#prerequisites)
+1. See [Prerequisites for Update Manager (preview)](./overview.md#prerequisites)
1. Patch orchestration of the Azure machines should be set to **Customer Managed Schedules (Preview)**. For more information, see [how to enable schedule patching on existing VMs](prerequsite-for-schedule-patching.md#enable-schedule-patching-on-azure-vms). For Azure Arc-enabled machines, it isn't a requirement. > [!Note]
Update management center (preview) uses maintenance control schedule instead of
1. All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. 1. VMs in a common availability set are updated within Update Domain boundaries and, VMs across multiple Update Domains aren't updated concurrently.
+## Configure reboot settings
+
+The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment.
+ ## Service limits The following are the recommended limits for the mentioned indicators:
The following are the recommended limits for the mentioned indicators:
## Schedule recurring updates on single VM >[!NOTE]
-> You can schedule updates from the Overview or Machines blade in update management center (preview) page or from the selected VM.
+> You can schedule updates from the **Overview** or **Machines** blade on the Update Manager (preview) page, or from the selected VM.
# [From Overview blade](#tab/schedule-updates-single-overview)
To schedule recurring updates on a single VM, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (preview)**, **Overview**, select your **Subscription**, and select **Schedule updates**.
+1. In **Update Manager (preview)**, **Overview**, select your **Subscription**, and select **Schedule updates**.
1. In **Create new maintenance configuration**, you can create a schedule for a single VM.
To schedule recurring updates on a single VM, follow these steps:
1. In the **Updates** page, specify the updates to include in the deployment such as update classification(s) or KB ID/ packages that must be installed when you trigger your schedule. > [!Note]
- > Update management center (preview) doesn't support driver updates.
+ > Update Manager (preview) doesn't support driver updates.
1. In the **Tags** page, assign tags to maintenance configurations.
To schedule recurring updates on a single VM, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Machines**, select your **Subscription**, select your machine and select **Schedule updates**.
+1. In **Update Manager (preview)**, **Machines**, select your **Subscription**, select your machine and select **Schedule updates**.
1. In **Create new maintenance configuration**, you can create a schedule for a single VM, assign machine and tags. Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule.
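If you'd rather script the portal steps above, the Az.Maintenance PowerShell module can create an equivalent in-guest patching schedule. A minimal sketch with illustrative names and schedule values; the exact parameter set may vary by module version:

```powershell
# Requires the Az.Maintenance module: Install-Module Az.Maintenance
# Names, times, and cadence below are illustrative assumptions.
New-AzMaintenanceConfiguration `
    -ResourceGroupName "myResourceGroup" `
    -Name "myMaintenanceConfig" `
    -Location "eastus" `
    -MaintenanceScope "InGuestPatch" `
    -StartDateTime "2023-09-01 02:00" `
    -TimeZone "UTC" `
    -Duration "03:55" `
    -RecurEvery "Day" `
    -ExtensionProperty @{ "InGuestPatchMode" = "User" }
```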
To schedule recurring updates at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Overview**, select your **Subscription** and select **Schedule updates**.
+1. In **Update Manager (preview)**, **Overview**, select your **Subscription** and select **Schedule updates**.
1. In the **Create new maintenance configuration** page, you can create a schedule for multiple machines.
To schedule recurring updates at scale, follow these steps:
1. In the **Updates** page, specify the updates to include in the deployment, such as update classification(s) or KB ID/packages that must be installed when you trigger your schedule.
 > [!Note]
- > Update management center (preview) doesn't support driver updates.
+ > Update Manager (preview) doesn't support driver updates.
1. In the **Tags** page, assign tags to maintenance configurations.
To schedule recurring updates at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (Preview)**, **Machines**, select your **Subscription**, select your machines and select **Schedule updates**.
+1. In **Update Manager (preview)**, **Machines**, select your **Subscription**, select your machines and select **Schedule updates**.
In **Create new maintenance configuration**, you can create a schedule for a single VM. Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule.
A notification appears that the deployment is created.
## Attach a maintenance configuration

A maintenance configuration can be attached to multiple machines. It can be attached to machines at the time of creating a new maintenance configuration or even after you've created one.
- 1. In **Update management center**, select **Machines** and select your **Subscription**.
+ 1. In **Update Manager**, select **Machines** and select your **Subscription**.
1. Select your machine, and in **Updates (Preview)**, select **Scheduled updates** to create a maintenance configuration or attach an existing maintenance configuration to the scheduled recurring updates.
1. In **Scheduling**, select **Attach maintenance configuration**.
1. Select the maintenance configuration that you want to attach and select **Attach**.
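As a scripted alternative to the steps above, a maintenance configuration can also be attached to a VM with the Az.Maintenance module; a sketch with illustrative resource names and configuration ID:

```powershell
# Attach an existing maintenance configuration to a VM.
# Resource names and the configuration ID below are illustrative.
New-AzConfigurationAssignment `
    -ResourceGroupName "myResourceGroup" `
    -Location "eastus" `
    -ResourceName "myVM" `
    -ResourceType "VirtualMachines" `
    -ProviderName "Microsoft.Compute" `
    -ConfigurationAssignmentName "myConfigAssignment" `
    -MaintenanceConfigurationId "/subscriptions/<subscriptionId>/resourceGroups/myResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/myMaintenanceConfig"
```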
You can create a new Guest OS update maintenance configuration or modify an exis
## Onboarding to Schedule using Policy
-The update management center (preview) allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. The grouping using policy, keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags or regions to define the scope and use this feature for the built-in policies which you can customize as per your use-case.
+Update Manager (preview) allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. Grouping via policy keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags, or regions to define the scope, and use this feature for the built-in policies, which you can customize per your use case.
> [!NOTE]
> This policy also ensures that the patch orchestration property for Azure machines is set to **Customer Managed Schedules (Preview)**, as it's a prerequisite for scheduled patching.
Policy allows you to assign standards and assess compliance at scale. [Learn mor
1. Under **Basics**, in the **Assign policy** page:
   - In **Scope**, choose your subscription and resource group, and choose **Select**.
   - Select **Policy definition** to view a list of policies.
- - In **Available Definitions**, select **Built in** for Type and in search, enter - *[Preview] Schedule recurring updates using Update Management Center* and click **Select**.
+ - In **Available Definitions**, select **Built in** for **Type**, and in search, enter *[Preview] Schedule recurring updates using Update Manager*. Then click **Select**.
:::image type="content" source="./media/scheduled-updates/dynamic-scoping-defintion.png" alt-text="Screenshot that shows on how to select the definition.":::
To view the current compliance state of your existing resources:
:::image type="content" source="./media/scheduled-updates/dynamic-scoping-policy-compliance.png" alt-text="Screenshot that shows on policy compliance.":::

## Check your scheduled patching run
-You can check the deployment status and history of your maintenance configuration runs from the Update management center portal. Follow [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id).
+You can check the deployment status and history of your maintenance configuration runs from the Update Manager portal. Follow [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id).
## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview).
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
Title: Update management center (preview) support matrix
+ Title: Azure Update Manager (preview) support matrix
description: Provides a summary of supported regions and operating system settings.-+ Last updated 07/11/2023
-# Support matrix for update management center (preview)
+# Support matrix for Azure Update Manager (preview)
-This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by update management center (preview) including the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure VMs or machines managed by Arc-enabled servers.
+This article details the supported Windows and Linux operating systems and system requirements for machines or servers managed by Update Manager (preview), including the supported regions and the specific versions of the Windows Server and Linux operating systems running on Azure VMs or machines managed by Arc-enabled servers.
## Update sources supported
-**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, based on the WSUS's last synchronization with Microsoft update, the results in the update management center (preview) might differ to what the Microsoft update shows. You can specify sources for scanning and downloading updates using [specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). To restrict machines to the internal update service, see [Do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations)
+**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, the results shown in Update Manager (preview) might differ from what Microsoft Update shows, depending on when WSUS last synchronized with Microsoft Update. You can specify sources for scanning and downloading updates using [specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). To restrict machines to the internal update service, see [Do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations).
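To see which update source a machine is currently configured to report to, you can read the policy registry values described in the linked articles; a read-only PowerShell sketch:

```powershell
# Windows Update policy keys described in the linked articles.
$wuKey = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

# WUServer/WUStatusServer hold the intranet (WSUS) service locations, if configured.
Get-ItemProperty -Path $wuKey -Name WUServer, WUStatusServer -ErrorAction SilentlyContinue

# UseWUServer = 1 means WUA actually reports to the WSUS server above.
Get-ItemProperty -Path "$wuKey\AU" -Name UseWUServer -ErrorAction SilentlyContinue
```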
-**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in update management center (preview) depend on where the machines are configured to report.
+**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in Update Manager (preview) depend on where the machines are configured to report.
## Types of updates supported

### Operating system updates
-Update management center (preview) supports operating system updates for both Windows and Linux.
+Update Manager (preview) supports operating system updates for both Windows and Linux.
> [!NOTE]
-> Update management center (preview) doesn't support driver Updates.
+> Update Manager (preview) doesn't support driver updates.
### First party updates on Windows

By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products, including security patches for Microsoft SQL Server and other Microsoft software. Use one of the following options to perform the settings change at scale:
-- For Servers configured to patch on a schedule from Update management center (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than server 2016, Run the following PowerShell script on the server you want to change.
+- For servers configured to patch on a schedule from Update Manager (that have the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running an operating system earlier than Server 2016, run the following PowerShell script on the server you want to change.
```powershell
# Register the Microsoft Update service with the Windows Update Agent.
$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
Use one of the following options to perform the settings change at scale:
# 7971f918-a847-4430-9279-4a52d1efe18d identifies the Microsoft Update service.
$ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
$ServiceManager.AddService2($ServiceId,7,"")
```
-- For servers running Server 2016 or later which are not using Update management center scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
+- For servers running Server 2016 or later that aren't using Update Manager scheduled patching (that have the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated), you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).
> [!NOTE] > Run the following PowerShell script on the server to disable first party updates.
Use one of the following options to perform the settings change at scale:
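The disable script referenced in the note above isn't shown in this excerpt. As a sketch only, the same WUA COM API used in the registration script above also exposes a removal method, which would unregister the Microsoft Update service so that only Windows updates are offered:

```powershell
# Sketch: unregister the Microsoft Update service (reverse of the script above).
$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
$ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
$ServiceManager.RemoveService($ServiceId)
```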
### Third-party updates
-**Windows**: Update Management relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows update management to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).
+**Windows**: Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).
**Linux**: If you include a specific third-party software repository in the Linux package manager repository location, it's scanned when the machine performs software update operations. If you remove the repository, its packages are no longer available for assessment and installation.
+> [!NOTE]
+> Update Manager does not support managing the Microsoft Configuration Manager client.
+ ## Supported regions
-Update management center (preview) will scale to all regions for both Azure VMs and Azure Arc-enabled servers. Listed below are the Azure public cloud where you can use update management center (preview).
+Update Manager (preview) scales to all regions for both Azure VMs and Azure Arc-enabled servers. The following are the Azure public cloud regions where you can use Update Manager (preview).
# [Azure virtual machine](#tab/azurevm)
-Update management center (preview) is available in all Azure public regions where compute virtual machines are available.
+Update Manager (preview) is available in all Azure public regions where compute virtual machines are available.
# [Azure Arc-enabled servers](#tab/azurearc)
-Update management center (preview) is supported in the following regions currently. It implies that VMs must be in below regions:
+Update Manager (preview) is currently supported in the following regions, which means that VMs must be located in one of these regions:
**Geography** | **Supported Regions** |
United States | Central US </br> East US </br> East US 2</br> North Central US <
> [!NOTE] > - All operating systems are assumed to be x64. x86 isn't supported for any operating system.
-> - Update management center (preview) doesn't support CIS hardened images.
+> - Update Manager (preview) doesn't support CIS hardened images.
# [Azure VMs](#tab/azurevm-os)

> [!NOTE]
-> Currently, update management center has the following limitations regarding the operating system support:
+> Currently, Update Manager has the following limitations for operating system support:
> - Marketplace images other than the [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images) are currently not supported.
-> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in update management center (preview).
+> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager (preview).
>
-> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) till the support is available in Update management center (preview).
+> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) until support is available in Update Manager (preview).
**Marketplace/PIR images**
The following table lists the operating systems that aren't supported:
| Azure Kubernetes Nodes| We recommend the patching described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/azure/aks/node-updates-kured).|
-As the Update management center (preview) depends on your machine's OS package manager or update service, ensure that the Linux package manager, or Windows Update client are enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md).
+Because Update Manager (preview) depends on your machine's OS package manager or update service, ensure that the Linux package manager or the Windows Update client is enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md).
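As a quick way to confirm that the Windows Update client is enabled and can reach its configured source, you can run a search through the WUA COM API; a minimal sketch:

```powershell
# Ask the Windows Update Agent to search its configured source for
# missing updates; an error here usually points to a connectivity or
# configuration problem with the update source.
$session  = New-Object -ComObject "Microsoft.Update.Session"
$searcher = $session.CreateUpdateSearcher()
$result   = $searcher.Search("IsInstalled=0")
"Updates found: $($result.Updates.Count)"
```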
## Next steps
update-center Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md
Title: Troubleshoot known issues with update management center (preview)
-description: The article provides details on the known issues and troubleshooting any problems with update management center (preview).
-
+ Title: Troubleshoot known issues with Azure Update Manager (preview)
+description: The article provides details on the known issues and troubleshooting any problems with Azure Update Manager (preview).
+ Last updated 05/30/2023
-# Troubleshoot issues with update management center (preview)
+# Troubleshoot issues with Azure Update Manager (preview)
-This article describes the errors that might occur when you deploy or use update management center (preview), how to resolve them and the known issues and limitations of scheduled patching.
+This article describes the errors that might occur when you deploy or use Update Manager (preview), how to resolve them, and the known issues and limitations of scheduled patching.
## General troubleshooting
If you don't want any patch installation to be orchestrated by Azure or aren't u
### Cause
-The Update Agent (Windows Update Agent on Windows; the package manager for a Linux distribution) isn't configured correctly. Update Management relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Management can't properly report on the patches that are needed or installed.
+The Update Agent (Windows Update Agent on Windows; the package manager for a Linux distribution) isn't configured correctly. Update Manager relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Manager can't properly report on the patches that are needed or installed.
### Resolution
To review the logs related to all actions performed by the extension, on Windows
- For concurrent/conflicting schedules, only one schedule is triggered. The other schedule is triggered once a schedule finishes.
- If a machine is newly created, the schedule might have a 15-minute trigger delay in the case of Azure VMs.
-- Policy definition *[Preview]: Schedule recurring updates using Update Management Center* with version 1.0.0-preview successfully remediates resources however, it will always show them as non-compliant. The current value of the existence condition is a placeholder that will always evaluate to false.
+- Policy definition *[Preview]: Schedule recurring updates using Update Manager* with version 1.0.0-preview successfully remediates resources; however, it always shows them as non-compliant. The current value of the existence condition is a placeholder that always evaluates to false.
### Scenario: Unable to apply patches for the shutdown machines
Setting a longer time range for maximum duration when triggering an [on-demand u
## Next steps
-* To learn more about Azure Update management center (preview), see the [Overview](overview.md).
-* To view logged results from all your machines, see [Querying logs and results from update management center (preview)](query-logs.md).
+* To learn more about Azure Update Manager (preview), see the [Overview](overview.md).
+* To view logged results from all your machines, see [Querying logs and results from Update Manager (preview)](query-logs.md).
update-center Tutorial Dynamic Grouping For Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/tutorial-dynamic-grouping-for-scheduled-patching.md
Title: Schedule updates on Dynamic scoping (preview). description: In this tutorial, you learn how to group machines, dynamically apply the updates at scale.-+ Last updated 07/05/2023
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## Prerequisites
--- Patch Orchestration must be set to Customer Managed Schedules (Preview). This sets patch mode to AutomaticByPlatform and the **BypassPlatformSafetyChecksOnUserSchedule** = *True*.
-- Associate a Schedule with the VM.
## Create a Dynamic scope

To create a dynamic scope, follow the steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview).
1. Select **Overview** > **Schedule updates** > **Create a maintenance configuration**.
1. In the **Create a maintenance configuration** page, enter the details in the **Basics** tab and select **Maintenance scope** as *Guest* (Azure VM, Arc-enabled VMs/servers).
1. Select **Dynamic Scopes** and follow the steps to [Add Dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope-preview).
update-center Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md
Title: Updates and maintenance in update management center (preview).
-description: The article describes the updates and maintenance options available in Update management center (preview).
-
+ Title: Updates and maintenance in Azure Update Manager (preview).
+description: The article describes the updates and maintenance options available in Azure Update Manager (preview).
+ Last updated 05/23/2023
-# Update options in update management center (preview)
+# Update options in Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
> - For Arc-enabled servers, the updates and maintenance options such as Automatic VM Guest patching in Azure, Windows automatic updates and Hotpatching aren't supported.
-This article provides an overview of the various update and maintenance options available by update management center (preview).
+This article provides an overview of the various update and maintenance options available in Update Manager (preview).
-Update management center (preview) provides you the flexibility to take an immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext) and so on.
+Update Manager (preview) gives you the flexibility to take immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext), and so on.
## Update Now/One-time update
-Update management center (preview) allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-single-vm).
+Update Manager (preview) allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-single-vm).
+ ## Scheduled patching

You can create a schedule on a daily, weekly, or hourly cadence per your requirement, specify the machines that must be updated as part of the schedule, and the updates that you must install. The schedule then automatically installs the updates per the specifications.
-Update management center (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control).
+Update Manager (preview) uses the maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control).
Start using [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules.

> [!NOTE]
Start using [scheduled patching](scheduled-patching.md) to create and save recur
This mode of patching lets the Azure platform automatically download and install all the security and critical updates on your machines every month and apply them on your machines following the availability-first principles. For more information, see [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md).
-In **Update management center** home page, go to **Update Settings** blade, select Patch orchestration as **Azure Managed - Safe Deployment** value to enable this VM property.
+On the **Update Manager** home page, go to the **Update Settings** blade and set **Patch orchestration** to **Azure Managed - Safe Deployment** to enable this VM property.
## Windows automatic updates
This mode of patching allows operating system to automatically install updates a
Hotpatching allows you to install updates on supported Windows Server Azure Edition virtual machines without requiring a reboot after installation. It reduces the number of reboots required on your mission-critical application workloads running on Windows Server. For more information, see [Hotpatch for new virtual machines](../automanage/automanage-hotpatch.md).
-Hotpatching property is available as a setting in Update management center (preview) which you can enable by using Update settings flow. Refer to detailed instructions [here](manage-update-settings.md#configure-settings-on-single-vm)
+The hotpatching property is available as a setting in Update Manager (preview), which you can enable by using the update settings flow. For detailed instructions, see [Configure settings on a single VM](manage-update-settings.md#configure-settings-on-single-vm).
:::image type="content" source="media/updates-maintenance/hot-patch-inline.png" alt-text="Screenshot that shows the hotpatch option." lightbox="media/updates-maintenance/hot-patch-expanded.png":::

## Next steps
-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview).
update-center View Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/view-updates.md
Title: Check update compliance in Update management center (preview)
-description: The article details how to use Azure Update management center (preview) in the Azure portal to assess update compliance for supported machines.
-
+ Title: Check update compliance in Azure Update Manager (preview)
+description: The article details how to use Azure Update Manager (preview) in the Azure portal to assess update compliance for supported machines.
+ Last updated 05/31/2023
-# Check update compliance with update management center (preview)
+# Check update compliance with Azure Update Manager (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-This article details how to check the status of available updates on a single VM or multiple VMs using update management center (preview).
+This article details how to check the status of available updates on a single VM or multiple VMs using Update Manager (preview).
## Check updates on single VM

>[!NOTE]
-> You can check the updates from the Overview or Machines blade in update management center (preview) page or from the selected VM.
+> You can check for updates from the **Overview** or **Machines** blade on the Update Manager (preview) page, or from the selected VM.
# [From Overview blade](#tab/singlevm-overview)

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In Update management center (Preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**.
+1. In Update Manager (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**.
1. In **Select resources and check for updates**, choose the machine for which you want to check the updates and select **Check for updates**.
This article details how to check the status of available updates on a single VM
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In Update management center (preview), **Machines**, select your **Subscription** to view all your machines.
+1. In Update Manager (preview), **Machines**, select your **Subscription** to view all your machines.
1. Select the checkbox for your machine and select **Check for updates** > **Assess now**. Alternatively, select your machine, and in **Updates Preview**, select **Assess updates**; then in **Trigger assess now**, select **OK**.
This article details how to check the status of available updates on a single VM
1. Select your virtual machine to open the **virtual machines | Updates** page.
1. Under **Operations**, select **Updates**.
-1. In **Updates**, select **Go to Updates using Update Management Center**.
+1. In **Updates**, select **Go to Updates using Update Manager**.
:::image type="content" source="./media/view-updates/resources-check-updates.png" alt-text="Screenshot showing selection of updates from Home page.":::
To check the updates on your machines at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In Update management center (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**.
+1. In Update Manager (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**.
1. In **Select resources and check for updates**, choose your machines for which you want to check the updates and select **Check for updates**.
To check the updates on your machines at scale, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In Update management center (preview), **Machines**, select your **Subscription** to view all your machines.
+1. In Update Manager (preview), **Machines**, select your **Subscription** to view all your machines.
1. Select **Select all** to choose all your machines and select **Check for updates**.
1. Select **Assess now** to perform the assessment.
- A notification appears when the operation is initiated and completed. After a successful scan, the **Update management center (Preview) | Machines** page is refreshed to display the updates.
+ A notification appears when the operation is initiated and completed. After a successful scan, the **Update Manager (preview) | Machines** page is refreshed to display the updates.
> [!NOTE]
-> In update management center (preview), you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates including the security and critical updates. On Windows, the software update scan is performed by the Windows Update Agent. On Linux, the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL Definitions for that platform, which is retrieved from a local or remote repository.
+> In Update Manager (preview), you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates, including the security and critical updates. On Windows, the software update scan is performed by the Windows Update Agent. On Linux, the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL definitions for that platform, which are retrieved from a local or remote repository.
## Next steps

* Learn about deploying updates on your machines to maintain security compliance by reading [deploy updates](deploy-updates.md).
-* To view the update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Azure Update management center (preview).
+* To view the update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md).
+* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Azure Update Manager (preview).
update-center Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-new.md
Title: What's new in Update management center (Preview)
-description: Learn about what's new and recent updates in the Update management center (Preview) service.
-
+ Title: What's new in Azure Update Manager (preview)
+description: Learn about what's new and recent updates in the Azure Update Manager (preview) service.
+ Last updated 07/05/2023
-# What's new in Update management center (Preview)
+# What's new in Azure Update Manager (preview)
-[Update management center (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Update management center (Preview).
+[Azure Update Manager (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. This article summarizes new releases and features in Update Manager (preview).
## July 2023
Dynamic scope (preview) is an advanced capability of schedule patching. You can
### Customized image support
-Update management center (preview) now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images.See the [list of supported operating systems](support-matrix.md#supported-operating-systems).
+Update Manager (preview) now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images. See the [list of supported operating systems](support-matrix.md#supported-operating-systems).
### Multi-subscription support
-The limit on the number of subscriptions that you can manage to use the Update management center (preview) portal has now been removed. You can now manage all your subscriptions using the update management center (preview) portal.
+The limit on the number of subscriptions that you can manage through the Update Manager (preview) portal has now been removed. You can now manage all your subscriptions using the Update Manager (preview) portal.
## April 2023
A new patch orchestration - **Customer Managed Schedules (Preview)** is introduc
### New region support
-Update management center (Preview) now supports new five regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions).
+Update Manager (preview) now supports five new regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions).
## October 2022
update-center Whats Upcoming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-upcoming.md
Title: What's upcoming in Update management center (Preview)
-description: Learn about what's upcoming and updates in the Update management center (Preview) service.
-
+ Title: What's upcoming in Azure Update Manager (preview)
+description: Learn about what's upcoming and updates in the Azure Update Manager (preview) service.
+ Last updated 06/01/2023
-# What's upcoming in Update management center (Preview)
+# What's upcoming in Azure Update Manager (preview)
-The primary [what's New in Update management center (preview)](whats-new.md) contains updates of feature releases and this article lists all the upcoming features.
+The primary [What's new in Azure Update Manager (preview)](whats-new.md) article contains updates of feature releases, and this article lists all the upcoming features.
## Expanded support for Operating system and VM images

Expanded support for [specialized images](../virtual-machines/linux/imaging.md#specialized-images), VMs created by Azure Migrate, Azure Backup, Azure Site Recovery, and marketplace images is upcoming in Q3, CY 2023. Until then, we recommend that you continue using [Automation update management](../automation/update-management/overview.md) for these images. [Learn more](support-matrix.md#supported-operating-systems).
-## Update management center will be GA soon
+## Update Manager will be GA soon
-Update management center will be declared GA soon.
+Update Manager will be declared GA soon.
+
+## Prescript and postscript
+
+Prescripts and postscripts will be available soon.
## Next steps
update-center Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/workbooks.md
Title: An overview of Workbooks description: This article provides information on how workbooks provide a flexible canvas for data analysis and the creation of rich visual reports.-+ Last updated 01/16/2023
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-Workbooks help you to create visual reports that help in data analysis. This article describes the various features that Workbooks offer in Update management center (preview).
+Workbooks help you to create visual reports that help in data analysis. This article describes the various features that Workbooks offer in Update Manager (preview).
## Key benefits

- Provides a canvas for data analysis and creation of visual reports
The gallery lists all the saved workbooks and templates for your workspace. You
- In the **Recently modified** tile, you can view and edit the workbooks.
-- In the **Update management center** tile, you can view the following summary:
+- In the **Update Manager** tile, you can view the following summary:
:::image type="content" source="./media/workbooks/workbooks-summary-inline.png" alt-text="Screenshot of workbook summary." lightbox="./media/workbooks/workbooks-summary-expanded.png":::
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md
To resolve this issue, first reinstall the side-by-side stack:
1. From a command prompt run `qwinsta.exe` again and verify the *STATE* column for **rdp-tcp** and **rdp-sxs** entries is **Listen**. If not, you must [re-register your VM and reinstall the agent](#your-issue-isnt-listed-here-or-wasnt-resolved) component.
-## Error: Session host VMs are stuck in Unavailable state
+## Error: Session hosts are stuck in Unavailable state
If your session host VMs are stuck in the Unavailable state, your VM didn't pass one of the health checks listed in [Health check](troubleshoot-statuses-checks.md#health-check). You must resolve the issue that's causing the VM to not pass the health check.
-## Error: VMs are stuck in the "Needs Assistance" state
+## Error: Session hosts are stuck in the Needs Assistance state
+
+There are several health checks that can cause your session host VMs to be stuck in the **Needs Assistance** state: *UrlsAccessibleCheck*, *MetaDataServiceCheck*, and *MonitoringAgentCheck*.
+
+### UrlsAccessibleCheck
If the session host doesn't pass the *UrlsAccessibleCheck* health check, you'll need to identify which [required URL](safe-url-list.md) your deployment is currently blocking. Once you know which URL is blocked, identify which setting is blocking that URL and remove it.
If your local hosts file is blocking the required URLs, make sure none of the re
**Name:** DataBasePath
+### MetaDataServiceCheck
+
If the session host doesn't pass the *MetaDataServiceCheck* health check, then the service can't access the IMDS endpoint. To resolve this issue, you'll need to do the following things:
- Reconfigure your networking, firewall, or proxy settings to unblock the IP address 169.254.169.254.
If your issue is caused by a web proxy, add an exception for 169.254.169.254 in
netsh winhttp set proxy proxy-server="http=<customerwebproxyhere>" bypass-list="169.254.169.254"
```
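To confirm from the session host that the IMDS endpoint is reachable again, you can query it directly; a minimal sketch (the `Metadata: true` header is required, and the call must not go through a proxy):

```powershell
# Query the Azure Instance Metadata Service directly from the session host.
Invoke-RestMethod -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
```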
+### MonitoringAgentCheck
+
+If the session host doesn't pass the *MonitoringAgentCheck* health check, you'll need to check the *Remote Desktop Services Infrastructure Geneva Agent* and validate that it's functioning correctly on the session host:
+
+1. Verify that the Remote Desktop Services Infrastructure Geneva Agent is installed on the session host. You can verify this in the list of installed programs on the session host. If you see multiple versions of this agent installed, uninstall older versions and keep only the latest version installed.
+
+1. If you don't find the Remote Desktop Services Infrastructure Geneva Agent installed on the session host, review the log located at *C:\Program Files\Microsoft RDInfra\GenevaInstall.txt* to see whether installation is failing due to an error.
+
+1. Verify that the scheduled task *GenevaTask_\<version\>* is created; a sketch for this check follows this list. This scheduled task must be enabled and running. If it's not, reinstall the agent using the `.msi` file named **Microsoft.RDInfra.Geneva.Installer-x64-\<version\>.msi**, which is available at **C:\Program Files\Microsoft RDInfra**.
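A small sketch for that scheduled-task check, assuming the task name pattern above:

```powershell
# Look for the Geneva agent's scheduled task and report its state.
$task = Get-ScheduledTask -TaskName "GenevaTask*" -ErrorAction SilentlyContinue
if ($task) {
    $task | Select-Object TaskName, State
} else {
    "No GenevaTask_* scheduled task found; consider reinstalling the agent."
}
```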
## Error: Connection not found: RDAgent does not have an active connection to the broker

Your session host VMs may be at their connection limit and can't accept new connections.
virtual-desktop Troubleshoot Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-insights.md
If your data isn't displaying properly, check the following common solutions:
- [Log Analytics Firewall Requirements](../azure-monitor/agents/log-analytics-agent.md#firewall-requirements).
- Not seeing data from recent activity? You may want to wait for 15 minutes and refresh the feed. Azure Monitor has a 15-minute latency period for populating log data. To learn more, see [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md).
-If you're not missing any information but your data still isn't displaying properly, there may be an issue in the query or the data sources. Review [known issues and limitations](#known-issues-and-limitations).
+If you're not missing any information but your data still isn't displaying properly, there may be an issue in the query or the data sources. For more information, see [known issues and limitations](#known-issues-and-limitations).
# [Azure Monitor Agent (preview)](#tab/monitor)
If this article doesn't have the data point you need to resolve an issue, you ca
- To learn how to leave feedback, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md).
- You can also leave feedback for Azure Virtual Desktop at the [Azure Virtual Desktop feedback hub](https://support.microsoft.com/help/4021566/windows-10-send-feedback-to-microsoft-with-feedback-hub-app).
++
## Known issues and limitations

The following are issues and limitations we're aware of and working to fix:
The following are issues and limitations we're aware of and working to fix:
- Do you see contradicting or unexpected connection times? While rare, a connection's completion event can go missing and can impact some visuals and metrics.
- Time to connect includes the time it takes users to enter their credentials; this correlates to the experience but in some cases can show false peaks.
--
## Next steps

- To get started, see [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md).
virtual-desktop Troubleshoot Statuses Checks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-statuses-checks.md
Title: Azure Virtual Desktop session host statuses and health checks description: How to troubleshoot the failed session host statuses and failed health checks-+ Last updated 05/03/2023--+ # Azure Virtual Desktop session host statuses and health checks
The following table lists all statuses for session hosts in the Azure portal eac
| Session host status | Description | How to resolve related issues |
|---|---|---|
|Available| This status means that the session host passed all health checks and is available to accept user connections. If a session host has reached its maximum session limit but has passed health checks, it's still listed as "Available." |N/A|
-|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. In this state, users can connect to VMs, but their user experience may degrade. You can find which health checks failed in the Azure portal by going to the **Session hosts** tab and selecting the name of your session host. |Follow the directions in [Error: VMs are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
+|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. In this state, users can connect to VMs, but their user experience may degrade. You can find which health checks failed in the Azure portal by going to the **Session hosts** tab and selecting the name of your session host. |Follow the directions in [Error: Session hosts are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-session-hosts-are-stuck-in-the-needs-assistance-state) to resolve the issue.|
|Shutdown| The session host has been shut down. If the agent enters a shutdown state before connecting to the broker, its status changes to *Unavailable*. If you've shut down your session host and see an *Unavailable* status, that means the session host shut down before it could update the status, and doesn't indicate an issue. You should use this status with the [VM instance view API](/rest/api/compute/virtual-machines/instance-view?tabs=HTTP#virtualmachineinstanceview) to determine the power state of the VM. |Turn on the session host. |
|Unavailable| The session host is either turned off or hasn't passed fatal health checks, which prevents user sessions from connecting to this session host. |If the session host is off, turn it back on. If the session host didn't pass the domain join check or side-by-side stack listener health checks, refer to the table in [Health check](#health-check) for ways to resolve the issue. If the status is still "Unavailable" after following those directions, open a support case.|
|Upgrade Failed| This status means that the Azure Virtual Desktop Agent couldn't update or upgrade. This status doesn't affect new nor existing user sessions. |Follow the instructions in the [Azure Virtual Desktop Agent troubleshooting article](troubleshoot-agent.md).|
The health check is a test run by the agent on the session host. The following t
| Geneva Monitoring Agent | Verifies that the session host has a healthy monitoring agent by checking if the monitoring agent is installed and running in the expected registry location. | If this check fails, it's semi-fatal. There may be successful connections, but they'll contain no logging information. To resolve this issue, make sure a monitoring agent is installed. If it's already installed, contact Microsoft support. | | Integrated Maintenance Data System (IMDS) reachable | Verifies that the service can't access the IMDS endpoint. | If this check fails, it's semi-fatal. There may be successful connections, but they won't contain logging information. To resolve this issue, you'll need to reconfigure your networking, firewall, or proxy settings. | | Side-by-side (SxS) Stack Listener | Verifies that the side-by-side stack is up and running, listening, and ready to receive connections. | If this check fails, it's fatal, and users won't be able to connect to the session host. Try restarting your virtual machine (VM). If restarting doesn't work, contact Microsoft support. |
-| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this issue, follow the directions in [Error: VMs are stuck in the Needs Assistance state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state). |
+| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this issue, follow the directions in [Error: Session hosts are stuck in the Needs Assistance state](troubleshoot-agent.md#error-session-hosts-are-stuck-in-the-needs-assistance-state). |
| TURN (Traversal Using Relay NAT) Relay Access Health Check | When using [RDP Shortpath for public networks](rdp-shortpath.md?tabs=public-networks#how-rdp-shortpath-works) with an indirect connection, TURN uses User Datagram Protocol (UDP) to relay traffic between the client and session host through an intermediate server when direct connection isn't possible. | If this check fails, it's not fatal. Connections revert to the websocket TCP and the session host enters the "Needs assistance" state. To resolve the issue, follow the instructions in [Disable RDP Shortpath on managed and unmanaged windows clients using group policy](configure-rdp-shortpath.md?tabs=public-networks#disable-rdp-shortpath-on-managed-and-unmanaged-windows-clients-using-group-policy). | | App attach health check | Verifies that the [MSIX app attach](what-is-app-attach.md) service is working as intended during package staging or destaging. | If this check fails, it isn't fatal. However, certain apps stop working for end-users. | | Domain reachable | Verifies the domain the session host is joined to is still reachable. | If this check fails, it's fatal. The service won't be able to connect if it can't reach the domain. |
virtual-desktop Deploy Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/deploy-diagnostics.md
Title: Deploy the diagnostics tool for Azure Virtual Desktop (classic) - Azure
description: How to deploy the diagnostics UX tool for Azure Virtual Desktop (classic). + Last updated 12/15/2020
You can also interact with users on the session host:
## Next steps - Learn how to monitor activity logs at [Use diagnostics with Log Analytics](diagnostics-log-analytics-2019.md).-- Read about common error scenarios and how to fix them at [Identify and diagnose issues](diagnostics-role-service-2019.md).
+- Read about common error scenarios and how to fix them at [Identify and diagnose issues](diagnostics-role-service-2019.md).
virtual-desktop Manage Resources Using Ui Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui-powershell.md
Last updated 03/30/2020 -+
virtual-desktop Whats New Client Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-android-chrome-os.md
description: Learn about recent changes to the Remote Desktop client for Android
Previously updated : 01/04/2023 Last updated : 08/21/2023 # What's new in the Remote Desktop client for Android and Chrome OS
virtual-desktop Whats New Client Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-macos.md
description: Learn about recent changes to the Remote Desktop client for macOS
Previously updated : 06/26/2023 Last updated : 08/21/2023 # What's new in the Remote Desktop client for macOS
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
description: Learn about incremental snapshots for managed disks, including how
Previously updated : 08/11/2023 Last updated : 08/17/2023 ms.devlang: azurecli
ms.devlang: azurecli
# [Azure CLI](#tab/azure-cli)
-You can use the Azure CLI to create an incremental snapshot. You'll need the latest version of the Azure CLI. See the following articles to learn how to either [install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) the Azure CLI.
+You can use the Azure CLI to create an incremental snapshot. You need the latest version of the Azure CLI. See the following articles to learn how to either [install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) the Azure CLI.
-The following script will create an incremental snapshot of a particular disk:
+The following script creates an incremental snapshot of a particular disk:
```azurecli
# Declare variables
yourDiskID=$(az disk show -n $diskName -g $resourceGroupName --query "id" --outp
az snapshot create -g $resourceGroupName -n $snapshotName --source $yourDiskID --incremental true
```
-> [!IMPORTANT]
-> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
- You can identify incremental snapshots from the same disk with the `SourceResourceId` property of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. You can use `SourceResourceId` to create a list of all snapshots associated with a particular disk. Replace `yourResourceGroupNameHere` with your value and then you can use the following example to list your existing incremental snapshots:
az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremen
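A complete sketch of the listing flow, reusing the same placeholder values, looks like:

```azurecli
# Declare variables (placeholder values; substitute your own)
subscriptionId="yourSubscriptionId"
resourceGroupName="yourResourceGroupNameHere"
diskName="yourDiskNameHere"

az account set --subscription $subscriptionId

# Resolve the parent disk's resource ID, then list its incremental snapshots in a table
diskId=$(az disk show -n $diskName -g $resourceGroupName --query [id] -o tsv)

az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremental]" -g $resourceGroupName --output table
```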
# [Azure PowerShell](#tab/azure-powershell)
-You can use the Azure PowerShell module to create an incremental snapshot. You'll need the latest version of the Azure PowerShell module. The following command will either install it or update your existing installation to latest:
+You can use the Azure PowerShell module to create an incremental snapshot. You need the latest version of the Azure PowerShell module. The following command either installs it or updates your existing installation to the latest version:
```PowerShell Install-Module -Name Az -AllowClobber -Scope CurrentUser
$snapshotConfig=New-AzSnapshotConfig -SourceUri $yourDisk.Id -Location $yourDisk
New-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName -Snapshot $snapshotConfig ```
-> [!IMPORTANT]
-> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
- You can identify incremental snapshots from the same disk with the `SourceResourceId` and the `SourceUniqueId` properties of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. `SourceUniqueId` is the value inherited from the `UniqueId` property of the disk. If you delete a disk and then create a new disk with the same name, the value of the `UniqueId` property changes. You can use `SourceResourceId` and `SourceUniqueId` to create a list of all snapshots associated with a particular disk. Replace `yourResourceGroupNameHere` with your value and then you can use the following example to list your existing incremental snapshots:
$incrementalSnapshots
# [Portal](#tab/azure-portal) [!INCLUDE [virtual-machines-disks-incremental-snapshots-portal](../../includes/virtual-machines-disks-incremental-snapshots-portal.md)]
-> [!IMPORTANT]
-> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
- # [Resource Manager Template](#tab/azure-resource-manager) You can also use Azure Resource Manager templates to create an incremental snapshot. You'll need to make sure the apiVersion is set to **2022-03-22** and that the incremental property is also set to true. The following snippet is an example of how to create an incremental snapshot with Resource Manager templates:
You can also use Azure Resource Manager templates to create an incremental snaps
] } ```
-> [!IMPORTANT]
-> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
-## Check status of snapshots or disks
-
-Incremental snapshots of Premium SSD v2 or Ultra Disks can't be used to create new disks until the background process copying the data into the snapshot has completed. Similarly, Premium SSD v2 or Ultra Disks created from incremental snapshots can't be attached to a VM until the background process copying the data into the disk has completed.
-
-You can use either the [CLI](#cli) or [PowerShell](#powershell) sections to check the status of the background copy from a disk to a snapshot and you can use the [Check disk creation status](#check-disk-creation-status) section to check the status of a background copy from a snapshot to a disk.
-
-### CLI
-
-You have two options for getting the status of snapshots. You can either get a [list of all incremental snapshots associated with a specific disk](#clilist-incremental-snapshots), and their respective status, or you can get the [status of an individual snapshot](#cliindividual-snapshot).
-
-#### CLI - List incremental snapshots
-
-The following script returns a list of all snapshots associated with a particular disk. The value of the `CompletionPercent` property of any snapshot must be 100 before it can be used. Replace `yourResourceGroupNameHere`, `yourSubscriptionId`, and `yourDiskNameHere` with your values then run the script:
-
-```azurecli
-# Declare variables and create snapshot list
-subscriptionId="yourSubscriptionId"
-resourceGroupName="yourResourceGroupNameHere"
-diskName="yourDiskNameHere"
-
-az account set --subscription $subscriptionId
-
-diskId=$(az disk show -n $diskName -g $resourceGroupName --query [id] -o tsv)
-
-az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremental]" -g $resourceGroupName --output table
-```
-
-#### CLI - Individual snapshot
-
-You can also check the status of an individual snapshot by checking the `CompletionPercent` property. Replace `$sourceSnapshotName` with the name of your snapshot then run the following command. The value of the property must be 100 before you can use the snapshot for restoring disk or generate a SAS URI for downloading the underlying data.
-
-```azurecli
-az snapshot show -n $sourceSnapshotName -g $resourceGroupName --query [completionPercent] -o tsv
-```
-
-### PowerShell
-
-You have two options for getting the status of snapshots. You can either get a [list of all incremental snapshots associated with a particular disk](#powershelllist-incremental-snapshots) and their respective status, or you can get the [status of an individual snapshot](#powershellindividual-snapshots).
-
-#### PowerShell - List incremental snapshots
-
-The following script returns a list of all incremental snapshots associated with a particular disk that haven't completed their background copy. Replace `yourResourceGroupNameHere` and `yourDiskNameHere`, then run the script.
-
-```azurepowershell
-$resourceGroupName = "yourResourceGroupNameHere"
-$snapshots = Get-AzSnapshot -ResourceGroupName $resourceGroupName
-$diskName = "yourDiskNameHere"
-
-$yourDisk = Get-AzDisk -DiskName $diskName -ResourceGroupName $resourceGroupName
-
-$incrementalSnapshots = New-Object System.Collections.ArrayList
-
-foreach ($snapshot in $snapshots)
-{
- if($snapshot.Incremental -and $snapshot.CreationData.SourceResourceId -eq $yourDisk.Id -and $snapshot.CreationData.SourceUniqueId -eq $yourDisk.UniqueId)
- {
-    # Re-fetch the snapshot by name to read its CompletionPercent
-    $targetSnapshot = Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshot.Name
-
-    if($targetSnapshot.CompletionPercent -lt 100)
-    {
-      $incrementalSnapshots.Add($targetSnapshot)
-    }
- }
-}
-
-$incrementalSnapshots
-```
-
-#### PowerShell - individual snapshots
-
-You can check the `CompletionPercent` property of an individual snapshot to get its status. Replace `yourResourceGroupNameHere` and `yourSnapshotName` then run the script. The value of the property must be 100 before you can use the snapshot for restoring disk or generate a SAS URI for downloading the underlying data.
-
-```azurepowershell
-$resourceGroupName = "yourResourceGroupNameHere"
-$snapshotName = "yourSnapshotName"
-
-$targetSnapshot=Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName
-
-$targetSnapshot.CompletionPercent
-```
-
-### Check disk creation status
-
-When creating a disk from either a Premium SSD v2 or an Ultra Disk snapshot, you must wait for the background copy process to complete before you can attach it. Currently, you must use the Azure CLI to check the progress of the copy process.
-
-The following script gives you the status of an individual disk's copy process. The value of `completionPercent` must be 100 before the disk can be attached.
-
-```azurecli
-subscriptionId=yourSubscriptionID
-resourceGroupName=yourResourceGroupName
-diskName=yourDiskName
-
-az account set --subscription $subscriptionId
-
-az disk show -n $diskName -g $resourceGroupName --query [completionPercent] -o tsv
-```
- ## Check sector size Snapshots with a 4096 logical sector size can only be used to create Premium SSD v2 or Ultra Disks. They can't be used to create other disk types. Snapshots of disks with 4096 logical sector size are stored as VHDX, whereas snapshots of disks with 512 logical sector size are stored as VHD. Snapshots inherit the logical sector size from the parent disk.
az snapshot show -g resourcegroupname -n snapshotname --query [creationData.logi
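Judging from the truncated command above, the property being queried is presumably `creationData.logicalSectorSize`; a hedged sketch of the full check:

```azurecli
# Show the snapshot's logical sector size (512 or 4096); the property path is assumed from the truncated example above
az snapshot show -g yourResourceGroupNameHere -n yourSnapshotNameHere --query "creationData.logicalSectorSize" -o tsv
```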
See [Copy an incremental snapshot to a new region](disks-copy-incremental-snapshot-across-regions.md) to learn how to copy an incremental snapshot across regions.
-If you have additional questions on snapshots, see the [snapshots](faq-for-disks.yml#snapshots) section of the FAQ.
+If you have more questions on snapshots, see the [snapshots](faq-for-disks.yml#snapshots) section of the FAQ.
If you'd like to see sample code demonstrating the differential capability of incremental snapshots, using .NET, see [Copy Azure Managed Disks backups to another region with differential capability of incremental snapshots](https://github.com/Azure-Samples/managed-disks-dotnet-backup-with-incremental-snapshots).
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 07/12/2023 Last updated : 08/17/2023
To deploy a Premium SSD v2, see [Deploy a Premium SSD v2](disks-deploy-premium-v
## Premium SSDs
-Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. To take advantage of the speed and performance of Premium SSDs, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs only supports 512E sector size.
+Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. To take advantage of the speed and performance of Premium SSDs, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs support the [512E sector size](https://en.wikipedia.org/wiki/Advanced_Format#512_emulation_(512e)).
To learn more about individual Azure VM types and sizes for Windows or Linux, including size compatibility for premium storage, see [Sizes for virtual machines in Azure](sizes.md). You'll need to check each individual VM size article to determine if it's premium storage-compatible.
For Premium SSDs, each I/O operation less than or equal to 256 kB of throughput
## Standard SSDs
-Azure standard SSDs are optimized for workloads that need consistent performance at lower IOPS levels. They're an especially good choice for customers with varying workloads supported by on-premises hard disk drive (HDD) solutions. Compared to standard HDDs, standard SSDs deliver better availability, consistency, reliability, and latency. Standard SSDs are suitable for web servers, low IOPS application servers, lightly used enterprise applications, and non-production workloads. Like standard HDDs, standard SSDs are available on all Azure VMs. Standard SSD only supports 512E sector size.
+Azure standard SSDs are optimized for workloads that need consistent performance at lower IOPS levels. They're an especially good choice for customers with varying workloads supported by on-premises hard disk drive (HDD) solutions. Compared to standard HDDs, standard SSDs deliver better availability, consistency, reliability, and latency. Standard SSDs are suitable for web servers, low IOPS application servers, lightly used enterprise applications, and non-production workloads. Like standard HDDs, standard SSDs are available on all Azure VMs. Standard SSDs support the [512E sector size](https://en.wikipedia.org/wiki/Advanced_Format#512_emulation_(512e)).
### Standard SSD size
Standard SSDs offer disk bursting, which provides better tolerance for the unpre
## Standard HDDs
-Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-tolerant workloads. With standard storage, your data is stored on HDDs, and performance may vary more widely than that of SSD-based disks. Standard HDDs are designed to deliver write latencies of less than 10 ms and read latencies of less than 20 ms for most IO operations. Actual performance may vary depending on IO size and workload pattern, however. When working with VMs, you can use standard HDD disks for dev/test scenarios and less critical workloads. Standard HDDs are available in all Azure regions and can be used with all Azure VMs. Standard HDDs only supports 512E sector size.
+Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-tolerant workloads. With standard storage, your data is stored on HDDs, and performance may vary more widely than that of SSD-based disks. Standard HDDs are designed to deliver write latencies of less than 10 ms and read latencies of less than 20 ms for most IO operations. Actual performance may vary depending on IO size and workload pattern, however. When working with VMs, you can use standard HDD disks for dev/test scenarios and less critical workloads. Standard HDDs are available in all Azure regions and can be used with all Azure VMs. Standard HDDs support the [512E sector size](https://en.wikipedia.org/wiki/Advanced_Format#512_emulation_(512e)).
### Standard HDD size [!INCLUDE [disk-storage-standard-hdd-sizes](../../includes/disk-storage-standard-hdd-sizes.md)]
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
The memory-optimized Ebsv5 and Ebdsv5 Azure virtual machine (VM) series deliver higher remote storage performance in each VM size than the [Ev4 series](ev4-esv4-series.md). The increased remote storage performance of the Ebsv5 and Ebdsv5 VMs is ideal for storage throughput-intensive workloads. For example, relational databases and data analytics applications.
-The Ebsv5 and Ebdsv5 VMs offer up to 260000 IOPS and 8000 MBps of remote disk storage throughput. Both series also include up to 672 GiB of RAM. The Ebdsv5 series has local SSD storage up to 3800 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk and Ebsv5 is without a local disk. Standard SSDs and Standard HDD disk storage aren't supported in the Ebv5 series.
+The Ebsv5 and Ebdsv5 VMs offer up to 260000 IOPS and 8000 MBps of remote disk storage throughput. Both series also include up to 672 GiB of RAM. The Ebdsv5 series has local SSD storage up to 3800 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk; the Ebsv5 series doesn't. We recommend choosing Premium SSD, Premium SSD v2, or Ultra Disks to attain the published disk performance.
The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration. The series are ideal for various memory-intensive enterprise applications. They feature:
Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processo
- SCSI Interface: Supported on Generation 1 and 2 VMs ## Ebdsv5 Series (SCSI)
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
|||||||||||||
| Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 |
| Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350 | 30000/1200 | 2 | 12500 |
| Standard_E8bds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 22000/625 | 40000/1200 | 29480/625 | 60000/1200 | 4 | 12500 |
-| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 4 | 12500 |
+| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 8 | 12500 |
| Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/2000 | 88000/2500 | 120000/4000 | 117920/2500 | 160000/4000 | 8 | 16000 |
| Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/3000 | 120000/4000 | 120000/4000 | 160000/4000 | 160000/4000 | 8 | 16000 |
| Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 120000/4000 | 120000/4000 | 160000/4000 | 160000/4000 | 8 | 20000 |
| Standard_E96bds_v5 | 96 | 672 | 3600 | 32 | 450000/4000 | 120000/4000 | 120000/4000 | 160000/4000 | 160000/4000 | 8 | 25000 |

## Ebdsv5 Series (NVMe)
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
|||||||||||||
| Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 |
| Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350 | 30000/1200 | 2 | 12500 |
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
- NVMe Interface: Supported only on Generation 2 VMs
- SCSI Interface: Supported on Generation 1 and Generation 2 VMs

## Ebsv5 Series (SCSI)
-| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
| | | | | | | | | | |
| Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 |
| Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350 | 30000/1200 | 2 | 12500 |
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
| Standard_E96bs_v5 | 96 | 672 | 32 | 120000/4000 | 120000/4000 | 160000/4000 | 160000/4000 | 8 | 25000 |

## Ebsv5 Series (NVMe)
-| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
| | | | | | | | | | |
| Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 |
| Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350 | 30000/1200 | 2 | 12500 |
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
az feature register --namespace Microsoft.VirtualMachineImages --name MooncakePu
## OS support
-VM Image Builder supports the following Azure Marketplace base operating system images:
-- Ubuntu 18.04-- Ubuntu 16.04-- RHEL 7.6, 7.7-- CentOS 7.6, 7.7-- SLES 12 SP4-- SLES 15, SLES 15 SP1-- Windows 10 RS5 Enterprise/Enterprise multi-session/Professional-- Windows 2016-- Windows 2019-- CBL-Mariner-
->[!IMPORTANT]
-> These operating systems have been tested and now work with VM Image Builder. However, VM Image Builder should work with any Linux or Windows image in the marketplace.
+VM Image Builder is designed to work with all Azure Marketplace base operating system images.
++ > [!NOTE] > You can now use the Azure Image Builder service inside the portal as of March 2023. [Get started](https://ms.portal.azure.com/#create/Microsoft.ImageTemplate) with building and validating custom images inside the portal.
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault-aad.md
Last updated 01/04/2023--+ # Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release) for Linux VMs
If you would like to use certificate authentication and wrap the encryption key
## Next steps
-[Enable Azure Disk Encryption with Azure AD on Linux VMs (previous release)](disk-encryption-linux-aad.md)
+[Enable Azure Disk Encryption with Azure AD on Linux VMs (previous release)](disk-encryption-linux-aad.md)
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
description: Learn how to upload a VHD to an Azure managed disk and copy a manag
Previously updated : 01/03/2023 Last updated : 08/16/2023
sourceDiskSizeBytes=$(az disk show -g $sourceRG -n $sourceDiskName --query '[dis
az disk create -g $targetRG -n $targetDiskName -l $targetLocation --os-type $targetOS --for-upload --upload-size-bytes $(($sourceDiskSizeBytes+512)) --sku standard_lrs
-targetSASURI=$(az disk grant-access -n $targetDiskName -g $targetRG --access-level Write --duration-in-seconds 86400 -o tsv)
+targetSASURI=$(az disk grant-access -n $targetDiskName -g $targetRG --access-level Write --duration-in-seconds 86400 --query [accessSas] -o tsv)
sourceSASURI=$(az disk grant-access -n $sourceDiskName -g $sourceRG --duration-in-seconds 86400 --query [accessSas] -o tsv)
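Once both SAS URIs are granted, the copy itself is typically performed with AzCopy. A minimal sketch, assuming AzCopy v10 is installed, including the cleanup step to revoke access afterward:

```azurecli
# Copy the underlying data from the source disk to the target disk as a page blob
azcopy copy "$sourceSASURI" "$targetSASURI" --blob-type PageBlob

# Revoke the SAS access on both disks once the copy completes
az disk revoke-access -n $sourceDiskName -g $sourceRG
az disk revoke-access -n $targetDiskName -g $targetRG
```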
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
The `customization.log` file includes the following stages:
- Ensure that Azure Policy and Firewall allow connectivity to remote resources. - Output comments to the console by using `Write-Host` or `echo`. Doing so lets you search the *customization.log* file. + ## Troubleshoot common build errors
+### The template deployment failed because of policy violation
+
+#### Error
+
+```text
+{
+ "statusCode": "BadRequest",
+ "serviceRequestId": null,
+ "statusMessage": "{\"error\":{\"code\":\"InvalidTemplateDeployment\",\"message\":\"The template deployment failed because of policy violation. Please see details for more information.\",\"details\":[{\"code\":\"RequestDisallowedByPolicy\",\"target\":\"<target_name>\",\"message\":\"Resource '<resource_name>' was disallowed by policy. Policy identifiers: '[{\\\"policyAssignment\\\":{\\\"name\\\":\\\"[Initiative] KeyVault (Microsoft.KeyVault)\\\",\\\"id\\\":\\\"/providers/Microsoft.Management/managementGroups/<managementGroup_name>/providers/Microsoft.Authorization/policyAssignments/Microsoft.KeyVault\\\"},\\\"policyDefinition\\\":{\\\"name\\\":\\\"Azure Key Vault should disable public network access\\\",\\\"id\\\":\\\"/providers/Microsoft.Management/managementGroups/<managementGroup_name>/providers/Microsoft.Authorization/policyDefinitions/KeyVault.disablePublicNetworkAccess_deny_deny\\\"},\\\"policySetDefinition\\\":{\\\"name\\\":\\\"[Initiative] KeyVault (Microsoft.KeyVault)\\\",\\\"id\\\":\\\"/providers/Microsoft.Management/managementGroups/<managementGroup_name>/providers/Microsoft.Authorization/policySetDefinitions/Microsoft.KeyVault\\\"}}]'.\",\"additionalInfo\":[{\"type\":\"PolicyViolation\"}]}]}}",
+ "eventCategory": "Administrative",
+ "entity": "/subscriptions/<subscription_ID>/<resourcegroups>/<resourcegroupname>/providers/Microsoft.Resources/deployments/<deployment_name>",
+ "message": "Microsoft.Resources/deployments/validate/action",
+ "hierarchy": "<subscription_ID>/<resourcegroupname>/<policy_name>/<managementGroup_name>/<deployment_ID>"
+}
+```
+
+#### Cause
+
+This policy violation error results from using an Azure Key Vault that has public access disabled. At this time, Azure Image Builder doesn't support this configuration.
+
+#### Solution
+
+The Azure Key Vault must be created with public access enabled.
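As a hedged sketch, the vault can be created (or an existing one updated) with public network access enabled; the vault, resource group, and location names here are placeholder assumptions:

```azurecli
# Create a key vault with public network access enabled (names are placeholder assumptions)
az keyvault create --name myImageBuilderVault --resource-group myResourceGroup --location eastus --public-network-access Enabled

# Or re-enable public network access on an existing vault
az keyvault update --name myImageBuilderVault --resource-group myResourceGroup --public-network-access Enabled
```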
+ ### Packer build command failure #### Error
virtual-machines Migration Classic Resource Manager Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-errors.md
This article catalogs the most common errors and mitigations during the migratio
| Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it's a PaaS deployment (Web/Worker). |This happens when a deployment contains a web/worker role. Since migration is only supported for Virtual Machines, remove the web/worker role from the deployment and try migration again. | | Template {template-name} deployment failed. CorrelationId={guid} |In the backend of migration service, we use Azure Resource Manager templates to create resources in the Azure Resource Manager stack. Since templates are idempotent, usually you can safely retry the migration operation to get past this error. If this error continues to persist, [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) and give them the CorrelationId. <br><br> **NOTE:** Once the incident is tracked by the support team, don't attempt any self-mitigation as this might have unintended consequences on your environment. | | The virtual network {virtual-network-name} doesn't exist. |This can happen if you created the Virtual Network in the new Azure portal. The actual Virtual Network name follows the pattern "Group * \<VNET name>" |
-| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which isn't supported in Azure Resource Manager. It's recommended to uninstall it from the VM before continuing with migration. |XML extensions such as BGInfo 1.\* aren't supported in Azure Resource Manager. Therefore, these extensions can't be migrated. If these extensions are left installed on the virtual machine, they're automatically uninstalled before completing the migration. |
+| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which isn't supported in Azure Resource Manager. It's recommended to uninstall it from the VM before continuing with migration. |**NOTE:** This error message is in the process of being updated; going forward, **you must uninstall the extension before the migration**. XML extensions such as BGInfo 1.\* aren't supported in Azure Resource Manager. Therefore, these extensions can't be migrated. |
| VM {vm-name} in HostedService {hosted-service-name} contains Extension VMSnapshot/VMSnapshotLinux, which is currently not supported for Migration. Uninstall it from the VM and add it back using Azure Resource Manager after the Migration is Complete |This is the scenario where the virtual machine is configured for Azure Backup. Since this is currently an unsupported scenario, follow the workaround at https://aka.ms/vmbackupmigration | | VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} whose Status isn't being reported from the VM. Hence, this VM can't be migrated. Ensure that the Extension status is being reported or uninstall the extension from the VM and retry migration. <br><br> VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} reporting Handler Status: {handler-status}. Hence, the VM can't be migrated. Ensure that the Extension handler status being reported is {handler-status} or uninstall it from the VM and retry migration. <br><br> VM Agent for VM {vm-name} in HostedService {hosted-service-name} is reporting the overall agent status as Not Ready. Hence, the VM may not be migrated, if it has a migratable extension. Ensure that the VM Agent is reporting overall agent status as Ready. Refer to https://aka.ms/classiciaasmigrationfaqs. |Azure guest agent & VM Extensions need outbound internet access to the VM storage account to populate their status. Common causes of status failure include <li> a Network Security Group that blocks outbound access to the internet <li> If the VNET has on premises DNS servers and DNS connectivity is lost <br><br> If you continue to see an unsupported status, you can uninstall the extensions to skip this check and move forward with migration. | | Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has multiple Availabilities Sets. |Currently, only hosted services that have 1 or less Availability sets can be migrated. To work around this problem, move the additional availability sets, and Virtual machines in those availability sets, to a different hosted service. |
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
An individual VM restore point is a resource that stores VM configuration and po
VM restore points support both application consistency and crash consistency (in preview). Application consistency is supported for VMs running Windows operating systems, and file system consistency is supported for VMs running the Linux operating system. Application-consistent restore points use VSS writers (or pre/post scripts for Linux) to ensure the consistency of the application data before a restore point is created. To get an application-consistent restore point, the application running in the VM needs to provide a VSS writer (for Windows), or pre and post scripts (for Linux) to achieve application consistency.
-Crash consistent VM restore point stores the VM configuration and point-in-time write-order consistent snapshots for all managed disks attached to a Virtual Machine. This is same as the status of data in the VM after a power outage or a crash. "consistencyMode" optional parameter has to be set to "crashConsistent" in the creation request. This feature is currently in preview.
+A multi-disk crash-consistent VM restore point stores the VM configuration and point-in-time write-order consistent snapshots for all managed disks attached to a virtual machine. This matches the state of the data in the VM after a power outage or a crash. The optional "consistencyMode" parameter has to be set to "crashConsistent" in the creation request. This feature is currently in preview.
+
+> [!NOTE]
+> For disks configured with read/write host caching, multi-disk crash consistency can't be guaranteed because writes occurring while the snapshot is taken might not have been acknowledged by Azure Storage. If maintaining consistency is crucial, we advise using the application consistency mode.
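As a minimal sketch, assuming the `az restore-point` command group and placeholder resource names (the exact parameter names may differ), a crash-consistent restore point can be requested like this:

```azurecli
# Create a restore point collection for the VM, then a crash-consistent restore point in it
az restore-point collection create --resource-group myResourceGroup --collection-name myRpCollection \
    --source-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"

az restore-point create --resource-group myResourceGroup --collection-name myRpCollection \
    --name myRestorePoint --consistency-mode CrashConsistent
```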
VM restore points are organized into restore point collections. A restore point collection is an Azure Resource Management resource that contains the restore points for a specific VM. If you want to utilize ARM templates for creating restore points and restore point collections, visit the public [Virtual-Machine-Restore-Points](https://github.com/Azure/Virtual-Machine-Restore-Points) repository on GitHub.
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault-aad.md
Last updated 01/04/2023---+ # Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release)
If you would like to use certificate authentication and wrap the encryption key
## Next steps
-[Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md)
+[Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md)
virtual-machines Ubuntu Pro In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/canonical/ubuntu-pro-in-place-upgrade.md
**Applies to:** :heavy_check_mark: Linux virtual machines
-Customers can now upgrade from Ubuntu Server (16.04 or higher) to Ubuntu Pro on your existing Azure Virtual Machines without redeployment or downtime. One of the major use cases includes conversion of Ubuntu 18.04 LTS going EOL to Ubuntu Pro. [Canonical announced that the Ubuntu 18.04 LTS (Bionic Beaver) OS images end-of-life (EOL)....](https://ubuntu.com/18-04/azure) Canonical no longer provides technical support, software updates, or security patches for this version. Customers need to upgrade to Ubuntu Pro to continue to be on Ubuntu 18.04 LTS.
-
-## What's Ubuntu Pro
-Ubuntu Pro is a cross-cloud OS, optimized for Azure, and security maintained for 10 years. The secure use of open-source software allows teams to utilize the latest technologies while meeting internal governance and compliance requirements. Ubuntu Pro 18.04 LTS, remains fully compatible with Ubuntu Server 18.04 LTS, but adds more security enabled by default, including compliance and management tools in a form suitable for small to large-scale Linux operations. Ubuntu Pro 18.04 LTS is fully supported until April 2028. Ubuntu Pro also comes with security patching for all Ubuntu packages due to Extended Security Maintenance (ESM) for Infrastructure and Applications and optional 24/7 phone and ticket support.
-
-Customers using Ubuntu Server 18.04, for example, can upgrade to Ubuntu Pro and continue to receive security patches from Canonical until 2028. Customers can upgrade to Ubuntu Pro via Azure CLI.
-
-## Why developers and devops choose Ubuntu Pro for Azure
-* Access to security updates for 23,000+ packages including Apache Kafka, NGINX, MongoDB, Redis and PostgreSQL, integrated into normal system tools (for example Azure Update Manager, apt)
-* Security hardening and audit tools (CIS) to establish a security baseline across your systems (and help you meet the Azure Linux Security Baseline policy)
+Customers can now upgrade from Ubuntu Server (16.04 or higher) to Ubuntu Pro on their existing Azure
+Virtual Machines without redeployment or downtime. One of the major use cases is the conversion of
+Ubuntu 18.04 LTS, which is going EOL, to Ubuntu Pro.
+[Canonical announced the end-of-life (EOL) of the Ubuntu 18.04 LTS (Bionic Beaver) OS images](https://ubuntu.com/18-04/azure).
+Canonical no longer provides technical support, software updates, or security patches for this
+version. Customers need to upgrade to Ubuntu Pro to continue to be on Ubuntu 18.04 LTS.
+
+## What is Ubuntu Pro?
+
+Ubuntu Pro is a cross-cloud OS, optimized for Azure, and security maintained for 10 years. The
+secure use of open-source software allows teams to utilize the latest technologies while meeting
+internal governance and compliance requirements. Ubuntu Pro 18.04 LTS remains fully compatible with
+Ubuntu Server 18.04 LTS, but adds more security enabled by default, including compliance and
+management tools in a form suitable for small to large-scale Linux operations. Ubuntu Pro 18.04 LTS
+is fully supported until April 2028. Ubuntu Pro also comes with security patching for all Ubuntu
+packages due to Extended Security Maintenance (ESM) for Infrastructure and Applications and optional
+24/7 phone and ticket support.
+
+Customers using Ubuntu Server 18.04, for example, can upgrade to Ubuntu Pro and continue to receive
+security patches from Canonical until 2028. Customers can upgrade to Ubuntu Pro via Azure CLI.
+
+## Why developers and devops choose Ubuntu Pro for Azure
+
+* Access to security updates for 23,000+ packages including Apache Kafka, NGINX, MongoDB, Redis and
+ PostgreSQL, integrated into normal system tools (for example Azure Update Manager, apt)
+* Security hardening and audit tools (CIS) to establish a security baseline across your systems (and
+ help you meet the Azure Linux Security Baseline policy)
* FIPS 140-2 certified modules
-* Common Criteria (CC) EAL2 provisioning packages
-* Kernel Live patch: kernel patches delivered immediately, without the need to reboot
-* Optimized performance: optimized kernel, with improved boot speed, outstanding runtime performance and advanced device support
-* 10-year security maintenance: Ubuntu Pro 18.04 LTS provides security maintenance until April 2028
-* Production ready: Ubuntu is the leading Linux in the public cloud with > 50% of Linux workloads
-* Developer friendly: Ubuntu is the \#1 Linux for developers offering the latest libraries and tools to innovate with the latest technologies
-* Non-stop security: Canonical publishes images frequently, ensuring security is present from the moment an instance launches
-* Portability: Ubuntu is available in all regions with content mirrors to reduce the need to go across regions or out to the Internet for updates
-* Consistent experience across platforms: from edge to multicloud, Ubuntu provides the same experience regardless of the platform. It ensures consistency of your CI/CD pipelines and management mechanisms.
-
-**This document presents the direction to upgrade from an Ubuntu Server (16.04 or higher) image to Ubuntu Pro with zero downtime for upgrade by executing the following steps in your VMs:**
-
-1. Converting to Ubuntu Pro license
-
-2. Validating the license
+* Common Criteria (CC) EAL2 provisioning packages
+* Kernel Live patch: kernel patches delivered immediately, without the need to reboot
+* Optimized performance: optimized kernel, with improved boot speed, outstanding runtime performance
+ and advanced device support
+* 10-year security maintenance: Ubuntu Pro 18.04 LTS provides security maintenance until April 2028
+* Production ready: Ubuntu is the leading Linux in the public cloud with > 50% of Linux workloads
+* Developer friendly: Ubuntu is the \#1 Linux for developers offering the latest libraries and tools
+ to innovate with the latest technologies
+* Non-stop security: Canonical publishes images frequently, ensuring security is present from the
+ moment an instance launches
+* Portability: Ubuntu is available in all regions with content mirrors to reduce the need to go
+ across regions or out to the Internet for updates
+* Consistent experience across platforms: from edge to multicloud, Ubuntu provides the same
+ experience regardless of the platform. It ensures consistency of your CI/CD pipelines and
+ management mechanisms.
+
+> [!NOTE]
+> This document shows how to upgrade from an Ubuntu Server (16.04 or higher) image to
+> Ubuntu Pro with zero downtime by executing the following steps in your VMs:
+>
+> 1. Converting to Ubuntu Pro license
+> 2. Validating the license
+>
+> Converting to UBUNTU_PRO is an irreversible process. You can't even downgrade a VM by running
+> detach. Open a support ticket for any exceptions.
+
+## Convert to Ubuntu Pro using the Azure CLI
->[!NOTE]
-> Converting to UBUNTU_PRO is an irreversible process. You can't even downgrade a VM by running detach. Open a support ticket for any exceptions.
-
-## Convert to Ubuntu Pro using the Azure CLI
```azurecli-interactive # The following will enable Ubuntu Pro on a virtual machine
-az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
+az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
```
-```In-VM commands
+```In-VM commands
# The next step is to execute two in-VM commands
-sudo apt install ubuntu-advantage-tools
-sudo pro auto-attach
+sudo apt install ubuntu-advantage-tools
+sudo pro auto-attach
```
-(Note that "sudo apt install ubuntu-advantage-tools" is only necessary if "pro --version" is lower than 28)
-## Validate the license
+(Note that "sudo apt install ubuntu-advantage-tools" is only necessary if "pro --version" is lower than 28)
+
+## Validate the license
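Assuming the Pro client (version 28 or higher) is installed and the VM is attached, a quick in-VM check is `pro status`, which lists the services enabled by the subscription:

```In-VM commands
# Verify the machine is attached to an Ubuntu Pro subscription and list enabled services
pro status
```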
+ Expected output: ![Screenshot of the expected output.](./expected-output.png) ## Create an Ubuntu Pro VM using the Azure CLI+ You can also create a new VM using the Ubuntu Server images and apply Ubuntu Pro at create time. For example: ```azurecli-interactive # The following will enable Ubuntu Pro on a virtual machine
-az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
+az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
``` ```In-VM commands # The next step is to execute two in-VM commands
-sudo apt install ubuntu-advantage-tools
-sudo pro auto-attach
+sudo apt install ubuntu-advantage-tools
+sudo pro auto-attach
``` >[!NOTE] > For systems with advantage tools version 28 or higher installed the system will perform a pro attach during a reboot. ## Check licensing model using the Azure CLI+ You can use the az vm get-instance-view command to check the status. Look for a licenseType field in the response. If the licenseType field exists and the value is UBUNTU_PRO, your virtual machine has Ubuntu Pro enabled. ```Azure CLI
-az vm get-instance-view -g MyResourceGroup -n MyVm
+az vm get-instance-view -g MyResourceGroup -n MyVm
``` ## Check the licensing model of an Ubuntu Pro enabled VM using Azure Instance Metadata Service+ From within the virtual machine itself, you can query the attested metadata in Azure Instance Metadata Service to determine the virtual machine's licenseType value. A licenseType value of UBUNTU_PRO indicates that your virtual machine has Ubuntu Pro enabled. [Learn more about attested metadata](../../instance-metadata-service.md). ## Billing
-You are charged for Ubuntu Pro as part of the Preview. Visit the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) for more details on Ubuntu Pro pricing. To cancel the Pro subscription during the preview period, open a support ticket through the Azure portal.
+
+You are charged for Ubuntu Pro as part of the Preview. Visit the
+[pricing calculator](https://azure.microsoft.com/pricing/calculator/) for more details on Ubuntu Pro
+pricing. To cancel the Pro subscription during the preview period, open a support ticket through the
+Azure portal.
## Frequently Asked Questions
-#### I launched an Ubuntu Pro VM. Do I need to configure it or enable something else?
-With the availability of outbound internet access, Ubuntu Pro automatically enables premium features such as Extended Security Maintenance for [Main and Universe repositories](https://help.ubuntu.com/community/Repositories) and [live patch](https://ubuntu.com/security/livepatch/docs). Should any specific hardening be required (for example CIS), check the using 'usg' to [harden your servers](https://ubuntu.com/tutorials/comply-with-cis-or-disa-stig-on-ubuntu#1-overview) tutorial. Should you require FIPS, check enabling FIPS tutorials.
+### What are the next steps after launching an Ubuntu Pro VM?
+
+With the availability of outbound internet access, Ubuntu Pro automatically enables premium features
+such as Extended Security Maintenance for
+[Main and Universe repositories](https://help.ubuntu.com/community/Repositories) and
+[live patch](https://ubuntu.com/security/livepatch/docs). Should any specific hardening be required
+(for example CIS), check the
+[harden your servers](https://ubuntu.com/tutorials/comply-with-cis-or-disa-stig-on-ubuntu#1-overview)
+tutorial on using 'usg'. Should you require FIPS, check the tutorials on enabling FIPS.
-For more information about networking requirements for making sure Pro enablement process works (such as egress traffic, endpoints and ports) [check this documentation](https://canonical-ubuntu-pro-client.readthedocs-hosted.com/en/latest/references/network_requirements.html).
+For more information about networking requirements for making sure the Pro enablement process works
+(such as egress traffic, endpoints, and ports),
+[check this documentation](https://canonical-ubuntu-pro-client.readthedocs-hosted.com/en/latest/references/network_requirements.html).
+
+### Does shutting down the machine stop billing?
-#### If I shut down the machine, does the billing continue?
If you launch Ubuntu Pro from Azure Marketplace, you pay as you go, so if you don't have any machine running, you won't pay anything additional.
-#### Can I get volume discounts?
+### Are there volume discounts?
+ Yes. Contact your Microsoft sales representative.
-#### Are Reserved Instances available?
+### Are Reserved Instances available?
+ Yes
-#### If the customer doesn't do the auto attach will they still get attached to pro on reboot?
-If the customer doesn't perform the auto attach, they still get the Pro attached upon reboot. However, this applies only if they have v28 of the Pro client.
+### If the customer doesn't do the auto attach, will they still get attached to Pro on reboot?
+
+If the customer doesn't perform the auto attach, they still get the Pro attached upon reboot.
+However, this applies only if they have v28 of the Pro client.
+
* For Jammy and Focal, this process works as expected.
* For Bionic and Xenial, this process doesn't work due to the older versions of the Pro client installed.
virtual-machines Configure Oracle Asm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-asm.md
Complete the following steps to set up Oracle ASM.
3. In the **Create Disk Group** dialog box: 1. Enter the disk group name **FRA**.
- 2. Under **Select Member Disks**, select **/dev/oracleasm/disks/VOL2**
- 3. Under **Allocation Unit Size**, select **4**.
- 4. Click **ok** to create the disk group.
- 5. Click **ok** to close the confirmation window.
+ 2. For the **Redundancy** option, select **External (None)**.
+ 3. Under **Select Member Disks**, select **/dev/oracleasm/disks/VOL2**.
+ 4. Under **Allocation Unit Size**, select **4**.
+ 5. Click **OK** to create the disk group.
+ 6. Click **OK** to close the confirmation window.
:::image type="content" source="./media/oracle-asm/asm-config-assistant-02.png" alt-text="Screenshot of the Create Disk Group dialog box.":::
virtual-machines Oracle Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-overview.md
You can also implement high availability and disaster recovery for Oracle Databa
We recommend placing the VMs in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. If you want to have geo-redundancy, set up the two databases to replicate between two different regions and connect the two instances with a VPN Gateway. To walk through the basic setup procedure on Azure, see Implement Oracle Data Guard on an Azure Linux virtual machine.
-With Oracle Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see Active Data Guard and GoldenGate. If you need read-write access to the copy of the database, you can use Oracle Active Data Guard.
+With Oracle Active Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see [Active Data Guard and GoldenGate](https://www.oracle.com/docs/tech/database/oow14-con7715-adg-gg-bestpractices.pdf). If you need read-write access to the copy of the database, you can use Oracle Active Data Guard.
+ To walk through the basic setup procedure on Azure, see [Implement Oracle Golden Gate on an Azure Linux VM](configure-oracle-golden-gate.md). In addition to having a high availability and disaster recovery solution architected in Azure, you should have a backup strategy in place to restore your database.
Different [backup strategies](oracle-database-backup-strategies.md) are availabl
- Using [Azure backup](oracle-database-backup-azure-backup.md) - Using [Oracle RMAN Streaming data](oracle-rman-streaming-backup.md) backup ## Deploy Oracle applications on Azure
-Use Terraform templates to set up Azure infrastructure and install Oracle applications. For more information, see [Terraform on Azure](/azure/developer/terraform).
+Use Terraform templates, the Azure CLI, or the Azure portal to set up Azure infrastructure and install Oracle applications. You can also use Ansible to configure the database inside the VM. For more information, see [Terraform on Azure](/azure/developer/terraform).
Oracle has certified the following applications to run in Azure when connecting to an Oracle database by using the Azure with Oracle Cloud interconnect solution: - E-Business Suite
You can deploy custom applications in Azure that connect with OCI and other Azur
According to Oracle Support, JD Edwards EnterpriseOne versions 9.2 and above are supported on any public cloud offering that meets their specific Minimum Technical Requirements (MTR). You need to create custom images that meet their MTR specifications for operating system and software application compatibility. For more information, see [Doc ID 2178595.1](https://support.oracle.com/knowledge/JD%20Edwards%20EnterpriseOne/2178595_1.html). ## Licensing Deployment of Oracle solutions in Azure is based on a bring-your-own-license model. This model assumes that you have licenses to use Oracle software and that you have a current support agreement in place with Oracle.
-Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
+Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. For more information, see [Oracle Processor Core Factor Table](https://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf). Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
Oracle databases generally require higher memory and I/O. For this reason, we recommend [Memory Optimized VMs](/azure/virtual-machines/sizes-memory) for these workloads. To optimize your workloads further, we recommend [Constrained Core vCPUs](/azure/virtual-machines/constrained-vcpu) for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count. When you migrate Oracle software and workloads from on-premises to Microsoft Azure, Oracle provides license mobility as stated in [Oracle and Microsoft Strategic Partnership FAQ](https://www.oracle.com/cloud/azure/interconnect/faq/). ## Next steps
virtual-network-manager Concept Security Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md
Here are some scenarios where security admin rules can be used:
| **Enforcing application-level security** | Security admin rules can be used to enforce application-level security by blocking traffic to or from specific applications or services. | With Azure Virtual Network Manager, you have a centralized location to manage security admin rules. Centralization allows you to define security policies at scale and apply them to multiple virtual networks at once.+
+> [!NOTE]
+> Currently, security admin rules do not apply to private endpoints that fall under the scope of a managed virtual network.
+ ## How do security admin rules work? Security admin rules allow or deny traffic on specific ports, protocols, and source/destination IP prefixes in a specified direction. When you define a security admin rule, you specify the following conditions:
Security admin rules allow or deny traffic on specific ports, protocols, and sou
- The protocol to be used To enforce security policies across multiple virtual networks, you [create and deploy a security admin configuration](how-to-block-network-traffic-portal.md). This configuration contains a set of rule collections, and each rule collection contains one or more security admin rules. Once created, you associate the rule collection with the network groups requiring security admin rules. The rules are then applied to all virtual networks contained in the network groups when the configuration is deployed. A single configuration provides a centralized and scalable enforcement of security policies across multiple virtual networks.+ ### Evaluation of security admin rules and network security groups (NSGs) Security admin rules and network security groups (NSGs) can be used to enforce network security policies in Azure. However, they have different scopes and priorities.
virtual-network-manager Concept Virtual Network Flow Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-virtual-network-flow-logs.md
+
+ Title: Monitoring security admin rules with Virtual Network Flow Logs
+description: This article covers using Network Watcher and Virtual Network Flow Logs to monitor traffic through security admin rules in Azure Virtual Network Manager.
++++ Last updated : 08/11/2023++
+# Monitoring Azure Virtual Network Manager with VNet flow logs (Preview)
+
+Monitoring traffic is critical to understanding how your network is performing and to troubleshooting issues. Administrators can use VNet flow logs (Preview) to see whether traffic is flowing through or blocked on a VNet by a security admin rule. VNet flow logs (Preview) are a feature of Network Watcher.
+
+Learn more about [VNet flow logs (Preview)](../network-watcher/vnet-flow-logs-overview.md), including usage and how to enable them.
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+> [!IMPORTANT]
+> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
+>
+> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Enable VNet flow logs (Preview)
+
+Currently, you need to enable VNet flow logs (Preview) on each VNet you want to monitor. You can enable VNet flow logs on a VNet by using [PowerShell](../network-watcher/vnet-flow-logs-powershell.md) or the [Azure CLI](../network-watcher/vnet-flow-logs-cli.md).
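+
+For example, here's a hedged Azure CLI sketch (resource names are placeholders, and the `--vnet` target parameter is assumed from the preview CLI documentation linked above):
+
+```bash
+# Enable VNet flow logs (Preview) on one VNet; logs are written to a storage account.
+az network watcher flow-log create \
+  --location eastus \
+  --resource-group MyResourceGroup \
+  --name MyVNetFlowLog \
+  --vnet MyVNet \
+  --storage-account MyStorageAccount
+```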
+
+Here's an example of a flow log:
+
+```json
+{
+ "records": [
+ {
+ "time": "2022-09-14T09:00:52.5625085Z",
+ "flowLogVersion": 4,
+ "flowLogGUID": "a1b2c3d4-e5f6-g7h8-i9j0-k1l2m3n4o5p6",
+ "macAddress": "00224871C205",
+ "category": "FlowLogFlowEvent",
+ "flowLogResourceID": "/SUBSCRIPTIONS/1a2b3c4d-5e6f-7g8h-9i0j-1k2l3m4n5o6p7/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS2EUAP/FLOWLOGS/VNETFLOWLOG",
+ "targetResourceID": "/subscriptions/1a2b3c4d-5e6f-7g8h-9i0j-1k2l3m4n5o6p7/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet01",
+ "operationName": "FlowLogFlowEvent",
+ "flowRecords": {
+ "flows": [
+ {
+ "aclID": "9a8b7c6d-5e4f-3g2h-1i0j-9k8l7m6n5o4p3",
+ "flowGroups": [
+ {
+ "rule": "DefaultRule_AllowInternetOutBound",
+ "flowTuples": [
+ "1663146003599,10.0.0.6,52.239.184.180,23956,443,6,O,B,NX,0,0,0,0",
+ "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580",
+ "1663146003637,10.0.0.6,40.74.146.17,22730,443,6,O,B,NX,0,0,0,0",
+ "1663146003640,10.0.0.6,40.74.146.17,22730,443,6,O,E,NX,3,705,4,4569",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,B,NX,0,0,0,0",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,E,NX,3,705,4,4569",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,B,NX,0,0,0,0",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,E,NX,2,134,1,108",
+ "1663146017343,10.0.0.6,104.16.218.84,36776,443,6,O,B,NX,0,0,0,0",
+ "1663146022793,10.0.0.6,104.16.218.84,36776,443,6,O,E,NX,22,2217,33,32466"
+ ]
+ }
+ ]
+ },
+ {
+ "aclID": "b1c2d3e4-f5g6-h7i8-j9k0-l1m2n3o4p5q6",
+ "flowGroups": [
+ {
+ "rule": "BlockHighRiskTCPPortsFromInternet",
+ "flowTuples": [
+ "1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0",
+ "1663146005503,192.241.200.164,10.0.0.6,35276,119,6,I,D,NX,0,0,0,0"
+ ]
+ },
+ {
+ "rule": "Internet",
+ "flowTuples": [
+ "1663145989563,20.106.221.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0",
+ "1663145989679,20.55.117.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0",
+ "1663145989709,20.55.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0",
+ "1663145990049,13.65.224.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0",
+ "1663145990145,20.55.117.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0",
+ "1663145990175,20.55.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0",
+ "1663146015545,20.106.221.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
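+
+Each `flowTuples` entry above is a comma-separated record: timestamp, source IP, destination IP, source port, destination port, protocol, direction (`I`/`O`), flow state, encryption state, and packet/byte counters. As a hedged sketch, assuming the sample above is saved as `vnetflowlog.json` and `jq` is installed, you can flatten the tuples for quick inspection:
+
+```bash
+# Print rule, source IP, destination IP, destination port, direction, and
+# flow state (D indicates a denied flow) as tab-separated rows.
+jq -r '
+  .records[]
+  | .flowRecords.flows[]
+  | .flowGroups[]
+  | .rule as $rule
+  | .flowTuples[]
+  | split(",") as $t
+  | [$rule, $t[1], $t[2], $t[4], $t[6], $t[7]]
+  | @tsv' vnetflowlog.json
+```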
++
+## Next steps
+> [!div class="nextstepaction"]
+> Learn more about [VNet flow logs](../network-watcher/vnet-flow-logs-overview.md) and how to use them.
+> Learn more about [Event log options for Azure Virtual Network Manager](concept-event-logs.md).
virtual-network-manager Create Virtual Network Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-template.md
Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template'
-description: In this article, you create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template, ARM template.
+ Title: 'Quickstart: Deploy a network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template'
+description: In this article, you deploy various network topologies with Azure Virtual Network Manager using an Azure Resource Manager template (ARM template).
-# Quickstart: Create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template -ARM template
+# Quickstart: Deploy a network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template
Get started with Azure Virtual Network Manager by using Azure Resource Manager templates to manage connectivity for all your virtual networks.
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
Yes,
In Azure, VNet peering and connected groups are two methods of establishing connectivity between virtual networks (VNets). While VNet peering works by creating a 1:1 mapping between each peered VNet, connected groups use a new construct that establishes connectivity without such a mapping. In a connected group, all virtual networks are connected without individual peering relationships. For example, if VNetA, VNetB, and VNetC are part of the same connected group, connectivity is enabled between each VNet without the need for individual peering relationships.
+### Do security admin rules apply to Azure Private Endpoints?
+
+Currently, security admin rules don't apply to Azure Private Endpoints that fall under the scope of a virtual network managed by Azure Virtual Network Manager.
### How can I explicitly allow Azure SQL Managed Instance traffic before having deny rules? Azure SQL Managed Instance has some network requirements. If your security admin rules might block these network requirements, you can use the following sample rules to allow SQL Managed Instance traffic with a higher priority than the deny rules that would otherwise block it.
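
As a hedged sketch of such an allow rule in the Azure CLI (the subcommand and parameter names are assumptions to verify with `--help`; resource names are placeholders; ports 1433 and 11000-11999 cover default and redirect connectivity):

```bash
# Allow inbound SQL Managed Instance traffic at a higher priority (lower
# number) than the deny rules in the same configuration.
az network manager security-admin-config rule-collection rule create \
  --network-manager-name MyNetworkManager \
  --resource-group MyRG \
  --configuration-name MyConfig \
  --rule-collection-name MyRuleCollection \
  --rule-name AllowSqlMiInbound \
  --kind Custom \
  --access Allow \
  --direction Inbound \
  --protocol Tcp \
  --priority 10 \
  --dest-port-ranges 1433 11000-11999
```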
virtual-network Accelerated Networking Mana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-mana-overview.md
Several [Azure Marketplace](https://learn.microsoft.com/marketplace/azure-market
We recommend using an operating system with support for MANA to maximize performance. In instances where the operating system doesn't or can't support MANA, network connectivity is provided through the hypervisor's virtual switch. The virtual switch is also used during some infrastructure servicing events where the Virtual Function (VF) is revoked. ### Using DPDK
-Utilizing DPDK on MANA hardware requires the Linux kernel 6.2 or later or a backport of the Ethernet and InfiniBand drivers from the latest Linux kernel. It also requires specific versions of DPDK and user-space drivers.
-
-DPDK requires the following set of drivers:
-1. [Linux kernel Ethernet driver](https://github.com/torvalds/linux/tree/master/drivers/net/ethernet/microsoft/mana) (5.15 kernel and later)
-1. [Linux kernel InfiniBand driver](https://github.com/torvalds/linux/tree/master/drivers/infiniband/hw/mana) (6.2 kernel and later)
-1. [DPDK MANA poll-mode driver](https://github.com/DPDK/dpdk/tree/main/drivers/net/mana) (DPDK 22.11 and later)
-1. [Libmana user-space drivers](https://github.com/linux-rdma/rdma-core/tree/master/providers/mana) (rdma-core v44 and later)
-
-DPDK only functions on Linux VMs.
+For information about DPDK on MANA hardware, see [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md).
## Evaluating performance Differences in VM SKUs, operating systems, applications, and tuning parameters can all affect network performance on Azure. For this reason, we recommend that you benchmark and test your workloads to ensure you achieve the expected network performance.
virtual-network Setup Dpdk Mana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk-mana.md
+
+ Title: Microsoft Azure Network Adapter (MANA) and DPDK on Linux
+description: Learn about MANA and DPDK for Linux Azure VMs.
+++ Last updated : 07/10/2023+++
+# Microsoft Azure Network Adapter (MANA) and DPDK on Linux
+
+The Microsoft Azure Network Adapter (MANA) is new hardware for Azure virtual machines that enables higher throughput and reliability.
+To make use of MANA, users must modify their DPDK initialization routines. MANA requires two changes compared to legacy hardware:
+- [MANA EAL arguments](#mana-dpdk-eal-arguments) for the poll-mode driver (PMD) differ from previous hardware.
+- The Linux kernel must release control of the MANA network interfaces before DPDK initialization begins.
+
+The setup procedure for MANA DPDK is outlined in the [example code](#example-testpmd-setup-and-netvsc-test) later in this article.
+
+## Introduction
+
+Legacy Azure Linux VMs rely on the mlx4 or mlx5 drivers and the accompanying hardware for accelerated networking. Azure DPDK users would select specific interfaces to include or exclude by passing bus addresses to the DPDK EAL. The setup procedure for MANA DPDK differs slightly, since the assumption of one bus address per Accelerated Networking interface no longer holds true. Rather than using a PCI bus address, the MANA PMD uses the MAC address to determine which interface it should bind to.
+
+## MANA DPDK EAL Arguments
+The MANA PMD probes all devices and ports on the system when no `--vdev` argument is present; the `--vdev` argument is not mandatory. In testing environments it's often desirable to leave one (primary) interface available for servicing the SSH connection to the VM. To use DPDK with a subset of the available VFs, users should pass both the bus address of the MANA device and the MAC address of the interfaces in the `--vdev` argument. For more detail, example code is available to demonstrate [DPDK EAL initialization on MANA](#example-testpmd-setup-and-netvsc-test).
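+
+A quick illustration of the resulting argument shape (the bus address and MAC address here are the hypothetical values used later in this article; discover your own as shown in the [testpmd example](#example-testpmd-setup-and-netvsc-test)):
+
+```bash
+# Bind the PMD to one MANA VF by bus address plus MAC address.
+dpdk-testpmd -l 1-3 --vdev="7870:00:00.0,mac=f0:0d:3a:ec:b4:0a" -- --forward-mode=rxonly --auto-start
+```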
+
+For general information about the DPDK Environment Abstraction Layer (EAL):
+- [DPDK EAL Arguments for Linux](https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html#eal-in-a-linux-userland-execution-environment)
+- [DPDK EAL Overview](https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html)
+
+## DPDK requirements for MANA
+
+Using DPDK on MANA hardware requires Linux kernel 6.2 or later, or a backport of the Ethernet and InfiniBand drivers from the latest Linux kernel. It also requires specific versions of DPDK and user-space drivers.
+
+MANA DPDK requires the following set of drivers:
+1. [Linux kernel Ethernet driver](https://github.com/torvalds/linux/tree/master/drivers/net/ethernet/microsoft/mana) (5.15 kernel and later)
+1. [Linux kernel InfiniBand driver](https://github.com/torvalds/linux/tree/master/drivers/infiniband/hw/mana) (6.2 kernel and later)
+1. [DPDK MANA poll-mode driver](https://github.com/DPDK/dpdk/tree/main/drivers/net/mana) (DPDK 22.11 and later)
+1. [Libmana user-space drivers](https://github.com/linux-rdma/rdma-core/tree/master/providers/mana) (rdma-core v44 and later)
+
+>[!NOTE]
+>MANA DPDK is not available for Windows; it will only work on Linux VMs.
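+
+A hedged sketch for checking these minimums on Debian/Ubuntu systems (use `rpm -q` on RHEL-family systems):
+
+```bash
+# Kernel: 6.2 or later is needed for the InfiniBand driver.
+uname -r
+# rdma-core: v44 or later provides the libmana user-space drivers.
+dpkg-query -W rdma-core 2>/dev/null
+# DPDK: 22.11 or later includes the MANA poll-mode driver (if DPDK is installed).
+pkg-config --modversion libdpdk 2>/dev/null
+```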
+
+## Example: Check for MANA
+
+>[!NOTE]
+>This article assumes the pciutils package containing the lspci command is installed on the system.
+
+```bash
+# check for pci devices with ID:
+# vendor: Microsoft Corporation (1414)
+# class: Ethernet Controller (0200)
+# device: Microsoft Azure Network Adapter VF (00ba)
+if [[ -n `lspci -d 1414:00ba:0200` ]]; then
+ echo "MANA device is available."
+else
+ echo "MANA was not detected."
+fi
+
+```
+
+## Example: DPDK installation (Ubuntu 22.04)
+
+>[!NOTE]
+>This article assumes compatible kernel and rdma-core are installed on the system.
+
+```bash
+DEBIAN_FRONTEND=noninteractive sudo apt-get install -q -y build-essential libudev-dev libnl-3-dev libnl-route-3-dev ninja-build libssl-dev libelf-dev python3-pip meson libnuma-dev
+
+pip3 install pyelftools
+
+# Try latest LTS DPDK, example uses DPDK tag v23.07-rc3
+git clone https://github.com/DPDK/dpdk.git -b v23.07-rc3 --depth 1
+pushd dpdk
+meson build
+cd build
+ninja
+sudo ninja install
+popd
+```
+
+## Example: Testpmd setup and netvsc test
+
+Note the following example code for running DPDK with MANA. The direct-to-vf 'netvsc' configuration on Azure is recommended for maximum performance with MANA.
+
+>[!NOTE]
+>DPDK requires either 2MB or 1GB hugepages to be enabled.
+
+```bash
+# Enable 2MB hugepages.
+echo 1024 | tee /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
+
+# Assuming use of eth1 for DPDK in this demo
+PRIMARY="eth1"
+
+# $ ip -br link show master eth1
+# > enP30832p0s0 UP f0:0d:3a:ec:b4:0a <... # truncated
+# grab interface name for device bound to primary
+SECONDARY="`ip -br link show master $PRIMARY | awk '{ print $1 }'`"
+# Get mac address for MANA interface (should match primary)
+MANA_MAC="`ip -br link show master $PRIMARY | awk '{ print $3 }'`"
++
+# $ ethtool -i enP30832p0s0 | grep bus-info
+# > bus-info: 7870:00:00.0
+# get MANA device bus info to pass to DPDK
+BUS_INFO="`ethtool -i $SECONDARY | grep bus-info | awk '{ print $2 }'`"
+
+# Set MANA interfaces DOWN before starting DPDK
+ip link set $PRIMARY down
+ip link set $SECONDARY down
++
+## Move synthetic channel to user mode and allow it to be used by NETVSC PMD in DPDK
+DEV_UUID=$(basename $(readlink /sys/class/net/$PRIMARY/device))
+NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"
+modprobe uio_hv_generic
+echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
+echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
+echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind
+
+# MANA single queue test
+dpdk-testpmd -l 1-3 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --txd=128 --rxd=128 --stats 2
+
+# MANA multiple queue test (example assumes > 9 cores)
+dpdk-testpmd -l 1-9 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --nb-cores=8 --txd=128 --rxd=128 --txq=8 --rxq=8 --stats 2
+
+```
+
+## Troubleshooting
+
+### Failure to set interface down
+Failure to set the MANA-bound device to DOWN can result in low or zero packet throughput.
+Failure to release the device can also result in EAL errors related to transmit queues, such as:
+```
+mana_start_tx_queues(): Failed to create qp queue index 0
+mana_dev_start(): failed to start tx queues -19
+```
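+
+Before launching DPDK, you can confirm both interfaces report DOWN (the interface names below match the earlier example and will differ on your VM):
+
+```bash
+# Both the synthetic interface and the MANA VF should show DOWN.
+ip -br link show eth1
+ip -br link show enP30832p0s0
+```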
+
+### Failure to enable huge pages
+
+Try enabling huge pages and ensure the information is visible in `/proc/meminfo`. Otherwise, you might see errors like:
+```
+EAL: No free 2048 kB hugepages reported on node 0
+EAL: FATAL: Cannot get hugepage information.
+EAL: Cannot get hugepage information.
+EAL: Error - exiting with code: 1
+Cause: Cannot init EAL: Permission denied
+```
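+
+A minimal sketch for allocating 2MB hugepages and verifying them (run as root or with sudo):
+
+```bash
+# Allocate 1024 x 2MB hugepages on every NUMA node.
+echo 1024 | sudo tee /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
+
+# Verify nonzero totals in meminfo.
+grep -i hugepages_ /proc/meminfo
+
+# Make sure a hugetlbfs mount point exists for DPDK.
+sudo mkdir -p /dev/hugepages
+mountpoint -q /dev/hugepages || sudo mount -t hugetlbfs nodev /dev/hugepages
+```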
+
+### Low throughput with use of --vdev="net_vdev_netvsc0,iface=eth1"
+
+Failover configuration of either the `net_failsafe` or `net_vdev_netvsc` poll-mode-drivers isn't recommended for high performance on Azure. The netvsc configuration with DPDK version 20.11 or higher may give better results. For optimal performance, ensure your Linux kernel, rdma-core, and DPDK packages meet the listed requirements for DPDK and MANA.
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
DPDK consists of sets of user-space libraries that provide access to lower-level
DPDK can run on Azure virtual machines that are supporting multiple operating system distributions. DPDK provides key performance differentiation in driving network function virtualization implementations. These implementations can take the form of network virtual appliances (NVAs), such as virtual routers, firewalls, VPNs, load balancers, evolved packet cores, and denial-of-service (DDoS) applications.
+Setup instructions for DPDK on MANA VMs are available here: [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md).
+ ## Benefit **Higher packets per second (PPS)**: Bypassing the kernel and taking control of packets in the user space reduces the cycle count by eliminating context switches. It also improves the rate of packets that are processed per second in Azure Linux virtual machines.
The following distributions from the Azure Marketplace are supported:
The noted versions are the minimum requirements. Newer versions are supported too.
+Requirements for DPDK on MANA VMs are listed here: [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md).
+ **Custom kernel support** For any Linux kernel version that's not listed, see [Patches for building an Azure-tuned Linux kernel](https://github.com/microsoft/azure-linux-kernel). For more information, you can also contact [aznetdpdk@microsoft.com](mailto:aznetdpdk@microsoft.com).
In addition, DPDK uses RDMA verbs to create data queues on the Network Adapter.
## Install DPDK manually (recommended)
+DPDK installation instructions for MANA VMs are available here: [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md).
+ ### Install build dependencies # [RHEL, CentOS](#tab/redhat)
virtual-wan Virtual Wan Expressroute About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-about.md
This article provides details on ExpressRoute connections in Azure Virtual WAN.
A virtual hub can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. Users using private connectivity in Virtual WAN can connect their ExpressRoute circuits to an ExpressRoute gateway in a Virtual WAN hub. For a tutorial on connecting an ExpressRoute circuit to an Azure Virtual WAN hub, see [How to Connect an ExpressRoute Circuit to Virtual WAN](virtual-wan-expressroute-portal.md). ## ExpressRoute circuit SKUs supported in Virtual WAN
-The following ExpressRoute circuit SKUs can be connected to the hub gateway: Local, Standard, and Premium. To learn more about different SKUs, visit [ExpressRoute Circuit SKUs](../expressroute/expressroute-faqs.md#what-is-the-connectivity-scope-for-different-expressroute-circuit-skus).
+The following ExpressRoute circuit SKUs can be connected to the hub gateway: Local, Standard, and Premium. To learn more about different SKUs, visit [ExpressRoute Circuit SKUs](../expressroute/expressroute-faqs.md#what-is-the-connectivity-scope-for-different-expressroute-circuit-skus). ExpressRoute Local circuits can only be connected to ExpressRoute gateways in the same region, but they can still access resources in spoke virtual networks located in other regions.
## ExpressRoute performance
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Virtual WAN comes in two flavors: Basic and Standard. In Basic Virtual WAN, hubs
### How are Availability Zones and resiliency handled in Virtual WAN?
-Virtual WAN is a collection of hubs and services made available inside the hub. The user can have as many Virtual WAN per their need. In a Virtual WAN hub, there are multiple services like VPN, ExpressRoute etc. Each of these services is automatically deployed across Availability Zones (except Azure Firewall), if the region supports Availability Zones. If a region becomes an Availability Zone after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, implying there is resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions.
+Virtual WAN is a collection of hubs and services made available inside the hub. Users can have as many Virtual WANs as they need. In a Virtual WAN hub, there are multiple services, such as VPN and ExpressRoute. Each of these services is automatically deployed across Availability Zones (except Azure Firewall), if the region supports Availability Zones. If a region gains Availability Zone support after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, implying there's resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions.
-Currently, Azure Firewall can be deployed to support Availability Zones using Azure Firewall Manager Portal, [PowerShell](/powershell/module/az.network/new-azfirewall#example-6--create-a-firewall-with-no-rules-and-with-availability-zones) or CLI. There is currently no way to configure an existing Firewall to be deployed across availability zones. You'll need to delete and redeploy your Azure Firewall.
+Currently, Azure Firewall can be deployed to support Availability Zones using Azure Firewall Manager Portal, [PowerShell](/powershell/module/az.network/new-azfirewall#example-6--create-a-firewall-with-no-rules-and-with-availability-zones) or CLI. There's currently no way to configure an existing Firewall to be deployed across availability zones. You'll need to delete and redeploy your Azure Firewall.
While the concept of Virtual WAN is global, the actual Virtual WAN resource is Resource Manager-based and deployed regionally. If the virtual WAN region itself were to have an issue, all hubs in that virtual WAN will continue to function as is, but the user won't be able to create new hubs until the virtual WAN region is available.
A Network Virtual Appliance (NVA) can be deployed inside a virtual hub. For step
No. The spoke VNet can't have a virtual network gateway if it's connected to the virtual hub.
+### Can a spoke VNet have an Azure Route Server?
+
+No. The spoke VNet can't have a Route Server if it's connected to the virtual WAN hub.
+ ### Is there support for BGP in VPN connectivity? Yes, BGP is supported. When you create a VPN site, you can provide the BGP parameters in it. This will imply that any connections created in Azure for that site will be enabled for BGP.
A simple configuration of one Virtual WAN with one hub and one vpnsite can be cr
### Can spoke VNets connected to a virtual hub communicate with each other (V2V Transit)?
-Yes. Standard Virtual WAN supports VNet-to-VNet transitive connectivity via the Virtual WAN hub that the VNets are connected to. In Virtual WAN terminology, we refer to these paths as "local Virtual WAN VNet transit" for VNets connected to a Virtual Wan hub within a single region, and "global Virtual WAN VNet transit" for VNets connected through multiple Virtual WAN hubs across two or more regions.
+Yes. Standard Virtual WAN supports VNet-to-VNet transitive connectivity via the Virtual WAN hub that the VNets are connected to. In Virtual WAN terminology, we refer to these paths as "local Virtual WAN VNet transit" for VNets connected to a Virtual WAN hub within a single region, and "global Virtual WAN VNet transit" for VNets connected through multiple Virtual WAN hubs across two or more regions.
In some scenarios, spoke VNets can also be directly peered with each other using [virtual network peering](../virtual-network/virtual-network-peering-overview.md) in addition to local or global Virtual WAN VNet transit. In this case, VNet Peering takes precedence over the transitive connection via the Virtual WAN hub.
If a virtual hub learns the same route from multiple remote hubs, the order in w
* **AS Path** 1. Prefer routes with the shortest BGP AS-Path length irrespective of the source of the route advertisements.
- Note: In vWANs with multiple remote virtual hubs, If there is a tie between remote routes and remote site-to-site VPN routes. Remote site-to-site VPN will be preferred.
+   Note: In vWANs with multiple remote virtual hubs, if there's a tie between remote routes and remote site-to-site VPN routes, the remote site-to-site VPN routes are preferred.
2. Prefer routes from local virtual hub connections over routes learned from remote virtual hub. 3. If there are routes from both ExpressRoute and Site-to-site VPN connections:
Transit between ER-to-ER is always via Global reach. Virtual hub gateways are de
### Is there a concept of weight in Azure Virtual WAN ExpressRoute circuits or VPN connections
-When multiple ExpressRoute circuits are connected to a virtual hub, routing weight on the connection provides a mechanism for the ExpressRoute in the virtual hub to prefer one circuit over the other. There is no mechanism to set a weight on a VPN connection. Azure always prefers an ExpressRoute connection over a VPN connection within a single hub.
+When multiple ExpressRoute circuits are connected to a virtual hub, routing weight on the connection provides a mechanism for the ExpressRoute in the virtual hub to prefer one circuit over the other. There's no mechanism to set a weight on a VPN connection. Azure always prefers an ExpressRoute connection over a VPN connection within a single hub.
### Does Virtual WAN prefer ExpressRoute over VPN for traffic egressing Azure
The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub
### When there's an ExpressRoute circuit connected as a bow-tie to a Virtual WAN hub and a non Virtual WAN VNet, what is the path for the non Virtual WAN VNet to reach the Virtual WAN hub?
-The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It is recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
+The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It's recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
### Can hubs be created in different resource groups in Virtual WAN?
Yes. For a list of Managed Service Provider (MSP) solutions enabled via Azure Ma
Both Azure Virtual WAN hub and Azure Route Server provide Border Gateway Protocol (BGP) peering capabilities that can be utilized by NVAs (Network Virtual Appliance) to advertise IP addresses from the NVA to the user's Azure virtual networks. The deployment options differ in the sense that Azure Route Server is typically deployed by a self-managed customer hub VNet whereas Azure Virtual WAN provides a zero-touch fully meshed hub service to which customers connect their various spokes end points (Azure VNet, on-premises branches with site-to-site VPN or SDWAN, remote users with point-to-site/Remote User VPN and Private connections with ExpressRoute) and enjoy BGP Peering for NVAs deployed in spoke VNet along with other vWAN capabilities such as transit connectivity for VNet-to-VNet, transit connectivity between VPN and ExpressRoute, custom/advanced routing, custom route association and propagation, routing intent/policies for no hassle inter-region security, Secure Hub/Azure firewall etc. For more details about Virtual WAN BGP Peering, please see [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).
-### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure Portal?
+### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure portal?
-When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure Portal.
+When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure portal.
For more information regarding the available options third-party security providers and how to set this up, see [Deploy a security partner provider](../firewall-manager/deploy-trusted-security-partner.md).
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
### <a name="why-am-i-seeing-a-message-and-button-called-update-router-to-latest-software-version-in-portal."></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
-Azure-wide Cloud Services-based infrastructure is deprecating. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via Azure Portal. If the button is not visible, please open a support case.
+Azure-wide Cloud Services-based infrastructure is being deprecated. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal. If the button isn't visible, please open a support case.
-YouΓÇÖll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks are not deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, youΓÇÖll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says ΓÇ£LatestΓÇ¥, then the hub is done updating. There will be no routing behavior changes after this update.
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks aren't deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
There are several limitations with the virtual hub router upgrade:
-* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
-
-* If your Virtual WAN hub is connected to a combination of spoke virtual networks in the same region as the hub and a separate region than the hub, then you may experience a lack of connectivity to these respective spoke virtual networks. To resolve this and restore connectivity to these virtual networks, you can modify any of the virtual network connection properties (For example, you can modify the connection to propagate to a dummy label). We are actively working on removing this requirement.
+* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you'll have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you'll also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
-* Your Virtual WAN hub router can not currently be upgraded if you have a network virtual appliance in the virtual hub. We are actively working on removing this limitation.
+* Your Virtual WAN hub router can't currently be upgraded if you have a network virtual appliance in the virtual hub. We're actively working on removing this limitation.
* If your Virtual WAN hub is connected to more than 100 spoke virtual networks, then the upgrade may fail.
-If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup.
+If the update fails for any reason, your hub will be auto recovered to the old version to ensure there's still a working setup.
Additional things to note: * The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource and subscription, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version.
-* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you will need to update your virtual network connection after the virtual hub upgrade (Ex: you can configure the virtual network connection to propagate to a dummy label).
+* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you'll need to update your virtual network connection after the virtual hub upgrade (Ex: you can configure the virtual network connection to propagate to a dummy label).
### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway?
virtual-wan Virtual Wan Point To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-portal.md
Previously updated : 09/15/2022 Last updated : 08/09/2023
The instructions you follow depend on the authentication method you want to use.
[!INCLUDE [Point to site page](../../includes/virtual-wan-p2s-gateway-include.md)] + ## <a name="download"></a>Generate client configuration files When you connect to VNet using User VPN (P2S), you can use the VPN client that is natively installed on the operating system from which you're connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. In this section, you generate and download the files used to configure your VPN clients.
vpn-gateway Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md
description: Learn how to set up an Azure AD tenant for P2S OpenVPN authenticati
Previously updated : 10/25/2022 Last updated : 08/18/2023
Assign the users to your applications.
1. Go to your Azure Active Directory and select **Enterprise applications**. 1. From the list, locate the application you just registered and click to open it.
-1. Click **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**, then **Save**.
+1. Click **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**.
+1. For **Assignment required**, change the value to **Yes**. For more information about this setting, see [Application properties](../active-directory/manage-apps/application-properties.md#enabled-for-users-to-sign-in).
+1. If you've made changes, click **Save** to save your settings.
1. In the left pane, click **Users and groups**. On the **Users and groups** page, click **+ Add user/group** to open the **Add Assignment** page. 1. Click the link under **Users and groups** to open the **Users and groups** page. Select the users and groups that you want to assign, then click **Select**. 1. After you finish selecting users and groups, click **Assign**.
In this step, you configure P2S Azure AD authentication for the virtual network
1. Go to the virtual network gateway. In the left pane, click **Point-to-site configuration**.
- :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png" alt-text="Screenshot showing point-to-site configuration page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/client-id.png":::
+ :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png" alt-text="Screenshot showing point-to-site configuration page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png":::
Configure the following values:
In this step, you configure P2S Azure AD authentication for the virtual network
For **Azure Active Directory** values, use the following guidelines for **Tenant**, **Audience**, and **Issuer** values. * **Tenant**: `https://login.microsoftonline.com/{TenantID}`
- * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for "Azure VPN" Azure AD Enterprise App - use application ID that you created and registered. If you use the application ID for the ""Azure VPN" Azure AD Enterprise App instead, this will grant all users access to the VPN gateway (which would be the default way to set up access), instead of granting only the users that you assigned to the application that you created and registered.
+ * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for "Azure VPN" Azure AD Enterprise App - use application ID that you created and registered. If you use the application ID for the "Azure VPN" Azure AD Enterprise App instead, this will grant all users access to the VPN gateway (which would be the default way to set up access), instead of granting only the users that you assigned to the application that you created and registered.
* **Issuer**: `https://sts.windows.net/{TenantID}` For the Issuer value, make sure to include a trailing **/** at the end. 1. Once you finish configuring settings, click **Save** at the top of the page.
In this section, you generate and download the Azure VPN Client profile configur
## Next steps
-* * To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
+* To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).--
vpn-gateway Vpn Gateway Classic Resource Manager Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md
Previously updated : 06/09/2023 Last updated : 08/21/2023
vpn-gateway Vpn Gateway Delete Vnet Gateway Classic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-classic-powershell.md
Previously updated : 06/09/2023 Last updated : 08/21/2023 # Delete a virtual network gateway using PowerShell (classic) This article helps you delete a VPN gateway in the classic (legacy) deployment model by using PowerShell. After the virtual network gateway has been deleted, modify the network configuration file to remove elements that you're no longer using.
-The steps in this article apply to the classic deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-delete-vnet-gateway-powershell.md).
+The steps in this article apply to the classic deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-delete-vnet-gateway-powershell.md)**.
## <a name="connect"></a>Step 1: Connect to Azure
vpn-gateway Vpn Gateway Delete Vnet Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-powershell.md
description: Learn how to delete a virtual network gateway using PowerShell. Previously updated : 04/29/2021 Last updated : 08/21/2023
Get the IP configurations of the virtual network gateway.
$GWIpConfigs = $Gateway.IpConfigurations ```
-Get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you will see two Public IP addresses.
+Get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you'll see two Public IP addresses.
```powershell $PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id}
Because this is a VNet-to-VNet configuration, you need the list of connections i
$ConnsL=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id} ```
-In this example, we are checking for connections from RG2. Run this for each resource group that you have which may have a connection to the virtual network gateway.
+In this example, we're checking for connections from RG2. Run this for each resource group that you have which may have a connection to the virtual network gateway.
```powershell $ConnsR=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "<NameOfResourceGroup2>" | where-object {$_.VirtualNetworkGateway2.Id -eq $GW.Id}
Get the IP configurations of the virtual network gateway.
$GWIpConfigs = $Gateway.IpConfigurations ```
-Get the list of Public IP addresses used for this virtual network gateway. If the virtual network gateway was active-active, you will see two Public IP addresses.
+Get the list of Public IP addresses used for this virtual network gateway. If the virtual network gateway was active-active, you'll see two Public IP addresses.
```powershell $PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id}
Set-AzVirtualNetwork -VirtualNetwork $GWSub
## <a name="delete"></a>Delete a VPN gateway by deleting the resource group
-If you are not concerned about keeping any of your resources in the resource group and you just want to start over, you can delete an entire resource group. This is a quick way to remove everything. The following steps apply only to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
+If you aren't concerned about keeping any of your resources in the resource group and you just want to start over, you can delete an entire resource group. This is a quick way to remove everything. The following steps apply only to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md).
### 1. Get a list of all the resource groups in your subscription.
Find-AzResource -ResourceGroupNameContains RG1
### 3. Verify the resources in the list.
-When the list is returned, review it to verify that you want to delete all the resources in the resource group, as well as the resource group itself. If you want to keep some of the resources in the resource group, use the steps in the earlier sections of this article to delete your gateway.
+When the list is returned, review it to verify that you want to delete all the resources in the resource group, and the resource group itself. If you want to keep some of the resources in the resource group, use the steps in the earlier sections of this article to delete your gateway.
### 4. Delete the resource group and resources.
vpn-gateway Vpn Gateway Howto Point To Site Classic Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md
description: Learn how to create a classic Point-to-Site VPN Gateway connection
Previously updated : 06/09/2023 Last updated : 08/21/2023 # Configure a Point-to-Site connection by using certificate authentication (classic)
-This article shows you how to create a VNet with a Point-to-Site connection using the classic (legacy) deployment model. This configuration uses certificates to authenticate the connecting client, either self-signed or CA issued. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
+This article shows you how to create a VNet with a Point-to-Site connection using the classic (legacy) deployment model. This configuration uses certificates to authenticate the connecting client, either self-signed or CA issued. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-point-to-site-resource-manager-portal.md)**.
You use a Point-to-Site (P2S) VPN gateway to create a secure connection to your virtual network from an individual client computer. Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location. When you have only a few clients that need to connect to a VNet, a P2S VPN is a useful solution to use instead of a Site-to-Site VPN. A P2S VPN connection is established by starting it from the client computer.
vpn-gateway Vpn Gateway Howto Site To Site Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-classic-portal.md
Previously updated : 06/09/2023 Last updated : 08/21/2023 # Create a Site-to-Site connection using the Azure portal (classic)
-This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](./tutorial-site-to-site-portal.md).
+This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](./tutorial-site-to-site-portal.md)**.
A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md).
vpn-gateway Vpn Gateway Howto Vnet Vnet Portal Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-portal-classic.md
Previously updated : 06/09/2023 Last updated : 08/21/2023 # Configure a VNet-to-VNet connection (classic) This article helps you create a VPN gateway connection between virtual networks. The virtual networks can be in the same or different regions, and from the same or different subscriptions.
-The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
+The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).**
:::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-portal-classic/classic-diagram.png" alt-text="Diagram showing classic VNet-to-VNet architecture.":::
vpn-gateway Vpn Gateway Multi Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-multi-site.md
Previously updated : 06/09/2023 Last updated : 08/21/2023 # Add a Site-to-Site connection to a VNet with an existing VPN gateway connection (classic) This article walks you through using PowerShell to add Site-to-Site (S2S) connections to a VPN gateway that has an existing connection using the classic (legacy) deployment model. This type of connection is sometimes referred to as a "multi-site" configuration. These steps don't apply to ExpressRoute/Site-to-Site coexisting connection configurations.
-The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md).
+The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md)**.
[!INCLUDE [deployment models](../../includes/vpn-gateway-classic-deployment-model-include.md)]
vpn-gateway Vpn Gateway Peering Gateway Transit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-peering-gateway-transit.md
Previously updated : 11/09/2022 Last updated : 08/18/2023
This article helps you configure gateway transit for virtual network peering. [V
:::image type="content" source="./media/vpn-gateway-peering-gateway-transit/gatewaytransit.png" alt-text="Diagram of Gateway transit." lightbox="./media/vpn-gateway-peering-gateway-transit/gatewaytransit.png":::
-In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks. The transit option is available for peering between the same, or different deployment models. If you're configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the classic deployment model.
+In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks.
+
+The transit option is available for peering between the same, or different deployment models. If you're configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the legacy classic deployment model.
>
-In hub-and-spoke network architecture, gateway transit allows spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network. Routes to the gateway-connected virtual networks or on-premises networks will propagate to the routing tables for the peered virtual networks using gateway transit. You can disable the automatic route propagation from the VPN gateway. Create a routing table with the "**Disable BGP route propagation**" option, and associate the routing table to the subnets to prevent the route distribution to those subnets. For more information, see [Virtual network routing table](../virtual-network/manage-route-table.md).
+In hub-and-spoke network architecture, gateway transit allows spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network. Routes to the gateway-connected virtual networks or on-premises networks propagate to the routing tables for the peered virtual networks using gateway transit.
+
+You can disable the automatic route propagation from the VPN gateway. Create a routing table with the "**Disable BGP route propagation**" option, and associate the routing table to the subnets to prevent the route distribution to those subnets. For more information, see [Virtual network routing table](../virtual-network/manage-route-table.md).
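+
+A hedged Azure CLI sketch of that configuration (the resource and subnet names are placeholders):
+
+```bash
+# Create a route table that ignores routes propagated from the gateway.
+az network route-table create \
+  --name NoGatewayRoutes \
+  --resource-group MyRG \
+  --disable-bgp-route-propagation true
+
+# Associate the route table with a spoke subnet.
+az network vnet subnet update \
+  --name Subnet1 \
+  --vnet-name Spoke-RM \
+  --resource-group MyRG \
+  --route-table NoGatewayRoutes
+```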
-There are two scenarios in this article:
+There are two scenarios in this article. Select the scenario that applies to your environment. Most people use the **Same deployment model** scenario. If you aren't working with a classic deployment model VNet (legacy VNet) that already exists in your environment, you won't need to work with the **Different deployment models** scenario.
* **Same deployment model**: Both virtual networks are created in the Resource Manager deployment model.
-* **Different deployment models**: The spoke virtual network is created in the classic deployment model, and the hub virtual network and gateway are in the Resource Manager deployment model.
+* **Different deployment models**: The spoke virtual network is created in the classic deployment model, and the hub virtual network and gateway are in the Resource Manager deployment model. This scenario is useful when you need to connect a legacy VNet that already exists in the classic deployment model.
>[!NOTE] > If you make a change to the topology of your network and have Windows VPN clients, the VPN client package for Windows clients must be downloaded and installed again in order for the changes to be applied to the client.
There are two scenarios in this article:
## Prerequisites
-Before you begin, verify that you have the following virtual networks and permissions:
+This article requires the following VNets and permissions. If you aren't working with the different deployment model scenario, you don't need to create the classic VNet.
### <a name="vnet"></a>Virtual networks
-| VNet | Deployment model | Virtual network gateway |
-||--||
+| VNet | Configuration steps | Virtual network gateway |
+|---|---|---|
| Hub-RM | [Resource Manager](./tutorial-site-to-site-portal.md) | [Yes](tutorial-create-gateway-portal.md) |
| Spoke-RM | [Resource Manager](./tutorial-site-to-site-portal.md) | No |
| Spoke-Classic | [Classic](vpn-gateway-howto-site-to-site-classic-portal.md#CreatVNet) | No |
Learn more about [built-in roles](../role-based-access-control/built-in-roles.md
## <a name="same"></a>Same deployment model
-In this scenario, the virtual networks are both in the Resource Manager deployment model. Use the following steps to create or update the virtual network peerings to enable gateway transit.
+This is the more common scenario. In this scenario, the virtual networks are both in the Resource Manager deployment model. Use the following steps to create or update the virtual network peerings to enable gateway transit.
### To add a peering and enable transit
-1. In the [Azure portal](https://portal.azure.com), create or update the virtual network peering from the Hub-RM. Navigate to the **Hub-RM** virtual network. Select **Peerings**, then **+ Add** to open **Add peering**.
+1. In the [Azure portal](https://portal.azure.com), create or update the virtual network peering from the Hub-RM. Go to the **Hub-RM** virtual network. Select **Peerings**, then **+ Add** to open **Add peering**.
1. On the **Add peering** page, configure the values for **This virtual network**.
 * Peering link name: Name the link. Example: **HubRMToSpokeRM**
 * Traffic to remote virtual network: **Allow**
 * Traffic forwarded from remote virtual network: **Allow**
- * Virtual network gateway: **Use this virtual network's gateway**
+ * Virtual network gateway: **Use this virtual network's gateway or Route Server**
:::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-vnet.png" alt-text="Screenshot shows add peering." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-vnet.png"::: 1. On the same page, continue on to configure the values for the **Remote virtual network**. * Peering link name: Name the link. Example: **SpokeRMtoHubRM**
- * Deployment model: **Resource Manager**
+ * Virtual network deployment model: **Resource Manager**
+ * I know my resource ID: Leave this cleared. You only need to select it if you don't have read access to the virtual network or subscription you want to peer with.
+ * Subscription: Select the subscription.
 * Virtual Network: **Spoke-RM**
 * Traffic to remote virtual network: **Allow**
 * Traffic forwarded from remote virtual network: **Allow**
- * Virtual network gateway: **Use the remote virtual network's gateway**
+ * Virtual network gateway: **Use the remote virtual network's gateway or Route Server**
:::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-remote.png" alt-text="Screenshot shows values for remote virtual network." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-remote.png":::
In this scenario, the virtual networks are both in the Resource Manager deployme
### To modify an existing peering for transit
-If the peering was already created, you can modify the peering for transit.
+If a peering already exists, you can modify it for transit.
-1. Navigate to the virtual network. Select **Peerings** and select the peering that you want to modify.
-
- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-modify.png" alt-text="Screenshot shows select peerings." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-modify.png":::
+1. Go to the virtual network. Select **Peerings** and select the peering that you want to modify. For example, on the Spoke-RM VNet, select the SpokeRMtoHubRM peering.
1. Update the VNet peering.
 * Traffic to remote virtual network: **Allow**
 * Traffic forwarded from remote virtual network: **Allow**
- * Virtual network gateway: **Use remote virtual network's gateway**
-
- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/modify-peering-settings.png" alt-text="Screenshot shows modify peering gateway." lightbox="./media/vpn-gateway-peering-gateway-transit/modify-peering-settings.png":::
+ * Virtual network gateway or Route Server: **Use the remote virtual network's gateway or Route Server**
1. **Save** the peering settings.

### <a name="ps-same"></a>PowerShell sample
-You can also use PowerShell to create or update the peering with the example above. Replace the variables with the names of your virtual networks and resource groups.
+You can also use PowerShell to create or update the peering. Replace the variables with the names of your virtual networks and resource groups.
```azurepowershell-interactive $SpokeRG = "SpokeRG1"
In this configuration, the spoke VNet **Spoke-Classic** is in the classic deploy
For this configuration, you only need to configure the **Hub-RM** virtual network. You don't need to configure anything on the **Spoke-Classic** VNet.
-1. In the Azure portal, navigate to the **Hub-RM** virtual network, select **Peerings**, then select **+ Add**.
+1. In the Azure portal, go to the **Hub-RM** virtual network, select **Peerings**, then select **+ Add**.
1. On the **Add peering** page, configure the following values:
 * Peering link name: Name the link. Example: **HubRMToClassic**
 * Traffic to remote virtual network: **Allow**
 * Traffic forwarded from remote virtual network: **Allow**
- * Virtual network gateway: **Use this virtual network's gateway**
- * Remote virtual network: **Classic**
+ * Virtual network gateway or Route Server: **Use this virtual network's gateway or Route Server**
+ * Peering link name: This value disappears when you select Classic for the virtual network deployment model.
+ * Virtual network deployment model: **Classic**
+ * I know my resource ID: Leave this cleared. You only need to select it if you don't have read access to the virtual network or subscription you want to peer with.
:::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-classic.png" alt-text="Add peering page for Spoke-Classic" lightbox="./media/vpn-gateway-peering-gateway-transit/peering-classic.png"::: 1. Verify the subscription is correct, then select the virtual network from the dropdown. 1. Select **Add** to add the peering.
-1. Verify the peering status as **Connected** on the Hub-RM virtual network.
+1. Verify the peering status as **Connected** on the Hub-RM virtual network.
For this configuration, you don't need to configure anything on the **Spoke-Classic** virtual network. Once the status shows **Connected**, the spoke virtual network can use the connectivity through the VPN gateway in the hub virtual network.

### <a name="ps-different"></a>PowerShell sample
-You can also use PowerShell to create or update the peering with the example above. Replace the variables and subscription ID with the values of your virtual network and resource groups, and subscription. You only need to create virtual network peering on the hub virtual network.
+You can also use PowerShell to create or update the peering. Replace the variables and subscription ID with the values of your virtual network and resource groups, and subscription. You only need to create virtual network peering on the hub virtual network.
```azurepowershell-interactive $HubRG = "HubRG1"
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
The following rule groups and rules are available when you use Azure Web Applica
|941150|XSS Filter - Category 5: Disallowed HTML Attributes|
|941160|NoScript XSS InjectionChecker: HTML Injection|
|941170|NoScript XSS InjectionChecker: Attribute Injection|
-|941180|Node-Validator Blacklist Keywords|
+|941180|Node-Validator Blocklist Keywords|
|941190|XSS using style sheets|
|941200|XSS using VML frames|
|941210|XSS using obfuscated JavaScript|
The following rule groups and rules are available when you use Azure Web Applica
|941370|JavaScript global variable found|
|941380|AngularJS client side template injection detected|
->[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ### <a name="drs942-21"></a> SQLI: SQL injection |RuleId|Description| |||
The following rule groups and rules are available when you use Azure Web Applica
|941150|XSS Filter - Category 5: Disallowed HTML Attributes.|
|941160|NoScript XSS InjectionChecker: HTML Injection.|
|941170|NoScript XSS InjectionChecker: Attribute Injection.|
-|941180|Node-Validator Blacklist Keywords.|
+|941180|Node-Validator Blocklist Keywords.|
|941190|XSS Using style sheets.|
|941200|XSS using VML frames.|
|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)).|
The following rule groups and rules are available when you use Azure Web Applica
|941370|JavaScript global variable found.|
|941380|AngularJS client side template injection detected.|
->[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ### <a name="drs942-20"></a> SQLI: SQL injection |RuleId|Description| |||
The following rule groups and rules are available when you use Azure Web Applica
|941150|XSS Filter - Category 5: Disallowed HTML Attributes.|
|941160|NoScript XSS InjectionChecker: HTML Injection.|
|941170|NoScript XSS InjectionChecker: Attribute Injection.|
-|941180|Node-Validator Blacklist Keywords.|
+|941180|Node-Validator Blocklist Keywords.|
|941190|IE XSS Filters - Attack Detected.|
|941200|IE XSS Filters - Attack Detected.|
|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) found.|
The following rule groups and rules are available when you use Azure Web Applica
|941340|IE XSS Filters - Attack Detected.|
|941350|UTF-7 Encoding IE XSS - Attack Detected.|
->[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ### <a name="drs942-11"></a> SQLI: SQL injection |RuleId|Description| |||
The following rule groups and rules are available when you use Azure Web Applica
|941150|XSS Filter - Category 5: Disallowed HTML Attributes.|
|941160|NoScript XSS InjectionChecker: HTML Injection.|
|941170|NoScript XSS InjectionChecker: Attribute Injection.|
-|941180|Node-Validator Blacklist Keywords.|
+|941180|Node-Validator Blocklist Keywords.|
|941190|XSS Using style sheets.|
|941200|XSS using VML frames.|
|941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)).|
The following rule groups and rules are available when you use Azure Web Applica
|941340|IE XSS Filters - Attack Detected.|
|941350|UTF-7 Encoding IE XSS - Attack Detected.|
->[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
- ### <a name="drs942-10"></a> SQLI: SQL injection |RuleId|Description| |||
web-application-firewall Waf Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-geo-filtering.md
You can configure a geo-filtering policy for your Azure Front Door instance by u
| TM | Turkmenistan|
| TN | Tunisia|
| TO | Tonga|
-| TR | Turkey|
+| TR | Türkiye|
| TT | Trinidad and Tobago|
| TV | Tuvalu|
| TW | Taiwan|
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
The following rule groups and rules are available when using Web Application Fir
|941150|XSS Filter - Category 5: Disallowed HTML Attributes|
|941160|NoScript XSS InjectionChecker: HTML Injection|
|941170|NoScript XSS InjectionChecker: Attribute Injection|
-|941180|Node-Validator Blacklist Keywords|
+|941180|Node-Validator Blocklist Keywords|
|941190|XSS Using style sheets|
|941200|XSS using VML frames|
|941210|XSS using obfuscated JavaScript|
The following rule groups and rules are available when using Web Application Fir
|941380|AngularJS client side template injection detected|

>[!NOTE]
-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
### <a name="drs942-21"></a> SQLI - SQL Injection |RuleId|Description|
web-application-firewall Rate Limiting Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-configure.md
+
+ Title: Create rate limiting custom rules for Application Gateway WAF v2 (preview)
+
+description: Learn how to configure rate limit custom rules for Application Gateway WAF v2.
+++ Last updated : 08/16/2023++++
+# Create rate limiting custom rules for Application Gateway WAF v2 (preview)
+
+> [!IMPORTANT]
+> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Rate limiting enables you to detect and block abnormally high levels of traffic destined for your application. Rate limiting works by counting all traffic that matches the configured rate limit rule and applying the configured action to traffic that matches the rule and exceeds the configured threshold. For more information, see [Rate limiting overview](rate-limiting-overview.md).
+
+## Configure rate limit custom rules
+
+Use the following information to configure rate limit rules for Application Gateway WAF v2.
+
+**Scenario One** - Create a rule that rate-limits traffic by client IP address when it exceeds the configured threshold, matching all traffic.
+
+#### [Portal](#tab/browser)
+
+1. Open an existing Application Gateway WAF Policy
+1. Select Custom Rules
+1. Add Custom Rule
+1. Add Name for the Custom Rule
+1. Select the Rate limit Rule Type radio button
+1. Enter a Priority for the rule
+1. Choose 1 minute for Rate limit duration
+1. Enter 100 for Rate limit threshold (requests), matching the PowerShell and CLI samples that follow
+1. Select Client address for Group rate limit traffic by
+1. Under Conditions, choose IP address for Match Type
+1. For Operation, select the Does not contain radio button
+1. For match condition, under IP address or range, enter 255.255.255.255/32
+1. Leave the action setting as Deny traffic
+1. Select Add to add the custom rule to the policy
+1. Select Save to save the configuration and make the custom rule active for the WAF policy.
+
+#### [PowerShell](#tab/powershell)
+
+```azurepowershell
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator IPMatch -MatchValue 255.255.255.255/32 -NegationCondition $True
+$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName ClientAddr
+$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable
+$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name ClientIPRateLimitRule -Priority 90 -RateLimitDuration OneMin -RateLimitThreshold 100 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled
+```
+#### [CLI](#tab/cli)
+```azurecli
+az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name ClientIPRateLimitRule --priority 90 --rule-type RateLimitRule --rate-limit-threshold 100 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"ClientAddr"'}]}]'
+az network application-gateway waf-policy custom-rule match-condition add --match-variables RemoteAddr --operator IPMatch --policy-name ExamplePolicy --name ClientIPRateLimitRule --resource-group ExampleRG --value 255.255.255.255/32 --negate true
+```
+* * *
+
+**Scenario Two** - Create a rate limit custom rule to match all traffic except traffic originating from the United States. Traffic is grouped, counted, and rate limited based on the geolocation of the client source IP address.
+
+#### [Portal](#tab/browser)
+
+1. Open an existing Application Gateway WAF Policy
+1. Select Custom Rules
+1. Add Custom Rule
+1. Add Name for the Custom Rule
+1. Select the Rate limit Rule Type radio button
+1. Enter a Priority for the rule
+1. Choose 1 minute for Rate limit duration
+1. Enter 500 for Rate limit threshold (requests)
+1. Select Geo location for Group rate limit traffic by
+1. Under Conditions, choose Geo location for Match Type
+1. In the Match variables section, select RemoteAddr for Match variable
+1. Select the Is not radio button for operation
+1. Select United States for Country/Region
+1. Leave the action setting as Deny traffic
+1. Select Add to add the custom rule to the policy
+1. Select Save to save the configuration and make the custom rule active for the WAF policy.
+
+#### [PowerShell](#tab/powershell)
+```azurepowershell
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator GeoMatch -MatchValue "US" -NegationCondition $True
+$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName GeoLocation
+$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable
+$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name GeoRateLimitRule -Priority 95 -RateLimitDuration OneMin -RateLimitThreshold 500 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled
+```
+#### [CLI](#tab/cli)
+```azurecli
+az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name GeoRateLimitRule --priority 95 --rule-type RateLimitRule --rate-limit-threshold 500 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"GeoLocation"'}]}]'
+az network application-gateway waf-policy custom-rule match-condition add --match-variables RemoteAddr --operator GeoMatch --policy-name ExamplePolicy --name GeoRateLimitRule --resource-group ExampleRG --value US --negate true
+```
+* * *
+
+**Scenario Three** - Create a rate limit custom rule matching all traffic for the login page, using the GroupBy None variable. This groups and counts all traffic that matches the rule as one entity and applies the action across all traffic matching the rule (/login).
+
+#### [Portal](#tab/browser)
+
+1. Open an existing Application Gateway WAF Policy
+1. Select Custom Rules
+1. Add Custom Rule
+1. Add Name for the Custom Rule
+1. Select the Rate limit Rule Type radio button
+1. Enter a Priority for the rule
+1. Choose 1 minute for Rate limit duration
+1. Enter 100 for Rate limit threshold (requests)
+1. Select None for Group rate limit traffic by
+1. Under Conditions, choose String for Match Type
+1. In the Match variables section, select RequestUri for Match variable
+1. Select the Is radio button for operation, so the rule matches traffic to the login page as described in this scenario
+1. For Operator select contains
+1. Enter the login page path for match Value. In this example, we use /login
+1. Leave the action setting as Deny traffic
+1. Select Add to add the custom rule to the policy
+1. Select Save to save the configuration and make the custom rule active for the WAF policy.
+
+#### [PowerShell](#tab/powershell)
+```azurepowershell
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RequestUri
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator Contains -MatchValue "/login" -NegationCondition $False
+$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName None
+$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable
+$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name LoginRateLimitRule -Priority 99 -RateLimitDuration OneMin -RateLimitThreshold 100 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled
+```
+#### [CLI](#tab/cli)
+```azurecli
+az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name LoginRateLimitRule --priority 99 --rule-type RateLimitRule --rate-limit-threshold 100 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"None"'}]}]'
+az network application-gateway waf-policy custom-rule match-condition add --match-variables RequestUri --operator Contains --policy-name ExamplePolicy --name LoginRateLimitRule --resource-group ExampleRG --value '/login'
+```
+* * *
+
+## Next steps
+
+[Customize web application firewall rules](application-gateway-customize-waf-rules-portal.md)
web-application-firewall Rate Limiting Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-overview.md
+
+ Title: Azure Web Application Firewall (WAF) rate limiting (preview)
+description: This article is an overview of Azure Web Application Firewall (WAF) on Application Gateway rate limiting.
++++ Last updated : 08/16/2023+++
+# What is rate limiting for Web Application Firewall on Application Gateway (preview)?
+
+> [!IMPORTANT]
+> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Rate limiting for Web Application Firewall on Application Gateway (preview) allows you to detect and block abnormally high levels of traffic destined for your application. By using rate limiting on Application Gateway WAF_v2, you can mitigate many types of denial-of-service attacks, protect against clients that have accidentally been misconfigured to send large volumes of requests in a short time period, or control traffic rates to your site from specific geographies.
+
+## Rate limiting policies
+
+Rate limiting is configured using custom WAF rules in a policy.
+
+When you configure a rate limit rule, you must specify the threshold: the number of requests allowed within the specified time period. Rate limiting on Application Gateway WAF_v2 uses a sliding window algorithm to determine when traffic has breached the threshold and needs to be dropped. During the first window in which the threshold for the rule is breached, any more traffic matching the rate limit rule is dropped. From the second window onwards, traffic up to the threshold within the configured window is allowed, producing a throttling effect.
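To make the sliding window idea concrete, here's an illustrative PowerShell sketch. It isn't an Azure cmdlet, and the weighting scheme is an assumption for illustration; the service's internal algorithm isn't documented beyond the description above.

```azurepowershell
# Hypothetical sliding window estimate: weight the previous window's count by
# the fraction of it still inside the sliding window, then add the current
# window's count. If the estimate exceeds the threshold, drop the request.
function Test-SlidingWindowBreach {
    param(
        [int]$PreviousWindowCount,   # requests counted in the prior fixed window
        [int]$CurrentWindowCount,    # requests counted so far in this window
        [double]$ElapsedFraction,    # progress through the current window (0.0-1.0)
        [int]$Threshold              # configured rate limit threshold
    )
    $estimate = $PreviousWindowCount * (1 - $ElapsedFraction) + $CurrentWindowCount
    return $estimate -gt $Threshold
}

# 120 requests in the last window, 50 so far, halfway through, threshold 100:
# 120 * 0.5 + 50 = 110 > 100, so further matching traffic would be dropped.
Test-SlidingWindowBreach -PreviousWindowCount 120 -CurrentWindowCount 50 `
    -ElapsedFraction 0.5 -Threshold 100
```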
+
+You must also specify a match condition, which tells the WAF when to activate the rate limit. You can configure multiple rate limit rules that match different variables and paths within your policy.
+
+Application Gateway WAF_v2 also introduces a *GroupByUserSession* variable, which must be configured. The *GroupByUserSession* specifies how requests are grouped and counted for a matching rate limit rule.
+
+The following three *GroupByVariables* are currently available:
+- *ClientAddr* - This is the default setting. Each rate limit threshold and mitigation applies independently to every unique source IP address.
+- *GeoLocation* - Traffic is grouped by geography based on a geo-match on the client IP address. For a rate limit rule, traffic from the same geography is grouped together.
+- *None* - All traffic is grouped together and counted against the threshold of the Rate Limit rule. When the threshold is breached, the action triggers against all traffic matching the rule and doesn't maintain independent counters for each client IP address or geography. It's recommended to use *None* with specific match conditions such as a sign-in page or a list of suspicious User-Agents.
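As a sketch of how these options surface in Azure PowerShell, the grouping is expressed through the GroupByVariable objects used by the cmdlets in the configuration article linked below:

```azurepowershell
# Count matching requests per unique client IP address (the default grouping).
# Swap ClientAddr for GeoLocation or None to change how traffic is grouped.
$groupBy = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName ClientAddr
$session = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupBy
```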
+
+## Rate limiting details
+
+The configured rate limit thresholds are counted and tracked independently for each endpoint the Web Application Firewall policy is attached to. For example, a single WAF policy attached to five different listeners maintains independent counters and threshold enforcement for each of the listeners.
+
+The rate limit thresholds aren't always enforced exactly as defined, so rate limiting shouldn't be used for fine-grained control of application traffic. Instead, it's recommended for mitigating anomalous rates of traffic and for maintaining application availability.
+
+The sliding window algorithm blocks all matching traffic for the first window in which the threshold is exceeded, and then throttles traffic in future windows. Use caution when defining thresholds for configuring wide-matching rules with either *GeoLocation* or *None* as the *GroupByVariables*. Incorrectly configured thresholds could lead to frequent short outages for matching traffic.
++
+## Next step
+
+- [Create rate limiting custom rules for Application Gateway WAF v2 (preview)](rate-limiting-configure.md)
web-application-firewall Waf Sensitive Data Protection Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-sensitive-data-protection-configure.md
Previously updated : 06/13/2023 Last updated : 08/15/2023 # How to mask sensitive data on Azure Web Application Firewall
$logScrubbingRuleConfig = New-AzApplicationGatewayFirewallPolicyLogScrubbingConf
```

#### [CLI](#tab/cli)
-The Azure CLI commands to enable and configure Sensitive Data Protection are coming soon.
+Use the following Azure CLI commands to [create and configure](/cli/azure/network/application-gateway/waf-policy/policy-setting) Log Scrubbing rules for Sensitive Data Protection:
+```azurecli
+az network application-gateway waf-policy policy-setting update -g <MyResourceGroup> --policy-name <MyPolicySetting> --log-scrubbing-state <Enabled/Disabled> --scrubbing-rules "[{state:<Enabled/Disabled>,match-variable:<MatchVariable>,selector-match-operator:<Operator>,selector:<Selector>}]"
+```
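For example, a hypothetical invocation (the resource group, policy name, and the Authorization header selector are illustrative) that scrubs the Authorization request header from WAF logs:

```azurecli
az network application-gateway waf-policy policy-setting update -g MyResourceGroup --policy-name MyWafPolicy --log-scrubbing-state Enabled --scrubbing-rules "[{state:Enabled,match-variable:RequestHeaderNames,selector-match-operator:Equals,selector:Authorization}]"
```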