Updates from: 08/18/2022 01:08:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
Previously updated : 06/23/2022 Last updated : 08/17/2022
The following providers offer FIDO2 security keys of different form factors that
| Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ |
| OneSpan Inc. | ![n] | ![y]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |
| Swissbit | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.swissbit.com/en/products/ishield-fido2/ |
-| Thales Group | ![n] | ![y]| ![y]| ![n]| ![n] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
+| Thales Group | ![n] | ![y]| ![y]| ![n]| ![y] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |
| TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ |
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md
Previously updated : 07/21/2022 Last updated : 08/17/2022
+zone_pivot_groups: enterprise-apps-all
+ #customer intent: As an admin, I want to hide an enterprise application from the user's experience so that it is not listed in the user's Active Directory access portals or Microsoft 365 launchers
Learn how to hide enterprise applications in Azure Active Directory. When an app
## Hide an application from the end user
+
Use the following steps to hide an application from My Apps portal and Microsoft 365 application launcher.

1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator for your directory.
Use the following steps to hide an application from My Apps portal and Microsoft
1. Select **No** for the **Visible to users?** question.
1. Select **Save**.
+
> [!NOTE]
> These instructions apply only to Enterprise applications.
-## Use Azure AD PowerShell to hide an application
-To hide an application from the My Apps portal, you can manually add the HideApp tag to the service principal for the application. Run the following [AzureAD PowerShell](/powershell/module/azuread/#service_principals) commands to set the application's **Visible to Users?** property to **No**.
+
+To hide an application from the My Apps portal, you can manually add the HideApp tag to the service principal for the application. Run the following AzureAD PowerShell commands to set the application's **Visible to Users?** property to **No**.
```PowerShell
Connect-AzureAD

# Get the service principal for the application, read its current tags, and add HideApp.
$servicePrincipal = Get-AzureADServicePrincipal -ObjectId $objectId
$tags = $servicePrincipal.Tags
$tags += "HideApp"
Set-AzureADServicePrincipal -ObjectId $objectId -Tags $tags
```
++
+To hide an application from the My Apps portal, you can manually add the HideApp tag to the service principal for the application. Run the following Microsoft Graph PowerShell commands to set the application's **Visible to Users?** property to **No**.
+
+```PowerShell
+Connect-MgGraph
+
+$servicePrincipal = Get-MgServicePrincipal -ServicePrincipalId $objectId
+$tags = $servicePrincipal.tags
+$tags += "HideApp"
+Update-MgServicePrincipal -ServicePrincipalID $objectId -Tags $tags
+```
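As an optional sanity check (not part of the documented steps), you can read the service principal back with the same Microsoft Graph PowerShell module and confirm that the `HideApp` tag is now present:

```PowerShell
# Sketch: confirm the HideApp tag was applied.
(Get-MgServicePrincipal -ServicePrincipalId $objectId).Tags
```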
++
+To hide an enterprise application using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), run the following queries.
+
+1. Get the application you want to hide.
+
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/5f214ccd-3f74-41d7-b683-9a6d845eea4d
+ ```
+1. Update the application to hide it from users.
+
+ ```http
+ PATCH https://graph.microsoft.com/v1.0/servicePrincipals/5f214ccd-3f74-41d7-b683-9a6d845eea4d/
+ ```
+
+ Supply the following request body.
+
+ ```json
+ {
+ "tags": [
+ "HideApp"
+ ]
+ }
+ ```
+
+ >[!WARNING]
+ >If the application has other tags, you must include them in the request body. Otherwise, the query will overwrite them.
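A minimal PowerShell sketch of the same pattern, written to respect that warning: it reads the current tags first and sends them back together with `HideApp`. It assumes an existing `Connect-MgGraph` session and that `$objectId` holds the service principal's object ID.

```PowerShell
# Sketch: hide the app while preserving any tags that are already set.
$uri  = "https://graph.microsoft.com/v1.0/servicePrincipals/$objectId"
$sp   = Invoke-MgGraphRequest -Method GET -Uri $uri
$tags = @($sp.tags | Where-Object { $_ }) + "HideApp" | Select-Object -Unique   # keep existing tags, add HideApp
Invoke-MgGraphRequest -Method PATCH -Uri $uri -ContentType "application/json" -Body (@{ tags = $tags } | ConvertTo-Json)
```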
++

## Hide Microsoft 365 applications from the My Apps portal
Use the following steps to hide all Microsoft 365 applications from the My Apps
1. For **Users can only see Office 365 apps in the Office 365 portal**, select **Yes**.
1. Select **Save**.

## Next steps

- [Remove a user or group assignment from an enterprise app](./assign-user-or-group-access-portal.md)
active-directory View Applications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/view-applications-portal.md
If you created a test application named **Azure AD SAML Toolkit 1** that was use
Learn how to delete an enterprise application.

> [!div class="nextstepaction"]
-> [Delete an application](add-application-portal.md)
+> [Delete an application](delete-application-portal.md)
active-directory Lucid All Products Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lucid-all-products-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Lucid (All Products) for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Lucid (All Products).
+
+documentationcenter: ''
+
+writer: Thwimmer
+
+ms.assetid: 54a47643-8703-4ab9-96a5-a803b344ccc4
+++
+ms.devlang: na
+ Last updated : 07/20/2022+++
+# Tutorial: Configure Lucid (All Products) for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Lucid (All Products) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Lucid (All Products)](https://www.lucid.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Lucid (All Products).
+> * Remove users in Lucid (All Products) when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Lucid (All Products).
+> * Provision groups and group memberships in Lucid (All Products).
+> * [Single sign-on](./lucid-tutorial.md) to Lucid (All Products) (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Lucid (All Products) with Admin rights.
+* Confirm that you are on a Lucid Enterprise account with an up-to-date pricing plan. To upgrade, contact the Lucid sales team.
+* Contact your Lucidchart Customer Success Manager so that they can enable SCIM for your account.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Lucid (All Products)](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Lucid (All Products) to support provisioning with Azure AD
+
+1. Log in to [Lucid Admin Console](https://lucid.app/). Navigate to **Admin**.
+1. Click **App integration** in the left-hand menu.
+1. Select the **SCIM** tile.
+1. Click **Generate Token**. Lucid will populate the **Bearer Token** text field with a unique code for you to share with Azure. Copy and save the **Bearer Token**. This value is entered in the **Secret Token** field in the Provisioning tab of your Lucid (All Products) application in the Azure portal.
+
+ ![Screenshot of token generation.](media/lucid-all-products-provisioning-tutorial/generate-token.png)
+
+## Step 3. Add Lucid (All Products) from the Azure AD application gallery
+
+Add Lucid (All Products) from the Azure AD application gallery to start managing provisioning to Lucid (All Products). If you have previously set up Lucid (All Products) for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application (a PowerShell sketch follows the tips below). If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
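A hedged sketch of assignment-based scoping from PowerShell, using the AzureAD module rather than the portal steps linked above. `$userObjectId` and `$servicePrincipalObjectId` are placeholders you must supply, and `[Guid]::Empty` assumes the app exposes only the default access role.

```PowerShell
# Sketch: assign one test user to the Lucid (All Products) enterprise application.
Connect-AzureAD
New-AzureADUserAppRoleAssignment -ObjectId $userObjectId `
    -PrincipalId $userObjectId `
    -ResourceId $servicePrincipalObjectId `
    -Id ([Guid]::Empty)   # default access role; use a real app role ID if one is defined
```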
++
+## Step 5. Configure automatic user provisioning to Lucid (All Products)
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Lucid (All Products) based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Lucid (All Products) in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Lucid (All Products)**.
+
+ ![Screenshot of the Lucid (All Products) link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Lucid (All Products) Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Lucid (All Products). If the connection fails, ensure your Lucid (All Products) account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Lucid (All Products)**.
+
+1. Review the user attributes that are synchronized from Azure AD to Lucid (All Products) in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Lucid (All Products) for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Lucid (All Products) API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+   |Attribute|Type|Supported for filtering|Required by Lucid (All Products)|
+   |---|---|---|---|
+   |userName|String|✓|✓|
+   |emails[type eq "work"].value|String||✓|
+   |active|Boolean|||
+   |name.givenName|String|||
+   |name.familyName|String|||
+   |urn:ietf:params:scim:schemas:extension:lucid:2.0:User:billingCode|String|||
+   |urn:ietf:params:scim:schemas:extension:lucid:2.0:User:productLicenses.Lucidchart|String|||
+   |urn:ietf:params:scim:schemas:extension:lucid:2.0:User:productLicenses.Lucidspark|String|||
+   |urn:ietf:params:scim:schemas:extension:lucid:2.0:User:productLicenses.LucidscaleExplorer|String|||
+   |urn:ietf:params:scim:schemas:extension:lucid:2.0:User:productLicenses.LucidscaleCreator|String|||
++
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Lucid (All Products)**.
+
+1. Review the group attributes that are synchronized from Azure AD to Lucid (All Products) in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Lucid (All Products) for update operations. Select the **Save** button to commit any changes.
+
+   |Attribute|Type|Supported for filtering|Required by Lucid (All Products)|
+   |---|---|---|---|
+   |displayName|String|✓|✓|
+   |members|Reference|||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Lucid (All Products), change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Lucid (All Products) by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
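If you prefer to script this check, the following sketch queries the synchronization jobs for the service principal through Microsoft Graph. It assumes a `Connect-MgGraph` session with appropriate permissions, that `$servicePrincipalObjectId` is the Lucid (All Products) service principal, and that the synchronization endpoint is available on the Graph version your tenant uses (fall back to `beta` if the `v1.0` call fails).

```PowerShell
# Sketch: list provisioning (synchronization) jobs and their status codes.
$uri  = "https://graph.microsoft.com/v1.0/servicePrincipals/$servicePrincipalObjectId/synchronization/jobs"
$jobs = Invoke-MgGraphRequest -Method GET -Uri $uri
$jobs.value | ForEach-Object { "{0}: {1}" -f $_.id, $_.status.code }   # for example, Active or Quarantine
```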
++
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Netmotion Mobility Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netmotion-mobility-tutorial.md
- Title: 'Tutorial: Azure AD SSO integration with NetMotion Mobility'
-description: Learn how to configure single sign-on between Azure Active Directory and NetMotion Mobility.
-------- Previously updated : 08/07/2022----
-# Tutorial: Azure AD SSO integration with NetMotion Mobility
-
-In this tutorial, you'll learn how to integrate NetMotion Mobility with Azure Active Directory (Azure AD). When you integrate NetMotion Mobility with Azure AD, you can:
-
-* Control in Azure AD who has access to NetMotion Mobility.
-* Enable your users to be automatically signed-in to NetMotion Mobility with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* NetMotion Mobility single sign-on (SSO) enabled subscription.
-* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
-For more information, see [Azure built-in roles](../roles/permissions-reference.md).
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-* NetMotion Mobility supports **SP** initiated SSO.
-* NetMotion Mobility supports **Just In Time** user provisioning.
-
-## Add NetMotion Mobility from the gallery
-
-To configure the integration of NetMotion Mobility into Azure AD, you need to add NetMotion Mobility from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **NetMotion Mobility** in the search box.
-1. Select **NetMotion Mobility** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD SSO for NetMotion Mobility
-
-Configure and test Azure AD SSO with NetMotion Mobility using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in NetMotion Mobility.
-
-To configure and test Azure AD SSO with NetMotion Mobility, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure NetMotion Mobility SSO](#configure-netmotion-mobility-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create NetMotion Mobility test user](#create-netmotion-mobility-test-user)** - to have a counterpart of B.Simon in NetMotion Mobility that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **NetMotion Mobility** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-
-1. On the **Basic SAML Configuration** section, perform the following steps:
-
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<MobilityServerName>.<CustomerDomain>.<extension>/`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<MobilityServerName>.<CustomerDomain>.<extension>/saml/login`
-
- c. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<MobilityServerName>.<CustomerDomain>.<extension>/`
-
- > [!Note]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [NetMotion Mobility Client support team](mailto:nm-support@absolute.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
-
-1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
-
- ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
-
-1. On the **Set up NetMotion Mobility** section, copy the appropriate URL(s) based on your requirement.
-
- ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to NetMotion Mobility.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **NetMotion Mobility**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure NetMotion Mobility SSO
-
-To configure single sign-on on **NetMotion Mobility** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [NetMotion Mobility support team](mailto:nm-support@absolute.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create NetMotion Mobility test user
-
-In this section, a user called B.Simon is created in NetMotion Mobility. NetMotion Mobility supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in NetMotion Mobility, a new one is created after authentication.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-* Click on **Test this application** in Azure portal. This will redirect to NetMotion Mobility Sign on URL where you can initiate the login flow.
-
-* Go to NetMotion Mobility Sign on URL directly and initiate the login flow from there.
-
-* You can use Microsoft My Apps. When you click the NetMotion Mobility tile in the My Apps, this will redirect to NetMotion Mobility Sign on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-
-## Next steps
-
-Once you configure NetMotion Mobility you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Snowflake Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-tutorial.md
Previously updated : 07/14/2022 Last updated : 08/16/2022

# Tutorial: Azure AD SSO integration with Snowflake
To configure Azure AD integration with Snowflake, you need the following items:
* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/). * Snowflake single sign-on enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
> [!NOTE]
> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-4. In the **Basic SAML Configuration** section, perform the following steps, if you wish to configure the application in **IDP** initiated mode:
+1. In the **Basic SAML Configuration** section, perform the following steps, if you wish to configure the application in **IDP** initiated mode:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<SNOWFLAKE-URL>.snowflakecomputing.com`
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** text box, type a URL using the following pattern: `https://<SNOWFLAKE-URL>.snowflakecomputing.com/fed/login`
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in SP initiated mode:
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
a. In the **Sign-on URL** text box, type a URL using the following pattern: `https://<SNOWFLAKE-URL>.snowflakecomputing.com`
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<SNOWFLAKE-URL>.snowflakecomputing.com/fed/logout`

> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Snowflake Client support team](https://support.snowflake.net/s/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Logout URL. Contact [Snowflake Client support team](https://support.snowflake.net/s/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
-6. On the **Set up Snowflake** section, copy the appropriate URL(s) as per your requirement.
+1. On the **Set up Snowflake** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Select the **All Queries** and click **Run**.
-
-![Snowflake sql](./media/snowflake-tutorial/certificate.png)
+ ![Snowflake sql](./media/snowflake-tutorial/certificate.png)
-```
-CREATE [ OR REPLACE ] SECURITY INTEGRATION [ IF NOT EXISTS ]
+ ```
+ CREATE [ OR REPLACE ] SECURITY INTEGRATION [ IF NOT EXISTS ]
  TYPE = SAML2
  ENABLED = TRUE | FALSE
  SAML2_ISSUER = '<EntityID/Issuer value which you have copied from the Azure portal>'
  SAML2_SSO_URL = '<Login URL value which you have copied from the Azure portal>'
- SAML2_PROVIDER = 'AzureAD'
+ SAML2_PROVIDER = 'CUSTOM'
  SAML2_X509_CERT = '<Paste the content of downloaded certificate from Azure portal>'
  [ SAML2_SP_INITIATED_LOGIN_PAGE_LABEL = '<string_literal>' ]
  [ SAML2_ENABLE_SP_INITIATED = TRUE | FALSE ]
CREATE [ OR REPLACE ] SECURITY INTEGRATION [ IF NOT EXISTS ]
  [ SAML2_FORCE_AUTHN = TRUE | FALSE ]
  [ SAML2_SNOWFLAKE_ISSUER_URL = '<string_literal>' ]
  [ SAML2_SNOWFLAKE_ACS_URL = '<string_literal>' ]
-```
+ ```
If you are using a new Snowflake URL with an organization name as the login URL, it is necessary to update the following parameters:
- Alter the integration to add Snowflake Issuer URL and SAML2 Snowflake ACS URL, please follow the step-6 in [this](https://community.snowflake.com/s/article/HOW-TO-SETUP-SSO-WITH-ADFS-AND-THE-SNOWFLAKE-NEW-URL-FORMAT-OR-PRIVATELINK) article for more information.
+Alter the integration to add the Snowflake Issuer URL and SAML2 Snowflake ACS URL. For more information, follow step 6 in [this article](https://community.snowflake.com/s/article/HOW-TO-SETUP-SSO-WITH-ADFS-AND-THE-SNOWFLAKE-NEW-URL-FORMAT-OR-PRIVATELINK).
1. [ SAML2_SNOWFLAKE_ISSUER_URL = '<string_literal>' ]
To enable Azure AD users to log in to Snowflake, they must be provisioned into S
use role accountadmin;
CREATE USER britta_simon PASSWORD = '' LOGIN_NAME = 'BrittaSimon@contoso.com' DISPLAY_NAME = 'Britta Simon';
```
->[!NOTE]
->Manually provisioning is uneccesary, if users and groups are provisioned with a SCIM integration. See how to enable auto provisioning for [Snowflake](snowflake-provisioning-tutorial.md).
+> [!NOTE]
+> Manual provisioning is unnecessary if users and groups are provisioned with a SCIM integration. See how to enable automatic provisioning for [Snowflake](snowflake-provisioning-tutorial.md).
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Snowflake Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Snowflake Sign-on URL where you can initiate the login flow.
-* Go to Snowflake Sign on URL directly and initiate the login flow from there.
+* Go to Snowflake Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:

* Click on **Test this application** in Azure portal and you should be automatically signed in to the Snowflake for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Snowflake tile in the My Apps, if configured in SP mode you would be redirected to the application Sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Snowflake for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Snowflake tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Snowflake for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Splunkenterpriseandsplunkcloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/splunkenterpriseandsplunkcloud-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Azure AD SSO for Splunk Enterprise and Splunk Cloud | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Azure AD SSO for Splunk Enterprise and Splunk Cloud'
description: Learn how to configure single sign-on between Azure Active Directory and Azure AD SSO for Splunk Enterprise and Splunk Cloud.
Previously updated : 07/16/2021 Last updated : 08/05/2022
-# Tutorial: Azure Active Directory integration with Azure AD SSO for Splunk Enterprise and Splunk Cloud
+# Tutorial: Azure AD SSO integration with Azure AD SSO for Splunk Enterprise and Splunk Cloud
In this tutorial, you'll learn how to integrate Azure AD SSO for Splunk Enterprise and Splunk Cloud with Azure Active Directory (Azure AD). When you integrate Azure AD SSO for Splunk Enterprise and Splunk Cloud with Azure AD, you can:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Azure AD SSO for Splunk Enterprise and Splunk Cloud SSO
- To configure single sign-on on **Azure AD SSO for Splunk Enterprise and Splunk Cloud** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Azure AD SSO for Splunk Enterprise and Splunk Cloud support team](https://www.splunk.com/en_us/about-splunk/contact-us.html). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Log in to the Splunk Enterprise and Splunk Cloud website as an administrator.
+
+1. Go to the **Settings > Access Controls** menu option.
+
+1. Click on the **Authentication method** link, and then click on the **SAML** radio button.
+
+1. Click on the **Configure Splunk to use SAML** link below the SAML radio button.
+
+ ![Screenshot that shows Configure Splunk to use SAML.](./media/splunk-enterprise-and-splunk-cloud-tutorial/configure-splunk.png)
+
+1. Perform the following steps in the **SAML Configuration** section:
+
+ ![Screenshot that shows Configure Splunk to SAML configuration.](./media/splunk-enterprise-and-splunk-cloud-tutorial/sso-configuration.png)
+
+ a. Click on the **Select File** button to upload the **Federation Metadata XML** file, which you have downloaded from Azure portal.
+
+ b. In the **Entity ID** field, enter the **Identifier** value, which you have copied from the Azure portal.
+
+   c. Check the **Verify SAML response** checkbox. This will be a requirement moving forward in Splunk Cloud for security best practices, so make sure it is checked.
+
+1. Scroll down within the configuration dialogue and click on the **Alias** section. Enter the following values in each attribute:
+
+ a. **Role alias**: `http://schemas.microsoft.com/ws/2008/06/identity/claims/groups`
+
+   b. **RealName alias**: `http://schemas.microsoft.com/identity/claims/displayname`
+
+ c. **Mail alias**: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress`
+
+ ![Screenshot that shows role mapping.](./media/splunk-enterprise-and-splunk-cloud-tutorial/role-alias.png)
+
+1. Scroll down to the **Advanced Settings** section and perform the following steps:
+
+ ![Screenshot that shows Advanced Settings.](./media/splunk-enterprise-and-splunk-cloud-tutorial/advanced-settings.png)
+
+ a. Click the **Name Id Format** and select **Email Address** from the dropdown.
+
+ b. In the **Fully qualified domain name or IP of the load balancer** text box, enter the value as: `https://<acme>.splunkcloud.com`.
+
+   c. Set **Redirect port – load balancer port** to `0` (zero) and click **Save**.
+
+1. Click on the green **New Group** button in the upper right hand corner of the SAML Groups configuration screen in Splunk.
+
+1. In the **Create new SAML Group** configuration dialogue, paste the first Object ID into the **Group Name** field. Then, from the **Available Item(s)** box, choose one or more **Splunk Roles** that you wish to map to users assigned to that group; the items you choose move into the **Selected Item(s)** box. Click the green **Save** button once finished.
### Create Azure AD SSO for Splunk Enterprise and Splunk Cloud test user
active-directory Webcargo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webcargo-tutorial.md
Previously updated : 03/21/2022 Last updated : 08/01/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Webcargo SSO
-To configure single sign-on on **Webcargo** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Webcargo support team](mailto:tickets@webcargo.uservoice.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. To automate the configuration within Webcargo, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+1. After adding the extension to the browser, clicking on **Set up Webcargo** directs you to the Webcargo Single Sign-On application. From there, provide the admin credentials to sign in to Webcargo Single Sign-On. The browser extension will automatically configure the application for you and automate steps 3-6.
+
+ ![Setup configuration](common/setup-sso.png)
+
+1. If you want to set up Webcargo manually, open a new web browser window and sign in to your Webcargo company site as an administrator and perform the following steps:
+
+1. Click on **Team** in the left-side navigation, select the **SSO idP** tab, and then enable **Microsoft Azure**.
+
+ ![Configure Single Sign-On settings icon](./media/webcargo-tutorial/webcargo-team.png)
+
+1. In the **Azure Configuration** section, paste the **Login URL** value that you copied from the Azure portal into the **Login URL** textbox, and then click **Choose File** to upload the **Certificate (Base64)** file that you downloaded from the Azure portal.
+
+ ![Configure Single Sign-On Choose File](./media/webcargo-tutorial/xml-choose.png)
+
+1. Click **Save**.
### Create Webcargo test user
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the Webcargo for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Webcargo tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Webcargo for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Webcargo tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Webcargo for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
active-directory Admin Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md
description: Learn how to manage your verifiable credential deployment using Admin API. documentationCenter: '' -+
The response contains the following properties
| Property | Type | Description | | -- | -- | -- |
-|`attestations`| [idTokenAttestation](#idtokenattestation-type) or [idTokenHintAttestation](#idtokenhintattestation-type) and/or [verifiablePresentationAttestation](#verifiablepresentationattestation-type) and/or [selfIssuedAttestation](#selfissuedattestation-type) and/or [accessTokenAttestation](#accesstokenattestation-type) (array) | describing supported inputs for the rules |
+|`attestations`| [attestations](#attestations-type)| describing supported inputs for the rules |
|`validityInterval` | number | this value shows the lifespan of the credential |
|`vc`| vcType array | types for this contract |
|`customStatusEndpoint`| [customStatusEndpoint](#customstatusendpoint-type) (optional) | status endpoint to include in the verifiable credential for this contract |

If the `customStatusEndpoint` property isn't specified, then the `anonymous` status endpoint is used.
+#### attestations type
+
+| Property | Type | Description |
+| -- | -- | -- |
+|`idTokens`| [idTokenAttestation](#idtokenattestation-type) (array) (optional) | describes id token inputs|
+|`idTokenHints`| [idTokenHintAttestation](#idtokenhintattestation-type) (array) (optional) | describes id token hint inputs |
+|`presentations`| [verifiablePresentationAttestation](#verifiablepresentationattestation-type) (array) (optional) | describes verifiable presentations inputs |
+|`selfIssued`| [selfIssuedAttestation](#selfissuedattestation-type) (array) (optional) | describes self issued inputs |
+|`accessTokens`| [accessTokenAttestation](#accesstokenattestation-type) (array) (optional) | describes access token inputs |
+
#### idTokenAttestation type

| Property | Type | Description |
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
Title: Customize your Microsoft Entra Verified ID
description: This article shows you how to create your own custom verifiable credential. -+
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
Title: Introduction to Microsoft Entra Verified ID
description: An overview of Azure Verifiable Credentials. -+ editor:
In order to be able to resolve DID documents, DIDs are typically recorded on an
Enables real people to use decentralized identities and Verifiable Credentials. Authenticator creates DIDs, facilitates issuance and presentation requests for verifiable credentials and manages the backup of your DID's seed through an encrypted wallet file.

**4. Microsoft Resolver**.
-An API that connects to our ION node to look up and resolve DIDs using the ```did:ion``` method and return the DID Document Object (DDO). The DDO includes DPKI metadata associated with the DID such as public keys and service endpoints.
+An API that looks up and resolves DIDs using the ```did:web``` or the ```did:ion``` methods and returns the DID Document Object (DDO). The DDO includes DPKI metadata associated with the DID such as public keys and service endpoints.
-**5. Azure Active Directory Verified Credentials Service**.
-An issuance and verification service in Azure and a REST API for [W3C Verifiable Credentials](https://www.w3.org/TR/vc-data-model/) that are signed with the ```did:ion``` method. They enable identity owners to generate, present, and verify claims. This forms the basis of trust between users of the systems.
+**5. Entra Verified ID Service**.
+An issuance and verification service in Azure and a REST API for [W3C Verifiable Credentials](https://www.w3.org/TR/vc-data-model/) that are signed with the ```did:web``` or the ```did:ion``` method. They enable identity owners to generate, present, and verify claims. This forms the basis of trust between users of the systems.
## A sample scenario
active-directory Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/error-codes.md
description: Reference of error codes for Microsoft Entra Verified ID APIs documentationCenter: '' -+
active-directory Get Started Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/get-started-request-api.md
Title: How to call the Request Service REST API
description: Learn how to issue and verify by using the Request Service REST API documentationCenter: '' -+
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
Title: Create a free Azure Active Directory developer tenant
description: This article shows you how to create a developer account. -+
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
Title: Link your Domain to your Decentralized Identifier (DID) - Microsoft Entra
description: Learn how to DNS Bind? documentationCenter: '' -+
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
Title: How to Revoke a Verifiable Credential as an Issuer - Entra Verified ID
description: Learn how to revoke a Verifiable Credential that you've issued documentationCenter: '' -+
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md
Title: Opt out of the Microsoft Entra Verified ID
description: Learn how to Opt Out of Entra Verified ID documentationCenter: '' -+
active-directory How To Register Didwebsite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-register-didwebsite.md
Title: How to register your website ID
description: Learn how to register your website ID for did:web documentationCenter: '' -+
active-directory How To Use Quickstart Idtoken https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-idtoken.md
Title: Create verifiable credentials for ID tokens
description: Learn how to use a quickstart to create custom credentials for ID tokens documentationCenter: '' -+
active-directory How To Use Quickstart Selfissued https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-selfissued.md
Title: Create verifiable credentials for self-asserted claims
description: Learn how to use a quickstart to create custom credentials for self-issued claims documentationCenter: '' -+
active-directory How To Use Quickstart Verifiedemployee https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-verifiedemployee.md
description: In this tutorial, you learn how to issue verifiable credentials, fr
-+ Last updated 06/22/2022
active-directory How To Use Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart.md
Title: Create verifiable credentials for an ID token hint
description: In this article, you learn how to use a quickstart to create a custom verifiable credential for an ID token hint. documentationCenter: '' -+
active-directory How Use Vcnetwork https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-use-vcnetwork.md
Title: How to use the Microsoft Entra Verified ID Network
description: In this article, you learn how to use the Microsoft Entra Verified ID Network to verify credentials documentationCenter: '' -+
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
Title: Azure Active Directory architecture overview
+ Title: Microsoft Entra Verified ID architecture overview
description: Learn foundational information to plan and design your solution documentationCenter: ''
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
description: Learn how to issue a verifiable credential that you've issued. documentationCenter: '' -+
active-directory Issuer Openid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuer-openid.md
Title: Issuer service communication examples - Entra Verified ID description: Details of communication between identity provider and issuer service -+
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
description: Learn how to start a presentation request in Verifiable Credentials documentationCenter: '' -+
active-directory Rules And Display Definitions Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/rules-and-display-definitions-model.md
Title: Rules and Display Definition Reference
description: Rules and Display Definition Reference documentationCenter: '' -+
active-directory Vc Network Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/vc-network-api.md
description: Learn how to use the Entra Verified ID Network API documentationCenter: '' -+
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
description: In this tutorial, you learn how to issue verifiable credentials by
-+ Previously updated : 06/16/2022 Last updated : 08/16/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
The following diagram illustrates the Microsoft Entra Verified ID architecture a
- To clone the repository that hosts the sample app, install [GIT](https://git-scm.com/downloads).
- [Visual Studio Code](https://code.visualstudio.com/Download), or similar code editor.
- [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).
-- Download [ngrok](https://ngrok.com/) and sign up for a free account.
+- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization, please read this [FAQ](verifiable-credentials-faq.md#i-can-not-use-ngrok-what-do-i-do).
- A mobile device with Microsoft Authenticator:
  - Android version 6.2206.3973 or later installed.
  - iOS version 6.6.2 or later installed.
In this step, you create the verified credential expert card by using Microsoft
1. For **Credential name**, enter **VerifiedCredentialExpert**. This name is used in the portal to identify your verifiable credentials. It's included as part of the verifiable credentials contract.
1. Copy the following JSON and paste it in the **Display definition** textbox
- ```json
- {
- "locale": "en-US",
- "card": {
- "title": "Verified Credential Expert",
- "issuedBy": "Microsoft",
- "backgroundColor": "#000000",
- "textColor": "#ffffff",
- "logo": {
- "uri": "https://didcustomerplayground.blob.core.windows.net/public/VerifiedCredentialExpert_icon.png",
- "description": "Verified Credential Expert Logo"
- },
- "description": "Use your verified credential to prove to anyone that you know all about verifiable credentials."
- },
- "consent": {
- "title": "Do you want to get your Verified Credential?",
- "instructions": "Sign in with your account to get your card."
- },
- "claims": [
- {
- "claim": "vc.credentialSubject.firstName",
- "label": "First name",
- "type": "String"
- },
- {
- "claim": "vc.credentialSubject.lastName",
- "label": "Last name",
- "type": "String"
- }
- ]
- }
- ```
+
+ ```json
+ {
+ "locale": "en-US",
+ "card": {
+ "title": "Verified Credential Expert",
+ "issuedBy": "Microsoft",
+ "backgroundColor": "#000000",
+ "textColor": "#ffffff",
+ "logo": {
+ "uri": "https://didcustomerplayground.blob.core.windows.net/public/VerifiedCredentialExpert_icon.png",
+ "description": "Verified Credential Expert Logo"
+ },
+ "description": "Use your verified credential to prove to anyone that you know all about verifiable credentials."
+ },
+ "consent": {
+ "title": "Do you want to get your Verified Credential?",
+ "instructions": "Sign in with your account to get your card."
+ },
+ "claims": [
+ {
+ "claim": "vc.credentialSubject.firstName",
+ "label": "First name",
+ "type": "String"
+ },
+ {
+ "claim": "vc.credentialSubject.lastName",
+ "label": "Last name",
+ "type": "String"
+ }
+ ]
+ }
+ ```
1. Copy the following JSON and paste it in the **Rules definition** textbox

    ```JSON
In this step, you create the verified credential expert card by using Microsoft
    }
    ```
- 1. Select **Create**.
+ 1. Select **Create**.
The following screenshot demonstrates how to create a new credential:
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
description: In this tutorial, you learn how to configure your tenant to support
-+ Previously updated : 06/27/2022 Last updated : 08/11/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
Last updated 06/27/2022
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-Microsoft Entra Verified ID safeguards your organization with an identity solution that's seamless and decentralized. The service allows you to issue and verify credentials. For issuers, Azure AD provides a service that they can customize and use to issue their own verifiable credentials. For verifiers, the service provides a free REST API that makes it easy to request and accept verifiable credentials in your apps and services.
+Microsoft Entra Verified ID is a decentralized identity solution that helps you safeguard your organization. The service allows you to issue and verify credentials. Issuers can use the Verified ID service to issue their own customized verifiable credentials. Verifiers can use the service's free REST API to easily request and accept verifiable credentials in their apps and services.
-In this tutorial, you learn how to configure your Azure AD tenant so it can use the verifiable credentials service.
+In this tutorial, you learn how to configure your Azure AD tenant to use the verifiable credentials service.
Specifically, you learn how to:
Specifically, you learn how to:
> - Set up the Verifiable Credentials service.
> - Register an application in Azure AD.
-The following diagram illustrates the Microsoft Entra Verified ID architecture and the component you configure.
+The following diagram illustrates the Verified ID architecture and the component you configure.
+
-![Diagram that illustrates the Microsoft Entra Verified ID architecture.](media/verifiable-credentials-configure-tenant/verifiable-credentials-architecture.png)
## Prerequisites
The following diagram illustrates the Microsoft Entra Verified ID architecture a
## Create a key vault
-[Azure Key Vault](../../key-vault/general/basic-concepts.md) is a cloud service that enables the secure storage and access of secrets and keys. The Verifiable
-Credentials service stores public and private keys in Azure Key Vault. These keys are used to sign and verify credentials.
+[Azure Key Vault](../../key-vault/general/basic-concepts.md) is a cloud service that enables the secure storage and access of secrets and keys. The Verified ID service stores public and private keys in Azure Key Vault. These keys are used to sign and verify credentials.
If you don't have an Azure Key Vault instance available, follow [these steps](/azure/key-vault/general/quick-create-portal) to create a key vault using the Azure portal. >[!NOTE]
->By default, the account that creates a vault is the only one with access. The Verifiable Credentials service needs access to the key vault. You must configure the key vault with an access policy that allows the account used during configuration to create and delete keys. The account used during configuration also requires permission to sign to create the domain binding for Verifiable Credentials. If you use the same account while testing, modify the default policy to grant the account sign permission, in addition to the default permissions granted to vault creators.
+>By default, the account that creates a vault is the only one with access. The Verified ID service needs access to the key vault. You must configure your key vault with access policies allowing the account used during configuration to create and delete keys. The account used during configuration also requires permissions to sign so that it can create the domain binding for Verified ID. If you use the same account while testing, modify the default policy to grant the account sign permission, in addition to the default permissions granted to vault creators.
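If you prefer to script the vault creation instead of using the portal, the following is a minimal Azure PowerShell sketch; the vault name, resource group, and region are placeholders, and it assumes the Az.KeyVault module is installed and you've already signed in with `Connect-AzAccount`.

```PowerShell
# Minimal sketch (names and region are placeholders) - create a key vault for Verified ID.
# Assumes the Az.KeyVault module is installed and Connect-AzAccount has been run.
New-AzKeyVault -Name "contoso-verifiedid-kv" -ResourceGroupName "verifiedid-rg" -Location "westeurope"
```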
### Set access policies for the key vault
-A Key Vault [access policy](../../key-vault/general/assign-access-policy.md) defines whether a specified security principal can perform operations on Key Vault secrets and keys. Set access policies in your key vault for both the Microsoft Entra Verified ID service administrator account, and for the Request Service API principal that you created.
+A Key Vault [access policy](../../key-vault/general/assign-access-policy.md) defines whether a specified security principal can perform operations on Key Vault secrets and keys. Set access policies in your key vault for both the Verified ID service administrator account, and for the Request Service API principal that you created.
After you create your key vault, Verifiable Credentials generates a set of keys used to provide message security. These keys are stored in Key Vault. You use a key set for signing, updating, and recovering verifiable credentials.
-### Set access policies for the Verifiable Credentials Admin user
+### Set access policies for the Verified ID Admin user
1. In the [Azure portal](https://portal.azure.com/), go to the key vault you use for this tutorial.
After you create your key vault, Verifiable Credentials generates a set of keys
1. For **Key permissions**, verify that the following permissions are selected: **Create**, **Delete**, and **Sign**. By default, **Create** and **Delete** are already enabled. **Sign** should be the only key permission you need to update.
- ![Screenshot that shows how to configure the admin access policy.](media/verifiable-credentials-configure-tenant/set-key-vault-admin-access-policy.png)
1. To save the changes, select **Save**.
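The preceding portal steps can also be scripted. A minimal Azure PowerShell sketch, assuming the vault name and user principal name below are placeholders for your own values:

```PowerShell
# Minimal sketch (vault and account names are placeholders) - grant the Verified ID admin
# account the Create, Delete, and Sign key permissions described in the preceding steps.
Set-AzKeyVaultAccessPolicy -VaultName "contoso-verifiedid-kv" `
    -UserPrincipalName "verifiedid-admin@contoso.com" `
    -PermissionsToKeys create,delete,sign
```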
-### Set access policies for the Verifiable Credentials Service Request service principal
+### Set access policies for the Verifiable credentials service request service principal
-The Verifiable Credentials Service Request is the Request Service API, and it needs access to Key Vault in order to sign issuance and presentation requests.
+The Verifiable credentials service request is the Request Service API, and it needs access to Key Vault in order to sign issuance and presentation requests.
1. Select **+ Add Access Policy** and select the service principal **Verifiable Credentials Service Request** with AppId **3db474b9-6a0c-4840-96ac-1fceb342124f**.
The Verifiable Credentials Service Request is the Request Service API, and it ne
1. To save the changes, select **Save**.
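As a scripted alternative to the preceding steps, here is a minimal Azure PowerShell sketch that grants the Request Service API principal the Get and Sign key permissions; the vault name is a placeholder, and the AppId is the one referenced in this article:

```PowerShell
# Minimal sketch (vault name is a placeholder) - grant the Verifiable Credentials Service Request
# principal access to the keys it needs to sign issuance and presentation requests.
Set-AzKeyVaultAccessPolicy -VaultName "contoso-verifiedid-kv" `
    -ServicePrincipalName "3db474b9-6a0c-4840-96ac-1fceb342124f" `
    -PermissionsToKeys get,sign
```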
-## Set up Verifiable Credentials
+## Set up Verified ID
-To set up Microsoft Entra Verified ID, follow these steps:
+To set up Verified ID, follow these steps:
1. In the [Azure portal](https://portal.azure.com/), search for *Verified ID*. Then, select **Verified ID**.
To set up Microsoft Entra Verified ID, follow these steps:
## Register an application in Azure AD
-Microsoft Entra Verified ID needs to get access tokens to issue and verify. To get access tokens, register a web application and grant API permission for the API Verifiable Credential Request Service that you set up in the previous step.
+ Verified ID needs to get access tokens to issue and verify credentials. To get access tokens, register a web application and grant API permission for the Verified ID Request Service API that you set up in the previous step.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your administrative account.
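The remaining registration steps aren't shown here. Once the application is registered, has a client secret, and has been granted the VerifiableCredential.Create.All application permission, a minimal sketch of requesting an access token with the client credentials flow looks like the following; the tenant ID, client ID, and client secret are placeholders, and the scope is the Request Service API scope referenced later in this document:

```PowerShell
# Minimal sketch (tenant ID, client ID, and client secret are placeholders) - request an
# access token for the Request Service API by using the client credentials flow.
$tenantId = "<your tenant ID>"
$body = @{
    grant_type    = "client_credentials"
    client_id     = "<your application (client) ID>"
    client_secret = "<your client secret>"
    scope         = "3db474b9-6a0c-4840-96ac-1fceb342124f/.default"
}
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body
$token.access_token
```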
active-directory Verifiable Credentials Configure Verifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-verifier.md
description: In this tutorial, you learn how to configure your tenant to verify
-+ Previously updated : 06/16/2022 Last updated : 08/16/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
In this article, you learn how to:
- If you want to clone the repository that hosts the sample app, install [Git](https://git-scm.com/downloads). - [Visual Studio Code](https://code.visualstudio.com/Download) or similar code editor. - [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).-- Download [ngrok](https://ngrok.com/) and sign up for a free account.
+- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization, please read this [FAQ](verifiable-credentials-faq.md#i-can-not-use-ngrok-what-do-i-do).
- A mobile device with Microsoft Authenticator: - Android version 6.2206.3973 or later installed. - iOS version 6.6.2 or later installed.
active-directory Verifiable Credentials Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-faq.md
Title: Frequently asked questions - Azure Verifiable Credentials description: Find answers to common questions about Verifiable Credentials -+ Previously updated : 06/02/2022 Last updated : 08/11/2022 # Customer intent: As a developer I am looking for information on how to enable my users to control their own information
This page contains commonly asked questions about Verifiable Credentials and Dec
### What is a DID?
-Decentralized Identifers (DIDs) are unique identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains this in further detail.
+Decentralized Identifiers (DIDs) are unique identifiers that can be used to secure access to resources, sign and verify credentials, and facilitate application data exchange. Unlike traditional usernames and email addresses, DIDs are owned and controlled by the entity itself (be it a person, device, or company). DIDs exist independently of any external organization or trusted intermediary. [The W3C Decentralized Identifier spec](https://www.w3.org/TR/did-core/) explains DIDs in further detail.
### Why do we need a DID?
Individuals owning and controlling their identities are able to exchange verifia
### What is a Verifiable Credential?
-Credentials are a part of our daily lives; driver's licenses are used to assert that we're capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. Verifiable Credentials provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable. [The W3C Verifiable Credentials spec](https://www.w3.org/TR/vc-data-model/) explains this in further detail.
+Credentials are a part of our daily lives; driver's licenses are used to assert that we're capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. Verifiable Credentials provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable. [The W3C Verifiable Credentials spec](https://www.w3.org/TR/vc-data-model/) explains verifiable credentials in further detail.
## Conceptual questions
Yes! The following repositories are the open-sourced components of our services.
There are no special licensing requirements to issue Verifiable credentials. All you need is an Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-### Updating the VC Service configuration
-The following instructions will take 15 mins to complete and are only required if you have been using the Entra Verified ID service prior to April 25, 2022. You are required to execute these steps to update the existing service principals in your tenant that run the verifiable credentials service. The following is an overview of the steps:
-1. Register new service principals for the Azure AD Verifiable Service
-1. Update the Key Vault access policies
-1. Update the access to your storage container
-1. Update configuration on your Apps using the Request API
-1. Cleanup configuration (after May 6, 2022)
-
-#### **1. Register new service principals for the Azure AD Verifiable Service**
-1. Run the following PowerShell commands. These commands install and import the Azure PowerShell module. For more information, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps#installation).
-
- ```azurepowershell
- if((get-module -listAvailable -name "az.accounts") -eq $null){install-module -name "az.accounts" -scope currentUser}
- if ((get-module -listAvailable -name "az.resources") -eq $null){install-module "az.resources" -scope currentUser}
- ```
-1. Run the following PowerShell command to connect to your Azure AD tenant. Replace ```<your tenant ID>``` with your [Azure AD tenant ID](../fundamentals/active-directory-how-to-find-tenant.md)
-
- ```azurepowershell
- connect-azaccount -tenantID <your tenant ID>
- ```
-1. Check if the following service principals have been added to your tenant by running the following commands:
-
- ```azurepowershell
- get-azADServicePrincipal -applicationID "bb2a64ee-5d29-4b07-a491-25806dc854d3"
- get-azADServicePrincipal -applicationID "3db474b9-6a0c-4840-96ac-1fceb342124f"
- ```
-
-1. If you don't get any results, run the commands below to create the new service principals. If the above command shows that one of the service principals is already in your tenant, you don't need to recreate it. If you try to add it by using the command below, you'll get an error saying the service principal already exists.
-
- ```azurepowershell
- new-azADServicePrincipal -applicationID "bb2a64ee-5d29-4b07-a491-25806dc854d3"
- new-azADServicePrincipal -applicationID "3db474b9-6a0c-4840-96ac-1fceb342124f"
- ```
-
- >[!NOTE]
- >The AppId ```bb2a64ee-5d29-4b07-a491-25806dc854d3``` and ```3db474b9-6a0c-4840-96ac-1fceb342124f``` refer to the new Verifiable Credentials service principals.
-
-#### **2. Update the Key Vault access policies**
-
-Add an access policy for the **Verifiable Credentials Service**.
-
->[!IMPORTANT]
-> At this time, do not remove any permissions!
-
-1. In the Azure portal, navigate to your key vault.
-1. Under **Settings**, select **Access policies**
-1. Select **+ Add Access Policy**
-1. Under **Key permissions**, select **Get** and **Sign**.
-1. In the **Select Service principal** section, search for Verifiable Credentials service by entering **bb2a64ee-5d29-4b07-a491-25806dc854d3**.
-1. Select **Add**.
-
-Add an access policy for the Verifiable Credentials Service Request.
-
-1. Select **+ Add Access Policy**
-1. Under **Key permissions**, select **Get** and **Sign**.
-1. In the **Select Service principal** section search for **3db474b9-6a0c-4840-96ac-1fceb342124f** which is the Verifiable Credentials Service Request part of Azure AD Free
-1. Select **Add**.
-1. Select **Save** to save your changes
-
-#### **3. Update the access to your storage container**
-
-We need to do this for the storage accounts used to store verifiable credentials rules and display files.
-
-1. Find the correct storage account and open it.
-1. From the list of containers, open the container that you are using for the Verifiable Credentials service.
-1. From the menu, select Access Control (IAM).
-1. Select + Add, and then select Add role assignment.
-1. In Add role assignment:
- 1. For the Role, select Storage Blob Data Reader. Select Next
- 1. For the Assign access to, select User, group, or service principal.
- 1. Then +Select members and search for Verifiable Credentials Service (make sure this is the exact name, since there are several similar service principals!) and hit Select
- 1. Select Review + assign
-
-#### **4. Update configuration on your Apps using the Request API**
-
-Grant the new service principal permissions to get access tokens
-
-1. In your application, select **API permissions** > **Add a permission**.
-1. Select **APIs my organization uses**.
-1. Search for **Verifiable Credentials Service Request** and select it. Make sure you aren't selecting the **Verifiable Credential Request Service**. Before proceeding, confirm that the **Application Client ID** is ```3db474b9-6a0c-4840-96ac-1fceb342124f```
-1. Choose **Application Permission**, and expand **VerifiableCredential.Create.All**.
-1. Select **Add permissions**.
-1. Select **Grant admin consent for** ```<your tenant name>```.
-
-Adjust the API scopes used in your application
-
-For the Request API the new scope for your application or Postman is now:
-
-```3db474b9-6a0c-4840-96ac-1fceb342124f/.default ```
### How do I reset the Entra Verified ID service?
Resetting requires that you opt out and opt back into the Entra Verified ID serv
1. Follow the [opt-out](how-to-opt-out.md) instructions. 1. Go over the Entra Verified ID [deployment steps](verifiable-credentials-configure-tenant.md) to reconfigure the service.
- 1. If you are in the European region, it's recommended that your Azure Key Vault and container are in the same European region otherwise you may experience some performance and latency issues. Create new instances of these services in the same EU region as needed.
-1. Finish [setting up](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials) your verifiable credentials service. You need to recreate your credentials.
+ 1. If you are in the European region, it's recommended that your Azure Key Vault and container are in the same European region; otherwise, you may experience some performance and latency issues. Create new instances of these services in the same EU region as needed.
+1. Finish [setting up](verifiable-credentials-configure-tenant.md#set-up-verified-id) your verifiable credentials service. You need to recreate your credentials.
1. If your tenant needs to be configured as an issuer, it's recommended that your storage account is in the same European region as your Verifiable Credentials service. 2. You also need to issue new credentials because your tenant now holds a new DID.
Yes, after reconfiguring your service, your tenant has a new DID use to issue an
No, at this point it isn't possible to keep your tenant's DID after you have opted out of the service.
+### I can not use ngrok, what do I do?
+
+The tutorials for deploying and running the [samples](verifiable-credentials-configure-issuer.md#prerequisites) describe the use of the `ngrok` tool as an application proxy. This tool is sometimes blocked by IT admins from being used in corporate networks. An alternative is to deploy the sample to [Azure AppServices](../../app-service/overview.md) and run it in the cloud. The following links help you deploy the respective sample to Azure AppServices. The Free pricing tier is sufficient for hosting the sample. For each tutorial, start by creating the Azure AppService instance, skip the step that creates the app because you already have one, and then continue the tutorial by deploying the sample.
+
+- Dotnet - [Publish to AppServices](../../app-service/quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vs#publish-your-web-app)
+- Node - [Deploy to AppServices](../../app-service/quickstart-nodejs.md?tabs=linux&pivots=development-environment-vscode#deploy-to-azure)
+- Java - [Deploy to AppServices](../../app-service/quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-maven#4deploy-the-app). You need to add the maven plugin for Azure AppServices to the sample.
+- Python - [Deploy using VSCode](../../app-service/quickstart-python.md?tabs=flask%2Cwindows%2Cazure-cli%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli#3deploy-your-application-code-to-azure)
+
+Regardless of which sample language you use, the sample picks up the Azure AppService hostname (https://something.azurewebsites.net/) and uses it as the public endpoint. You don't need to configure anything extra to make it work. If you make changes to the code or configuration, you need to redeploy the sample to Azure AppServices. Troubleshooting and debugging won't be as easy as running the sample on your local machine, where traces in the console window show you errors, but you can achieve almost the same result by using the [Log Stream](../../app-service/troubleshoot-diagnostic-logs.md#stream-logs).
+
## Next steps - [Customize your verifiable credentials](credential-design.md)
active-directory Verifiable Credentials Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-standards.md
Title: Microsoft Entra Verified ID-supported standards
description: This article outlines current and upcoming standards -+
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Title: What's new for Microsoft Entra Verified ID description: Recent updates for Microsoft Entra Verified ID -+ Previously updated : 07/29/2022 Last updated : 08/16/2022
This article lists the latest features, improvements, and changes in the Microso
Microsoft Entra Verified ID is now generally available (GA) as the new member of the Microsoft Entra portfolio! [read more](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-verified-id-now-generally-available/ba-p/3295506) ### Known issues -- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential will get a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by 08/20/22.
+- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential will get a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by August 20, 2022.
## July 2022
Microsoft Entra Verified ID is now generally available (GA) as the new member of
- Request Service API **[Error codes](error-codes.md)** have been **updated** - The **[Admin API](admin-api.md)** is made **public** and is documented. The Azure portal is using the Admin API and with this REST API you can automate the onboarding of your tenant and creation of credential contracts. - Find issuers and credentials to verify via the [The Microsoft Entra Verified ID Network](how-use-vcnetwork.md).-- For migrating your Azure Storage based credentials to become Managed Credentials there is a PowerShell script in the [GitHub samples repo](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/contractmigration/scripts/contractmigration) for the task.
+- For migrating your Azure Storage based credentials to become Managed Credentials there's a PowerShell script in the [GitHub samples repo](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/contractmigration/scripts/contractmigration) for the task.
- We also made the following updates to our Plan and design docs: - (updated) [architecture planning overview](introduction-to-verifiable-credentials-architecture.md).
Microsoft Entra Verified ID is now generally available (GA) as the new member of
## June 2022 -- We are adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default, trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials). VC Administrators can still choose to use ION when setting a tenant. If you want to use did:web instead of ION or viceversa, you will need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).
+- We are adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as a new, default trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verified-id). VC Administrators can still choose to use ION when setting up a tenant. If you want to use did:web instead of ION or vice versa, you'll need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service).
- We are rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform: - Introducing Managed Credentials: verifiable credentials that no longer use Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions. - Create Managed Credentials using the [new quickstart experience](how-to-use-quickstart.md).
Microsoft Entra Verified ID is now generally available (GA) as the new member of
## May 2022
-We are expanding our service to all Azure AD customers! Verifiable credentials are now available to everyone with an Azure AD subscription (Free and Premium). Existing tenants that configured the Verifiable Credentials service prior to May 4, 2022 must make a [small change](verifiable-credentials-faq.md#updating-the-vc-service-configuration) to avoid service disruptions.
+We are expanding our service to all Azure AD customers! Verifiable credentials are now available to everyone with an Azure AD subscription (Free and Premium). Existing tenants that configured the Verifiable Credentials service prior to May 4, 2022 must make a small change to avoid service disruptions.
## April 2022
-Starting next month, we are rolling out exciting changes to the subscription requirements for the Verifiable Credentials service. Administrators must perform a small configuration change before **May 4, 2022** to avoid service disruptions. Follow [these steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to apply the required configuration changes.
+Starting next month, we are rolling out exciting changes to the subscription requirements for the Verifiable Credentials service. Administrators must perform a small configuration change before **May 4, 2022** to avoid service disruptions.
+ >[!IMPORTANT]
-> If changes are not applied before **May 4, 2022**, you will experience errors on issuance and presentation for your application or service using the Microsoft Entra Verified ID Service. [Update service configuration instructions](verifiable-credentials-faq.md?#updating-the-vc-service-configuration).
+> If changes are not applied before **May 4, 2022**, you will experience errors on issuance and presentation for your application or service using the Microsoft Entra Verified ID Service.
## March 2022
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
aks Use Psa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-psa.md
Last updated 08/08/2022
Pod Security Admission enforces Pod Security Standards policies on pods running in a namespace. Pod Security Admission is enabled by default in AKS and is controlled by adding labels to a namespace. For more information about Pod Security Admission, see [Enforce Pod Security Standards with Namespace Labels][kubernetes-psa]. For more information about the Pod Security Standards used by Pod Security Admission, see [Pod Security Standards][kubernetes-pss].
+Pod Security Admission is a built-in policy solution for single-cluster implementations. If you are looking for enterprise-grade policy, then [Azure Policy](use-azure-policy.md) is a better choice.
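For example, a minimal sketch of enforcing a Pod Security Standard on a namespace by labeling it; the namespace name is a placeholder, and it assumes `kubectl` is already configured against your AKS cluster:

```PowerShell
# Minimal sketch (namespace name is a placeholder) - enforce the 'restricted' Pod Security
# Standard on a namespace. Requires kubectl to be configured for your AKS cluster.
kubectl label namespace my-app-namespace pod-security.kubernetes.io/enforce=restricted
```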
+ ## Before you begin - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
applied-ai-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/data-feeds-from-different-sources.md
Use this article to find the settings and requirements for connecting different
|[MongoDB](#mongodb) | Basic | |[MySQL](#mysql) | Basic | |[PostgreSQL](#pgsql) | Basic|
-|[Local files (CSV)](#csv) | Basic|
The following sections specify the parameters required for all authentication types within different data source scenarios.
The following sections specify the parameters required for all authentication ty
* **JSON format version**: Defines the data schema in the JSON files. Metrics Advisor supports the following versions. You can choose one to fill in the field:
- * **v1** (default value)
+ * **v1**
Only the metrics *Name* and *Value* are accepted. For example:
For more information, refer to the [tutorial on writing a valid query](tutorials
``` For more information, refer to the [tutorial on writing a valid query](tutorials/write-a-valid-query.md).
-## <span id="csv">Local files (CSV)</span>
-
-> [!NOTE]
-> This feature is only used for quick system evaluation focusing on anomaly detection. It only accepts static data from a local CSV, and performs anomaly detection on single time series data. For analyzing multi-dimensional metrics, including real-time data ingestion, anomaly notification, root cause analysis, and cross-metric incident analysis, use other supported data sources.
-
-**Requirements on data in CSV:**
-- Have at least one column, which represents measurements to be analyzed. For a better and quicker user experience, try a CSV file that contains two columns: a timestamp column and a metric column. The timestamp format should be as follows: `2021-03-30T00:00:00Z`, and the `seconds` part is best to be `:00Z`. The time granularity between every record should be the same.-- The timestamp column is optional. If there's no timestamp, Metrics Advisor will use timestamp starting from today (`00:00:00` Coordinated Universal Time). The service maps each measure in the row at a one-hour interval.-- There is no re-ordering or gap-filling happening during data ingestion. Make sure that your data in the CSV file is ordered by the timestamp ordering **ascending (ASC)**.
-
## Next steps * While you're waiting for your metric data to be ingested into the system, read about [how to manage data feed configurations](how-tos/manage-data-feeds.md).
-* When your metric data is ingested, you can [configure metrics and fine tune detection configuration](how-tos/configure-metrics.md).
+* When your metric data is ingested, you can [configure metrics and fine tune detection configuration](how-tos/configure-metrics.md).
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
azure-app-configuration Howto Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md
Title: Enable geo-replication (preview) description: Learn how to use Azure App Configuration geo replication to create, delete, and manage replicas of your configuration store. -+ ms.devlang: csharp Last updated 8/1/2022-+ #Customer intent: I want to be able to list, create, and delete the replicas of my configuration store.
# Enable geo-replication (Preview)
-This article covers replication of Azure App Configuration stores. You'll learn about how to create and delete a replica in your configuration store.
+This article covers replication of Azure App Configuration stores. You'll learn about how to create and delete a replica in your configuration store.
To learn more about the concept of geo-replication, see [Geo-replication in Azure App Configuration](./concept-geo-replication.md).
To learn more about the concept of geo-replication, see [Geo-replication in Azur
- An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet) - We assume you already have an App Configuration store. If you want to create one, [create an App Configuration store](quickstart-aspnet-core-app.md). - ## Create and list a replica
-To create a replica of your configuration store in the portal, follow the steps below.
+To create a replica of your configuration store in the portal, follow the steps below.
<!-- ### [Portal](#tab/azure-portal) --> 1. In your App Configuration store, under **Settings**, select **Geo-replication**.
-1. Under **Replica(s)**, select **Create**. Choose the location of your new replica in the dropdown, then assign the replica a name. This replica name must be unique.
+1. Under **Replica(s)**, select **Create**. Choose the location of your new replica in the dropdown, then assign the replica a name. This replica name must be unique.
-
- :::image type="content" source="./media/how-to-geo-replication-create-flow.png" alt-text="Screenshot of the Geo Replication button being highlighted as well as the create button for a replica.":::
-
+ :::image type="content" source="./media/how-to-geo-replication-create-flow.png" alt-text="Screenshot of the Geo Replication button being highlighted as well as the create button for a replica.":::
-1. Select **Create**.
-1. You should now see your new replica listed under Replica(s). Check that the status of the replica is "Succeeded", which indicates that it was created successfully.
+1. Select **Create**.
+1. You should now see your new replica listed under Replica(s). Check that the status of the replica is "Succeeded", which indicates that it was created successfully.
-
:::image type="content" source="media/how-to-geo-replication-created-replica-successfully.png" alt-text="Screenshot of the list of replicas that have been created for the configuration store."::: - <!-- ### [Azure CLI](#tab/azure-cli) 1. In the CLI, run the following code to create a replica of your configuration store.
To create a replica of your configuration store in the portal, follow the steps
``` --> - ## Delete a replica
-To delete a replica in the portal, follow the steps below.
+To delete a replica in the portal, follow the steps below.
<!-- ### [Portal](#tab/azure-portal) --> 1. In your App Configuration store, under **Settings**, select **Geo-replication**.
-1. Under **Replica(s)**, select the **...** to the right of the replica you want to delete. Select **Delete** from the dropdown.
+1. Under **Replica(s)**, select the **...** to the right of the replica you want to delete. Select **Delete** from the dropdown.
- :::image type="content" source="./media/how-to-geo-replication-delete-flow.png" alt-text=" Screenshot showing the three dots on the right of the replica being selected, showing you the delete option.":::
-
+ :::image type="content" source="./media/how-to-geo-replication-delete-flow.png" alt-text=" Screenshot showing the three dots on the right of the replica being selected, showing you the delete option.":::
-1. Verify the name of the replica to be deleted and select **OK** to confirm.
-1. Once the process is complete, check the list of replicas that the correct replica has been deleted.
+1. Verify the name of the replica to be deleted and select **OK** to confirm.
+1. Once the process is complete, check the list of replicas that the correct replica has been deleted.
<!-- ### [Azure CLI](#tab/azure-cli)
To delete a replica in the portal, follow the steps below.
--> - ## Next steps+ > [!div class="nextstepaction"]
-> [Geo-replication concept](./concept-soft-delete.md)
+> [Geo-replication concept](./concept-geo-replication.md)
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Here's a list of troubleshooting guides for common issues:
* [ModuleNotFoundError and ImportError](recover-python-functions.md#troubleshoot-modulenotfounderror) * [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc)
+* [Troubleshoot errors with Protocol Buffers](recover-python-functions.md#troubleshoot-errors-with-protocol-buffers)
All known issues and feature requests are tracked through the [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md
If your function app is using the popular ODBC database driver [pyodbc](https://
+## Troubleshoot errors with Protocol Buffers
+
+Version 4.x.x of the Protocol Buffers (protobuf) package introduces breaking changes. Because the Python worker process for Azure Functions relies on v3.x.x of this package, pinning your function app to use v4.x.x can break your app. At this time, you should also avoid using any libraries that themselves require protobuf v4.x.x.
+
+Example error logs:
+```bash
+ [Information] File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/protos/shared/NullableTypes_pb2.py", line 38, in <module>
+ [Information] _descriptor.FieldDescriptor(
+ [Information] File "/home/site/wwwroot/.python_packages/lib/site-packages/google/protobuf/descriptor.py", line 560, in __new__
+ [Information] _message.Message._CheckCalledFromGeneratedFile()
+ [Error] TypeError: Descriptors cannot not be created directly.
+ [Information] If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
+ [Information] If you cannot immediately regenerate your protos, some other possible workarounds are:
+ [Information] 1. Downgrade the protobuf package to 3.20.x or lower.
+ [Information] 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
+ [Information] More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
+```
+There are two ways to mitigate this issue.
+++ Set the application setting [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies-preview) to a value of `1`. ++ Pin protobuf to a non-4.x.x version, as in the following example:
+ ```
+ protobuf >= 3.19.3, == 3.*
+ ```
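For the first mitigation, a minimal sketch of setting the application setting with the Az.Functions PowerShell module; the function app and resource group names are placeholders:

```PowerShell
# Minimal sketch (app and resource group names are placeholders) - set the
# PYTHON_ISOLATE_WORKER_DEPENDENCIES application setting on the function app.
Update-AzFunctionAppSetting -Name "my-python-function-app" `
    -ResourceGroupName "my-resource-group" `
    -AppSetting @{ "PYTHON_ISOLATE_WORKER_DEPENDENCIES" = "1" }
```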
+++ ## Next steps If you're unable to resolve your issue, please report this to the Functions team:
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| Service | FedRAMP High | DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 | | - |::|:-:|:-:|:-:|:-:| | [Advisor](../../advisor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [AI Builder](/ai-builder/) | &#x2705; | &#x2705; | | | |
+| [AI Builder](/ai-builder/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Analysis Services](../../analysis-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [API Management](../../api-management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [App Configuration](../../azure-app-configuration/index.yml) | &#x2705; | &#x2705; | &#x2705; |&#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; | | | |
-| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | | | |
+| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Azure Cosmos DB](../../cosmos-db/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure CXP Nomination Portal](https://cxp.azure.com/nominationportal/nominationform/fasttrack)| &#x2705; | &#x2705; | | | |
+| [Azure CXP Nomination Portal](https://cxp.azure.com/nominationportal/nominationform/fasttrack) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Database for MariaDB](../../mariadb/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Database for MySQL](../../mysql/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Database for PostgreSQL](../../postgresql/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; | | | |
+| [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Video Indexer](../../azure-video-indexer/index.yml) | &#x2705; | &#x2705; | | | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cognitive | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Cognitive
-| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | | | |
+| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Cognitive | [Cognitive | [Cognitive
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dynamics 365 Customer Voice](/dynamics365/customer-voice/about) (formerly Forms Pro) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Field Service](/dynamics365/field-service/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
-| [Dynamics 365 Finance](/dynamics365/finance/) | &#x2705; | &#x2705; | | | |
+| [Dynamics 365 Finance](/dynamics365/finance/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Dynamics 365 Sales](/dynamics365/sales/help-hub) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Dynamics 365 Supply Chain Management](/dynamics365/supply-chain/) | &#x2705; | &#x2705; | | | |
+| [Dynamics 365 Supply Chain Management](/dynamics365/supply-chain/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Event Grid](../../event-grid/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Event Hubs](../../event-hubs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [ExpressRoute](../../expressroute/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [File Sync](../../storage/file-sync/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Firewall](../../firewall/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; | | | |
+| [Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Query Online](/power-query/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Resource Mover](../../resource-mover/index.yml) | &#x2705; | &#x2705; | | | |
-| [Route Server](../../route-server/index.yml) | &#x2705; | &#x2705; | | | |
+| [Resource Mover](../../resource-mover/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Route Server](../../route-server/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Scheduler](../../scheduler/index.yml) (replaced by [Logic Apps](../../logic-apps/index.yml)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Service Bus](../../service-bus-messaging/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Service Fabric](../../service-fabric/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [StorSimple](../../storsimple/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Stream Analytics](../../stream-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Synapse Analytics](../../synapse-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | | |
| [Traffic Manager](../../traffic-manager/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To send data to Log Analytics, create the data collection rule in the **same reg
1. From the **Monitor** menu, select **Data Collection Rules**. 1. Select **Create** to create a new Data Collection Rule and associations.
- [![Screenshot showing the Create button on the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
+ [ ![Screenshot showing the Create button on the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
1. Provide a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**.
To send data to Log Analytics, create the data collection rule in the **same reg
**Platform Type** specifies the type of resources this rule can apply to. Custom allows for both Windows and Linux types.
- [![Screenshot showing the Basics tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
+ [ ![Screenshot showing the Basics tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
1. On the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) to which to associate the data collection rule. The portal will install Azure Monitor Agent on resources that don't already have it installed, and will also enable Azure Managed Identity.
To send data to Log Analytics, create the data collection rule in the **same reg
If you need network isolation using private links, select existing endpoints from the same region for the respective resources, or [create a new endpoint](../essentials/data-collection-endpoint-overview.md).
- [!Screenshot showing the Resources tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
+ [ ![Screenshot showing the Resources tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination. 1. Select a **Data source type**. 1. Select which data you want to collect. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs and severity levels.
- [!Screenshot of Azure portal form to select basic performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
+ [ ![Screenshot of Azure portal form to select basic performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
1. Select **Custom** to collect logs and performance counters that are not [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to [filter events using XPath queries](#filter-events-using-xpath-queries). You can then specify an [XPath](https://www.w3schools.com/xml/xpath_syntax.asp) to collect any specific values. See [Sample DCR](data-collection-rule-sample-agent.md) for an example.
- [!Screenshot of Azure portal form to select custom performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
+ [ ![Screenshot of Azure portal form to select custom performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types - for instance multiple Log Analytics workspaces (known as "multi-homing"). You can send Windows event and Syslog data sources to Azure Monitor Logs only. You can send performance counters to both Azure Monitor Metrics and Azure Monitor Logs.
- [!Screenshot of Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
+ [ ![Screenshot of Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
1. Select **Add Data Source** and then **Review + create** to review the details of the data collection rule and association with the set of virtual machines. 1. Select **Create** to create the data collection rule.
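If you later want to associate the same rule with additional machines without repeating the portal steps, the following is a minimal sketch with the Az.Monitor module; the rule, resource group, and virtual machine names (and subscription ID) are placeholders, and it assumes the module's data collection rule cmdlets are available in your environment:

```PowerShell
# Minimal sketch (names and IDs are placeholders) - associate an existing data collection rule
# with a virtual machine by using the Az.Monitor module.
$dcr = Get-AzDataCollectionRule -ResourceGroupName "my-resource-group" -RuleName "my-dcr"
New-AzDataCollectionRuleAssociation -AssociationName "my-dcr-association" `
    -RuleId $dcr.Id `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
```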
In Windows, you can use Event Viewer to extract XPath queries as shown below.
When you paste the XPath query into the field on the **Add data source** screen (step 5 in the picture below), you must append the log type category followed by '!'.
-[!Screenshot of steps in Azure portal showing the steps to create an XPath query in the Windows Event Viewer.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
+[ ![Screenshot of steps in Azure portal showing the steps to create an XPath query in the Windows Event Viewer.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.
Examples of filtering events using a custom XPath:
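As an illustration (not the full list from the original article), the portal expects the channel name, an exclamation mark, and then the XPath. You can sanity-check the XPath part locally with PowerShell before pasting it in:

```PowerShell
# Minimal sketch - test an XPath filter locally before pasting it into the portal field as
#   Security!*[System[(EventID=4624)]]
# Reading the Security log typically requires an elevated PowerShell session.
Get-WinEvent -LogName 'Security' -FilterXPath '*[System[(EventID=4624)]]' -MaxEvents 5
```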
- [Collect text logs using Azure Monitor agent.](data-collection-text-log.md) - Learn more about the [Azure Monitor Agent](azure-monitor-agent-overview.md).-- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
+- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
azure-monitor Alerts Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-query.md
Title: Optimizing log alert queries | Microsoft Docs
-description: Recommendations for writing efficient alert queries
+ Title: Optimize log alert queries | Microsoft Docs
+description: This article gives recommendations for writing efficient alert queries.
Last updated 2/23/2022
-# Optimizing log alert queries
-This article describes how to write and convert [Log Alert](./alerts-unified-log.md) queries to achieve optimal performance. Optimized queries reduce latency and load of alerts, which run frequently.
+# Optimize log alert queries
-## How to start writing an alert log query
+This article describes how to write and convert [log alert](./alerts-unified-log.md) queries to achieve optimal performance. Optimized queries reduce latency and load of alerts, which run frequently.
-Alert queries start from [querying the log data in Log Analytics](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal) that indicates the issue. You can use the [alert query examples topic](../logs/queries.md) to understand what you can discover. You may also [get started on writing your own query](../logs/log-analytics-tutorial.md).
+## Start writing an alert log query
+
+Alert queries start from [querying the log data in Log Analytics](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal) that indicates the issue. To understand what you can discover, see [Using queries in Azure Monitor Log Analytics](../logs/queries.md). You can also [get started on writing your own query](../logs/log-analytics-tutorial.md).
### Queries that indicate the issue and not the alert
-The alert flow was built to transform the results that indicate the issue to an alert. For example, in a case of a query like:
+The alert flow was built to transform the results that indicate the issue to an alert. For example, in the case of a query like:
``` Kusto
SecurityEvent
| where EventID == 4624
```
-If the intent of the user is to alert, when this event type happens, the alerting logic appends `count` to the query. The query that will run will be:
+If the intent of the user is to alert, when this event type happens, the alerting logic appends `count` to the query. The query that runs will be:
``` Kusto
SecurityEvent
| count
```
-There's no need to add alerting logic to the query and doing that may even cause issues. In the above example, if you include `count` in your query, it will always result in the value 1, since the alert service will do `count` of `count`.
+There's no need to add alerting logic to the query, and doing that might even cause issues. In the preceding example, if you include `count` in your query, it will always result in the value 1, because the alert service will do `count` of `count`.
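For instance, a minimal sketch of the anti-pattern: if the saved query already ends in `count`, the service's appended `count` then counts that single-row result, so the alert always evaluates to 1.

``` Kusto
// Avoid: the alert service appends its own count, producing a count of count that is always 1
SecurityEvent
| where EventID == 4624
| count
```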
-### Avoid `limit` and `take` operators
+### Avoid limit and take operators
-Using `limit` and `take` in queries can increase latency and load of alerts as the results aren't consistent over time. It's preferred you use it only if needed.
+Using `limit` and `take` in queries can increase latency and load of alerts because the results aren't consistent over time. Use them only if needed.
## Log query constraints

[Log queries in Azure Monitor](../logs/log-query-overview.md) start with either a table, [`search`](/azure/kusto/query/searchoperator), or [`union`](/azure/kusto/query/unionoperator) operator.
-Queries for log alert rules should always start with a table to define a clear scope, which improves both query performance and the relevance of the results. Queries in alert rules run frequently, so using `search` and `union` can result in excessive overhead adding latency to the alert, as it requires scanning across multiple tables. These operators also reduce the ability of the alerting service to optimize the query.
+Queries for log alert rules should always start with a table to define a clear scope, which improves query performance and the relevance of the results. Queries in alert rules run frequently. Using `search` and `union` can result in excessive overhead that adds latency to the alert because it requires scanning across multiple tables. These operators also reduce the ability of the alerting service to optimize the query.
We don't support creating or modifying log alert rules that use `search` or `union` operators, except for cross-resource queries.
-For example, the following alerting query is scoped to the _SecurityEvent_ table and searches for specific event ID. It's the only table that the query must process.
+For example, the following alerting query is scoped to the _SecurityEvent_ table and searches for a specific event ID. It's the only table that the query must process.
``` Kusto
SecurityEvent
| where EventID == 4624
```
-Log alert rules using [cross-resource queries](../logs/cross-workspace-query.md) are not affected by this change since cross-resource queries use a type of `union`, which limits the query scope to specific resources. The following example would be valid log alert query:
+Log alert rules using [cross-resource queries](../logs/cross-workspace-query.md) aren't affected by this change because cross-resource queries use a type of `union`, which limits the query scope to specific resources. The following example would be a valid log alert query:
```Kusto union
workspace('Contoso-workspace1').Perf
``` >[!NOTE]
-> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log alerts, you can learn about switching [here](../alerts/alerts-log-api-switch.md).
+> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log alerts, see [Upgrade legacy rules management to the current Azure Monitor Log Alerts API](../alerts/alerts-log-api-switch.md) to learn about switching.
## Examples
-The following examples include log queries that use `search` and `union` and provide steps you can use to modify these queries for use in alert rules.
+
+The following examples include log queries that use `search` and `union`. They provide steps you can use to modify these queries for use in alert rules.
### Example 1
-You want to create a log alert rule using the following query that retrieves performance information using `search`:
+
+You want to create a log alert rule by using the following query that retrieves performance information using `search`:
``` Kusto search *
search *
| where CounterValue < 30 ```
-To modify this query, start by using the following query to identify the table that the properties belong to:
-
-``` Kusto
-search *
-| where CounterName == '% Free Space'
-| summarize by $table
-```
+1. To modify this query, start by using the following query to identify the table that the properties belong to:
+
+ ``` Kusto
+ search *
+ | where CounterName == '% Free Space'
+ | summarize by $table
+ ```
-The result of this query would show that the _CounterName_ property came from the _Perf_ table.
+ The result of this query would show that the _CounterName_ property came from the _Perf_ table.
-You can use this result to create the following query that you would use for the alert rule:
+1. Use this result to create the following query that you would use for the alert rule:
-``` Kusto
-Perf
-| where CounterName == '% Free Space'
-| where CounterValue < 30
-```
+ ``` Kusto
+ Perf
+ | where CounterName == '% Free Space'
+ | where CounterValue < 30
+ ```
### Example 2
-You want to create a log alert rule using the following query that retrieves performance information using `search`:
+
+You want to create a log alert rule by using the following query that retrieves performance information using `search`:
``` Kusto search ObjectName =="Memory" and CounterName=="% Committed Bytes In Use"
search ObjectName =="Memory" and CounterName=="% Committed Bytes In Use"
| where Avg_Memory_Usage between(90 .. 95) ```
-To modify this query, start by using the following query to identify the table that the properties belong to:
+1. To modify this query, start by using the following query to identify the table that the properties belong to:
+
+ ``` Kusto
+ search ObjectName=="Memory" and CounterName=="% Committed Bytes In Use"
+ | summarize by $table
+ ```
+
+ The result of this query would show that the _ObjectName_ and _CounterName_ properties came from the _Perf_ table.
-``` Kusto
-search ObjectName=="Memory" and CounterName=="% Committed Bytes In Use"
-| summarize by $table
-```
-
-The result of this query would show that the _ObjectName_ and _CounterName_ property came from the _Perf_ table.
+1. Use this result to create the following query that you would use for the alert rule:
-You can use this result to create the following query that you would use for the alert rule:
-
-``` Kusto
-Perf
-| where ObjectName =="Memory" and CounterName=="% Committed Bytes In Use"
-| summarize Avg_Memory_Usage=avg(CounterValue) by Computer
-| where Avg_Memory_Usage between(90 .. 95)
-```
+ ``` Kusto
+ Perf
+ | where ObjectName =="Memory" and CounterName=="% Committed Bytes In Use"
+ | summarize Avg_Memory_Usage=avg(CounterValue) by Computer
+ | where Avg_Memory_Usage between(90 .. 95)
+ ```
### Example 3
-You want to create a log alert rule using the following query that uses both `search` and `union` to retrieve performance information:
+You want to create a log alert rule by using the following query that uses both `search` and `union` to retrieve performance information:
``` Kusto search (ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceName == "_Total")
search (ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceN
| summarize Avg_Idle_Time = avg(CounterValue) by Computer ```
-To modify this query, start by using the following query to identify the table that the properties in the first part of the query belong to:
+1. To modify this query, start by using the following query to identify the table that the properties in the first part of the query belong to:
-``` Kusto
-search (ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceName == "_Total")
-| summarize by $table
-```
+ ``` Kusto
+ search (ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceName == "_Total")
+ | summarize by $table
+ ```
-The result of this query would show that all these properties came from the _Perf_ table.
+ The result of this query would show that all these properties came from the _Perf_ table.
-Now use `union` with `withsource` command to identify which source table has contributed each row.
+1. Use `union` with the `withsource` command to identify which source table has contributed each row:
-``` Kusto
-union withsource=table *
-| where CounterName == "% Processor Utility"
-| summarize by table
-```
+ ``` Kusto
+ union withsource=table *
+ | where CounterName == "% Processor Utility"
+ | summarize by table
+ ```
-The result of this query would show that these properties also came from the _Perf_ table.
+ The result of this query would show that these properties also came from the _Perf_ table.
-You can use these results to create the following query that you would use for the alert rule:
+1. Use these results to create the following query that you would use for the alert rule:
-``` Kusto
-Perf
-| where ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceName == "_Total"
-| where Computer !in (
- (Perf
- | where CounterName == "% Processor Utility"
- | summarize by Computer))
-| summarize Avg_Idle_Time = avg(CounterValue) by Computer
-```
+ ``` Kusto
+ Perf
+ | where ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceName == "_Total"
+ | where Computer !in (
+ (Perf
+ | where CounterName == "% Processor Utility"
+ | summarize by Computer))
+ | summarize Avg_Idle_Time = avg(CounterValue) by Computer
+ ```
### Example 4
-You want to create a log alert rule using the following query that joins the results of two `search` queries:
+
+You want to create a log alert rule by using the following query that joins the results of two `search` queries:
```Kusto search Type == 'SecurityEvent' and EventID == '4625'
search Type == 'SecurityEvent' and EventID == '4625'
) on Hour ```
-To modify the query, start by using the following query to identify the table that contains the properties in the left side of the join:
-
-``` Kusto
-search Type == 'SecurityEvent' and EventID == '4625'
-| summarize by $table
-```
+1. To modify the query, start by using the following query to identify the table that contains the properties in the left side of the join:
+
+ ``` Kusto
+ search Type == 'SecurityEvent' and EventID == '4625'
+ | summarize by $table
+ ```
-The result indicates that the properties in the left side of the join belong to _SecurityEvent_ table.
+ The result indicates that the properties in the left side of the join belong to the _SecurityEvent_ table.
-Now use the following query to identify the table that contains the properties in the right side of the join:
+1. Use the following query to identify the table that contains the properties in the right side of the join:
-``` Kusto
-search in (Heartbeat) OSType == 'Windows'
-| summarize by $table
-```
+ ``` Kusto
+ search in (Heartbeat) OSType == 'Windows'
+ | summarize by $table
+ ```
-The result indicates that the properties in the right side of the join belong to _Heartbeat_ table.
-
-You can use these results to create the following query that you would use for the alert rule:
-
-``` Kusto
-SecurityEvent
-| where EventID == '4625'
-| summarize by Computer, Hour = bin(TimeGenerated, 1h)
-| join kind = leftouter (
- Heartbeat
- | where OSType == 'Windows'
- | summarize arg_max(TimeGenerated, Computer) by Computer , Hour = bin(TimeGenerated, 1h)
- | project Hour , Computer
-) on Hour
-```
+ The result indicates that the properties in the right side of the join belong to the _Heartbeat_ table.
+
+1. Use these results to create the following query that you would use for the alert rule:
+
+ ``` Kusto
+ SecurityEvent
+ | where EventID == '4625'
+ | summarize by Computer, Hour = bin(TimeGenerated, 1h)
+ | join kind = leftouter (
+ Heartbeat
+ | where OSType == 'Windows'
+ | summarize arg_max(TimeGenerated, Computer) by Computer , Hour = bin(TimeGenerated, 1h)
+ | project Hour , Computer
+ ) on Hour
+ ```
## Next steps

- Learn about [log alerts](alerts-log.md) in Azure Monitor.
- Learn about [log queries](../logs/log-query-overview.md).
azure-monitor Custom Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-endpoints.md
# Application Insights overriding default endpoints
+> [!WARNING]
+> Endpoint modification is not recommended. [Transition to connection strings](migrate-from-instrumentation-keys-to-connection-strings.md#migrate-from-application-insights-instrumentation-keys-to-connection-strings) to simplify configuration and eliminate the need for endpoint modification.
+ To send data from Application Insights to certain regions, you'll need to override the default endpoint addresses. Each SDK requires slightly different modifications, all of which are described in this article. These changes require adjusting the sample code and replacing the placeholder values for `QuickPulse_Endpoint_Address`, `TelemetryChannel_Endpoint_Address`, and `Profile_Query_Endpoint_address` with the actual endpoint addresses for your specific region. The end of this article contains links to the endpoint addresses for regions where this configuration is required. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
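For comparison, a connection string already carries the regional endpoint addresses, which is why the warning above recommends migrating. The string below is an illustrative placeholder only; take the real value from your own Application Insights resource rather than constructing it by hand.

```
InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/;LiveEndpoint=https://<region>.livediagnostics.monitor.azure.com/
```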
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
To configure your Azure Storage account to use CMK with Azure Key Vault, use the
## Link storage accounts to your Log Analytics workspace

> [!NOTE]
-> You can connect up to five storage accounts for the ingestion of Custom logs & IIS logs, and one storage account for Saved queries and Saved log alert queries (each).
+> - If you link a storage account for queries or for log alerts, existing queries will be removed from the workspace. Copy the saved searches and log alerts that you need before this configuration. You can find directions for moving saved queries and log alerts in the [workspace move procedure](./move-workspace-region.md).
+> - You can connect up to five storage accounts for the ingestion of Custom logs & IIS logs, and one storage account for Saved queries and Saved log alert queries (each).
### Using the Azure portal

On the Azure portal, open your workspace's menu and select *Linked storage accounts*. A blade will open, showing the linked storage accounts by the use cases mentioned above (Ingestion over Private Link, applying CMK to saved queries or to alerts).
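If you prefer scripting over the portal, linking can also be done from the command line. The following Azure CLI sketch assumes the `az monitor log-analytics workspace linked-storage create` command and the parameter names shown; verify them against your installed CLI version before use.

```azurecli
# Link a storage account to a workspace for saved queries (assumed parameters; verify before use)
az monitor log-analytics workspace linked-storage create \
  --resource-group MyResourceGroup \
  --workspace-name MyWorkspace \
  --type Query \
  --storage-accounts MyStorageAccount
```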
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
* The Azure NetApp Files AD connection admin account must have the following properties:
  * It must be an AD DS domain user account in the same domain where the Azure NetApp Files machine accounts are created.
  * It must have the permission to create machine accounts (for example, AD domain join) in the AD DS organizational unit path specified in the **Organizational unit path option** of the AD connection.
- * It cannot be a [Group Managed Service Account](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview.md).
+ * It cannot be a [Group Managed Service Account](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview).
* The AD connection admin account supports DES, Kerberos AES-128, and Kerberos AES-256 encryption types for authentication with AD DS for Azure NetApp Files machine account creation (for example, AD domain join operations).
azure-resource-manager Microsoft Solutions Resourceselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-solutions-resourceselector.md
Title: ResourceSelector UI element description: Describes the Microsoft.Solutions.ResourceSelector UI element for Azure portal. Used for getting a list of existing resources. -- Previously updated : 07/13/2020 -+ Last updated : 08/16/2022 # Microsoft.Solutions.ResourceSelector UI element
-ResourceSelector lets users select an existing resource from a subscription.
+The `ResourceSelector` user-interface (UI) element lets users select an existing Azure resource from a subscription. You specify the resource provider namespace and resource type, like `Microsoft.Storage/storageAccounts` in the element's JSON. You can use the element to filter the list by subscription or location. From the element's UI, to search within the list's contents, you can type a filter like resource group name, resource name, or a partial name.
## UI sample
-![Microsoft.Solutions.ResourceSelector](./media/managed-application-elements/microsoft-solutions-resourceselector.png)
+In this example, the element's location is set to `all`. The list shows all storage accounts in the subscription. You can use the filter box to search within the list.
+++
+In this example, the element's location is set to `onBasics`. The list shows storage accounts that exist in the location that was selected on the **Basics** tab. You can use the filter box to search within the list.
++
+When you use the element to restrict the subscription to `onBasics`, the UI doesn't show the subscription name in the list. You can use the filter box to search within the list.
## Schema

```json
{
- "name": "storageSelector",
- "type": "Microsoft.Solutions.ResourceSelector",
- "label": "Select storage accounts",
- "resourceType": "Microsoft.Storage/storageAccounts",
- "options": {
- "filter": {
- "subscription": "onBasics",
- "location": "onBasics"
- }
+ "name": "storageSelector",
+ "type": "Microsoft.Solutions.ResourceSelector",
+ "label": "Select storage accounts",
+ "resourceType": "Microsoft.Storage/storageAccounts",
+ "options": {
+ "filter": {
+ "subscription": "onBasics",
+ "location": "onBasics"
}
+ }
}
```

## Sample output

```json
-"name": "{resource-name}",
"id": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/{resource-provider-namespace}/{resource-type}/{resource-name}",
-"location": "{deployed-location}"
+"location": "{deployed-location}",
+"name": "{resource-name}"
``` ## Remarks
-In the `resourceType` property, provide the resource provider namespace and resource type name for the resource you wish to show in the list.
-
-The `filter` property restricts the available options for the resources. You can restrict the results by location or subscription. To show only resources that match the selection in basics, use `onBasics`. To show all resource, use `all`. The default value is `all`.
+- In the `resourceType` property, provide the resource provider namespace and resource type name for the resource you wish to show in the list. For more information, see the [resource providers](/azure/templates/) reference documentation.
+- The `filter` property restricts the available options for the resources. You can restrict the results by location or subscription.
+ - `all`: Shows all resources and is the default value.
+ - `onBasics`: Shows only resources that match the selection on the **Basics** tab.
+ - If you omit the `filter` property from the _createUiDefinition.json_ file, all resources for the specified resource type are shown in the list.
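Though not shown in this article, the element's output is typically consumed in the `outputs` section of _createUiDefinition.json_. A minimal sketch, assuming the element sits in a hypothetical step named `resourceDetails`:

```json
{
  "outputs": {
    "storageAccountId": "[steps('resourceDetails').storageSelector.id]",
    "storageAccountName": "[steps('resourceDetails').storageSelector.name]"
  }
}
```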
## Next steps
-* For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).
-* For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).
+- For an introduction to creating UI definitions, see [CreateUiDefinition.json for Azure managed application's create experience](create-uidefinition-overview.md).
+- For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
azure-signalr Signalr Tutorial Authenticate Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-authenticate-azure-functions.md
description: In this tutorial, you learn how to authenticate Azure SignalR Servi
Previously updated : 03/01/2019 Last updated : 08/17/2022 ms.devlang: javascript
The web app also requires an HTTP API to send chat messages. You will create an
## Create and run the chat client web user interface
-The chat application's UI is a simple single page application (SPA) created with the Vue JavaScript framework. It will be hosted separately from the function app. Locally, you will run the web interface using the Live Server VS Code extension.
+The chat application's UI is a simple single page application (SPA) created with the Vue JavaScript framework using [ASP.NET Core SignalR JavaScript client](/aspnet/core/signalr/javascript-client). It will be hosted separately from the function app. Locally, you will run the web interface using the Live Server VS Code extension.
1. In VS Code, create a new folder named **content** at the root of the main project folder.
In this tutorial, you learned how to use Azure Functions with Azure SignalR Serv
> [!div class="nextstepaction"] > [Build Real-time Apps with Azure Functions](signalr-concept-azure-functions.md)
-[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
+[Having issues? Let us know.](https://aka.ms/asrs/qsauth)
azure-vmware Enable Managed Snat For Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-managed-snat-for-workloads.md
With this capability, you:
## Reference architecture The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge.
-## Configure Outbound Internet access using Managed SNAT in the Azure portal
+## Configure Outbound Internet access using Managed SNAT in the Azure port
1. Log in to the Azure portal and then search for and select **Azure VMware Solution**.
1. Select the Azure VMware Solution private cloud.
1. In the left navigation, under **Workload Networking**, select **Internet Connectivity**.
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
description: Learn how to deploy Bastion using settings that you specify - Azure
Previously updated : 08/15/2022 Last updated : 08/17/2022
If you don't have an Azure subscription, create a [free account](https://azure
* A [virtual network](../virtual-network/quick-create-portal.md). This will be the VNet to which you deploy Bastion.
* A virtual machine in the virtual network. This VM isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later in this tutorial via Bastion. If you don't have a VM, create one using [Quickstart: Create a VM](../virtual-machines/windows/quick-create-portal.md).
* **Required VM roles:**
- * Reader role on the virtual machine.
- * Reader role on the NIC with private IP of the virtual machine.
+
+ * Reader role on the virtual machine.
+ * Reader role on the NIC with private IP of the virtual machine.
* **Required inbound ports:**
- * For Windows VMs - RDP (3389)
- * For Linux VMs - SSH (22)
+
+ * For Windows VMs - RDP (3389)
+ * For Linux VMs - SSH (22)
> [!NOTE]
> The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
1. On the page for your virtual network, in the left pane, select **Bastion** to open the **Bastion** page.
-1. On the Bastion page, select **I want to configure Azure Bastion on my own** to configure manually. This lets you configure specific additional settings when deploying Bastion to your VNet.
+1. On the Bastion page, select **Configure manually**. This lets you configure specific additional settings when deploying Bastion to your VNet.
- :::image type="content" source="./media/tutorial-create-host-portal/configure-manually.png" alt-text="Screenshot of Bastion page showing configure bastion on my own." lightbox="./media/tutorial-create-host-portal/configure-manually.png":::
+ :::image type="content" source="./media/tutorial-create-host-portal/manual-configuration.png" alt-text="Screenshot of Bastion page showing configure bastion on my own." lightbox="./media/tutorial-create-host-portal/manual-configuration.png":::
1. On the **Create a Bastion** page, configure the settings for your bastion host. Project details are populated from your virtual network values. Configure the **Instance details** values.
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
:::image type="content" source="./media/tutorial-create-host-portal/instance-values.png" alt-text="Screenshot of Bastion page instance values." lightbox="./media/tutorial-create-host-portal/instance-values.png":::
-1. Configure the **virtual networks** settings. Select the VNet from the dropdown. If you don't see your VNet in the dropdown list, make sure you selected the correct Region in the previous settings on this page.
+1. Configure the **virtual networks** settings. Select your VNet from the dropdown. If you don't see your VNet in the dropdown list, make sure you selected the correct Region in the previous settings on this page.
1. To configure the AzureBastionSubnet, select **Manage subnet configuration**.
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
1. At the top of the **Subnets** page, select **Create a Bastion** to return to the Bastion configuration page.
- :::image type="content" source="./media/tutorial-create-host-portal/create-a-bastion.png" alt-text="Screenshot of Create a Bastion."lightbox="./media/tutorial-create-host-portal/create-a-bastion.png":::
+ :::image type="content" source="./media/tutorial-create-host-portal/create-page.png" alt-text="Screenshot of Create a Bastion." lightbox="./media/tutorial-create-host-portal/create-page.png":::
1. The **Public IP address** section is where you configure the public IP address of the Bastion host resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating. Create a new IP address. You can leave the default naming suggestion.
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
## <a name="connect"></a>Connect to a VM
-You can use the [Connection steps](#steps) in the section below to connect to your VM. You can also use any of the following articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
+You can use any of the following detailed articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
[!INCLUDE [Links to Connect to VM articles](../../includes/bastion-vm-connect-article-list.md)]
+You can also use the basic [Connection steps](#steps) in the section below to connect to your VM.
+ ### <a name="steps"></a>Connection steps [!INCLUDE [Connect to a VM](../../includes/bastion-vm-connect.md)]
your resources using the following steps:
In this tutorial, you deployed Bastion to a virtual network and connected to a VM. You then removed the public IP address from the VM. Next, learn about and configure additional Bastion features. > [!div class="nextstepaction"]
-> [Bastion features and configuration settings](configuration-settings.md)
+> [Bastion features and configuration settings](configuration-settings.md)<br>
+> [Bastion - VM connections and features](vm-about.md)
batch Batch Pool No Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-no-public-ip-address.md
To restrict access to these nodes and reduce the discoverability of these nodes
1. Pools without public IP addresses must use Virtual Machine Configuration and not Cloud Services Configuration.
1. [Custom endpoint configuration](pool-endpoint-configuration.md) to Batch compute nodes doesn't work with pools without public IP addresses.
1. Because there are no public IP addresses, you can't [use your own specified public IP addresses](create-pool-public-ip.md) with this type of pool.
+1. [Basic VM size](../virtual-machines/sizes-previous-gen.md#basic-a) doesn't work with pools without public IP addresses.
## Create a pool without public IP addresses in the Azure portal
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
In this how-to guide, you'll learn how to upload and install all the required co
## Supported software
-ACSS supports the following SAP software version: **S/4HANA 1909 SPS 03**.
+ACSS supports the following SAP software versions: **S/4HANA 1909 SPS 03, SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00**.
+The following table shows the operating system (OS) software versions and their compatibility with each SAP software version:
-ACSS supports the following operating system (OS) software versions:
-
-| Publisher | Version | Generation SKU | Patch version name |
-| | - | -- | |
-| Red Hat | RHEL-SAP-HA (8.2 HA Pack) | 82sapha-gen2 | 8.2.2021091202 |
-| Red Hat | RHEL-SAP-HA (8.4 HA Pack) | 84sapha-gen2 | 8.4.2021091202 |
-| SUSE | sles-sap-15-sp3 | gen2 | 2022.01.26 |
-| SUSE | sles-sap-12-sp4 | gen2 | 2022.02.01
+| Publisher | Version | Generation SKU | Patch version name | Supported SAP Software Version |
+| | - | -- | | |
+| Red Hat | RHEL-SAP-HA (8.2 HA Pack) | 82sapha-gen2 | 8.2.2021091202 | S/4HANA 1909 SPS 03, SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00 |
+| Red Hat | RHEL-SAP-HA (8.4 HA Pack) | 84sapha-gen2 | 8.4.2021091202 | S/4HANA 1909 SPS 03, SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00 |
+| SUSE | sles-sap-15-sp3 | gen2 | 2022.01.26 | S/4HANA 1909 SPS 03, SAP S/4HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00 |
+| SUSE | sles-sap-12-sp4 | gen2 | 2022.02.01 | S/4HANA 1909 SPS 03 |
## Required components
The following components are necessary for the SAP installation:
- SAP software installation media (part of the `sapbits` container described later in this article) - All essential SAP packages (*SWPM*, *SAPCAR*, etc.)
- - SAP software (for example, *4 HANA 1909 SPS 03*)
+ - SAP software (for example, *S/4HANA 1909 SPS 03, S/4 HANA 2020 SPS 03, SAP S/4HANA 2021 ISS 00*)
- Supporting software packages for the installation process
  - `pip3` version `pip-21.3.1.tar.gz`
  - `wheel` version 0.37.1
  - `jq` version 1.6
  - `ansible` version 2.9.27
  - `netaddr` version 0.8.0
-- The SAP Bill of Materials (BOM), as generated by ACSS. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`) and dependent BOMs (`HANA_2_00_059_v0002ms.yaml`, `SUM20SP14_latest.yaml`, `SWPM20SP11_latest.yaml`). They provide the following information:
+- The SAP Bill of Materials (BOM), as generated by ACSS. These YAML files list all required SAP packages for the SAP software installation. There's a main BOM (`S41909SPS03_v0011ms.yaml`, `S42020SPS03_v0003ms.yaml`, `S4HANA_2021_ISS_v0001ms.yaml`) and there are dependent BOMs (`HANA_2_00_059_v0003ms.yaml`, `HANA_2_00_063_v0001ms.yaml` `SUM20SP14_latest.yaml`, `SWPM20SP12_latest.yaml`). They provide the following information:
- The full name of the SAP package (`name`) - The package name with its file extension as downloaded (`archive`) - The checksum of the package as specified by SAP (`checksum`)
After setting up your Azure Storage account, you can download the SAP installati
- For `<username>`, use your SAP username.
- For `<password>`, use your SAP password.
+ - For `<bom_base_name>`, use the SAP version you want to install, that is, **_S41909SPS03_v0011ms_**, **_S42020SPS03_v0003ms_**, or **_S4HANA_2021_ISS_v0001ms_**.
- For `<storageAccountAccessKey>`, use your storage account's access key. You found this value in the [previous section](#set-up-storage-account).
- For `<containerBasePath>`, use the path to your `sapbits` container. You found this value in the [previous section](#set-up-storage-account). The format is `https://<your-storage-account>.blob.core.windows.net/sapbits`.

```azurecli
ansible-playbook ./sap-automation/deploy/ansible/playbook_bom_downloader.yaml -e "bom_base_name=S41909SPS03_v0011ms" -e "deployer_kv_name=dummy_value" -e "s_user=<username>" -e "s_password=<password>" -e "sapbits_access_key=<storageAccountAccessKey>" -e "sapbits_location_base_path=<containerBasePath>"
```

Now, you can [install the SAP software](#install-software) using the installation wizard.

## Upload components manually
You also can [run scripts to automate this process](#upload-components-with-scri
1. Go to the **sapfiles** folder.
1. Create two subfolders named **archives** and **boms**.
1. In the **boms** folder, create four subfolders as follows.
- 1. **HANA_2_00_059_v0002ms**
++
+    - For S/4HANA 1909 SPS 03, create the following folders:
+ 1. **HANA_2_00_059_v0003ms**
1. **S41909SPS03_v0011ms** 1. **SWPM20SP12_latest**
- 1. **SUM20SP14_latest.yaml**
+ 1. **SUM20SP14_latest**
+
+
+    - For S/4 HANA 2020 SPS 03, create the following folders:
+ 1. **HANA_2_00_063_v0001ms**
+ 1. **S42020SPS03_v0003ms**
+ 1. **SWPM20SP12_latest**
+ 1. **SUM20SP14_latest**
+
+
+    - For SAP S/4HANA 2021 ISS 00, create the following folders:
+ 1. **HANA_2_00_063_v0001ms**
+ 1. **S4HANA_2021_ISS_v0001ms**
+ 1. **SWPM20SP12_latest**
+ 1. **SUM20SP14_latest**
+
1. Upload the following YAML files to the folders with the same name.
- 1. [S41909SPS03_v0011ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml)
- 1. [HANA_2_00_059_v0002ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/HANA_2_00_059_v0002ms/HANA_2_00_059_v0002ms.yaml)
- 1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
- 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
-1. Go to the **S41909SPS03_v0011ms** folder and create a subfolder named **templates**.
+
+ - For S/4HANA 1909 SPS 03,
+ 1. [S41909SPS03_v0011ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml)
+ 1. [HANA_2_00_059_v0003ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_059_v0003ms/HANA_2_00_059_v0003ms.yaml)
+ 1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
+ 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
+
+ - For S/4 HANA 2020 SPS 03,
+ 1. [S42020SPS03_v0003ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
+ 1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
+ 1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
+ 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
+
+ - For SAP S/4HANA 2021 ISS 00,
+ 1. [S4HANA_2021_ISS_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
+ 1. [HANA_2_00_063_v0001ms.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
+ 1. [SWPM20SP12_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
+ 1. [SUM20SP14_latest.yaml](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
+
+1. Depending on the SAP product version you're installing, go to the **S41909SPS03_v0011ms**, **S42020SPS03_v0003ms**, or **S4HANA_2021_ISS_v0001ms** folder and create a subfolder named **templates**.
1. Download the following files. Then, upload all the files to the **templates** folder.
- 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/HANA_2_00_055_v1_install.rsp.j2)
- 1. [S41909SPS03_v0011ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-app-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-dbload-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-ers-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-generic-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-pas-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scs-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scsha-inifile-param.j2)
- 1. [S41909SPS03_v0011ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-web-inifile-param.j2)
+ - For S/4HANA 1909 SPS 03,
+ 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/HANA_2_00_055_v1_install.rsp.j2)
+ 1. [S41909SPS03_v0011ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-app-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-dbload-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-ers-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-generic-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-pas-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scs-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-scsha-inifile-param.j2)
+ 1. [S41909SPS03_v0011ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/templates/S41909SPS03_v0011ms-web-inifile-param.j2)
+
+ - For S/4 HANA 2020 SPS 03,
+ 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_055_v1_install.rsp.j2)
+ 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/HANA_2_00_install.rsp.j2)
+ 1. [S42020SPS03_v0003ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-app-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-dbload-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-ers-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-generic-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-pas-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scs-inifile-param.j2)
+ 1. [S42020SPS03_v0003ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/templates/S42020SPS03_v0003ms-scsha-inifile-param.j2)
+
+ - For SAP S/4HANA 2021 ISS 00,
+ 1. [HANA_2_00_055_v1_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_055_v1_install.rsp.j2)
+ 1. [HANA_2_00_install.rsp.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/HANA_2_00_install.rsp.j2)
+    1. [NW_ABAP_ASCS_S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_ASCS_S4HANA2021.CORE.HDB.ABAP_Distributed.params)
+ 1. [NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_CI-S4HANA2021.CORE.HDB.ABAP_Distributed.params)
+ 1. [NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_ABAP_DB-S4HANA2021.CORE.HDB.ABAP_Distributed.params)
+ 1. [NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_DI-S4HANA2021.CORE.HDB.PD_Distributed.params)
+ 1. [NW_Users_Create-GENERIC.HDB.PD_Distributed.params](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/NW_Users_Create-GENERIC.HDB.PD_Distributed.params)
+ 1. [S4HANA_2021_ISS_v0001ms-app-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-app-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-dbload-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-ers-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-generic-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-pas-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scs-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-scsha-inifile-param.j2)
+ 1. [S4HANA_2021_ISS_v0001ms-web-inifile-param.j2](https://raw.githubusercontent.com/Azure/sap-automation/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/templates/S4HANA_2021_ISS_v0001ms-web-inifile-param.j2)
+
1. Go back to the **sapfiles** folder, then go to the **archives** subfolder.
-1. Download all packages that aren't labeled as `download: false` in the [S/4HANA 1909 BOM](https://github.com/Azure/sap-automation/blob/BPaaS-preview/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml). You can use the URL given in the BOM to download each package. Make sure to download the exact package versions listed in each BOM. Repeat this step for the main and dependent BOM files.
- 1. [HANA_2_00_059_v0002ms.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/HANA_2_00_059_v0002ms/HANA_2_00_059_v0002ms.yaml)
- 1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
- 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/main/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
+1. Download all packages that aren't labeled as `download: false` in the BOM files linked below. You can use the URL mentioned in the BOM to download each package. Make sure to download the exact package versions listed in each BOM. Repeat this step for the main and dependent BOM files.
+ - For S/4HANA 1909 SPS 03,
+ 1. [S41909SPS03_v0011ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S41909SPS03_v0011ms/S41909SPS03_v0011ms.yaml)
+ 1. [HANA_2_00_059_v0003ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_059_v0003ms/HANA_2_00_059_v0003ms.yaml)
+ 1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
+ 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
+
+ - For S/4 HANA 2020 SPS 03,
+ 1. [S42020SPS03_v0003ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S42020SPS03_v0003ms/S42020SPS03_v0003ms.yaml)
+ 1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
+ 1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
+ 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
+
+ - For SAP S/4HANA 2021 ISS 00,
+ 1. [S4HANA_2021_ISS_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/S4HANA_2021_ISS_v0001ms/S4HANA_2021_ISS_v0001ms.yaml)
+ 1. [HANA_2_00_063_v0001ms.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/HANA_2_00_063_v0001ms/HANA_2_00_063_v0001ms.yaml)
+ 1. [SWPM20SP12_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SWPM20SP12_latest/SWPM20SP12_latest.yaml)
+ 1. [SUM20SP14_latest.yaml](https://github.com/Azure/sap-automation/blob/experimental/deploy/ansible/BOM-catalog/SUM20SP14_latest/SUM20SP14_latest.yaml)
+
1. Upload all the packages that you downloaded to the **archives** folder. Don't rename the files. (A hedged command-line upload sketch follows this list.)
1. Optionally, you can install other packages that aren't required.
   1. Download the package files.
   1. Upload the files to the **archives** folder.
- 1. Open the `S41909SPS03_v0011ms` YAML file for the BOM.
+ 1. Open the `S41909SPS03_v0011ms` or `S42020SPS03_v0003ms` or `S4HANA_2021_ISS_v0001ms` YAML file for the BOM.
1. Edit the information for each optional package to `download:true`.
- 1. Save the YAML file.
+    1. Save the YAML file and upload it again. There should be only one YAML file in the subfolder (`S41909SPS03_v0011ms`, `S42020SPS03_v0003ms`, or `S4HANA_2021_ISS_v0001ms`) of the **boms** folder.
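For the large SAP archives, a command-line upload can be more reliable than the portal. A minimal sketch using the Azure CLI, with placeholder values; the parameter names are assumptions to verify against your installed CLI version:

```azurecli
# Upload every downloaded package to the archives folder of the sapbits container (placeholder values)
az storage blob upload-batch \
  --account-name <your-storage-account> \
  --account-key <storageAccountAccessKey> \
  --destination sapbits \
  --destination-path sapfiles/archives \
  --source ./downloaded-sap-packages
```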
Now, you can [install the SAP software](#install-software) using the installation wizard.
To install the SAP software on Azure, use the ACSS installation wizard.
1. For **Have you uploaded the software to an Azure storage account?**, select **Yes**.
- 1. For **Software version**, use the default **SAP S/4HANA 1909 SPS03**.
+ 1. For **Software version**, select **SAP S/4HANA 1909 SPS03**, **SAP S/4HANA 2020 SPS 03**, or **SAP S/4HANA 2021 ISS 00**. Only the versions that are supported by the OS version used to deploy the infrastructure earlier are available for selection.
1. For **BOM directory location**, select **Browse** and find the path to your BOM file. For example, `/sapfiles/boms/S41909SPS03_v0010ms.yaml`.
If you encounter this problem, follow these steps:
- `permissions` to `0755` - `url` to the new SAP download URL
-1. Reupload the BOM file(s) in the `boms` folder of the storage account.
+1. Reupload the BOM file(s) in the subfolder (`S41909SPS03_v0011ms`, `S42020SPS03_v0003ms`, or `S4HANA_2021_ISS_v0001ms`) of the **boms** folder.
## Next steps
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Creating a great custom neural voice requires careful quality control in each st
### Persona design
-First, [design a persona](/record-custom-voice-samples.md#choose-your-voice-talent) of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.
+First, [design a persona](record-custom-voice-samples.md#choose-your-voice-talent) of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning.
### Script selection
-Carefully [select the recording script](/record-custom-voice-samples.md#script-selection-criteria) to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
+Carefully [select the recording script](record-custom-voice-samples.md#script-selection-criteria) to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
### Preparing training data
To learn how to use Custom Neural Voice responsibly, check the following article
## Next steps > [!div class="nextstepaction"]
-> [Create a Project](how-to-custom-voice.md)
+> [Create a Project](how-to-custom-voice.md)
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
When you're ready to create a custom Text-to-Speech voice for your application,
## Voice talent verbal statement
-Before you can train your own Text-to-Speech voice model, you'll need [audio recordings](/record-custom-voice-samples.md) and the [associated text transcriptions](/how-to-custom-voice-prepare-data.md#types-of-training-data). On this page, we'll review data types, how they're used, and how to manage each.
+Before you can train your own Text-to-Speech voice model, you'll need [audio recordings](record-custom-voice-samples.md) and the [associated text transcriptions](how-to-custom-voice-prepare-data.md#types-of-training-data). On this page, we'll review data types, how they're used, and how to manage each.
> [!IMPORTANT] > To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence. You can find the statement in multiple languages [here](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model. Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
Files should be grouped by type into a dataset and uploaded as a zip file. Each
## Individual utterances + matching transcript
-You can prepare recordings of individual utterances and the matching transcript in two ways. Either [write a script and have it read by a voice talent](/speech-service/record-custom-voice-samples.md) or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
+You can prepare recordings of individual utterances and the matching transcript in two ways. Either [write a script and have it read by a voice talent](record-custom-voice-samples.md) or use publicly available audio and transcribe it to text. If you do the latter, edit disfluencies from the audio files, such as "um" and other filler sounds, stutters, mumbled words, or mispronunciations.
To produce a good voice model, create the recordings in a quiet room with a high-quality microphone. Consistent volume, speaking rate, speaking pitch, and expressive mannerisms of speech are essential.
cognitive-services Cognitive Services For Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/cognitive-services-for-big-data.md
Title: "Cognitive Services for Big Data"
-description: Learn how to leverage Azure Cognitive Services on large datasets using Python, Java, and Scala. With Cognitive Services for Big Data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations.
+ Title: "Cognitive Services for big data"
+description: Learn how to leverage Azure Cognitive Services on large datasets using Python, Java, and Scala. With Cognitive Services for big data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations.
Last updated 10/28/2021
-# Azure Cognitive Services for Big Data
+# Azure Cognitive Services for big data
-![Azure Cognitive Services for Big Data](media/cognitive-services-big-data-overview.svg)
+![Azure Cognitive Services for big data](media/cognitive-services-big-data-overview.svg)
-The Azure Cognitive Services for Big Data lets users channel terabytes of data through Cognitive Services using [Apache Spark&trade;](/dotnet/spark/what-is-spark). With the Cognitive Services for Big Data, it's easy to create large-scale intelligent applications with any datastore.
+Azure Cognitive Services for big data lets users channel terabytes of data through Cognitive Services using [Apache Spark&trade;](/dotnet/spark/what-is-spark) and open source libraries for distributed machine learning workloads. With Cognitive Services for big data, it's easy to create large-scale intelligent applications with any datastore.
-With Cognitive Services for Big Data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications.
+Using the resources and libraries described in this article, you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications.
## Features and benefits
-Cognitive Services for Big Data can use services from any region in the world, as well as [containerized Cognitive Services](../cognitive-services-container-support.md). Containers support low or no connectivity deployments with ultra-low latency responses. Containerized Cognitive Services can be run locally, directly on the worker nodes of your Spark cluster, or on an external orchestrator like Kubernetes.
+Cognitive Services for big data can use resources from any [supported region](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services), as well as [containerized Cognitive Services](../cognitive-services-container-support.md). Containers support low or no connectivity deployments with ultra-low latency responses. Containerized Cognitive Services can be run locally, directly on the worker nodes of your Spark cluster, or on an external orchestrator like Kubernetes.
## Supported services
-[Cognitive Services](../index.yml), accessed through APIs and SDKs, help developers build intelligent applications without having AI or data science skills. With Cognitive Services you can make your applications see, hear, speak, understand, and reason. To use the Cognitive Services, your application must send data to the service over the network. Once received, the service sends an intelligent response in return. The following services are available for big data workloads:
+[Cognitive Services](../index.yml), accessed through APIs and SDKs, help developers build intelligent applications without having AI or data science skills. With Cognitive Services you can make your applications see, hear, speak, understand, and reason. To use Cognitive Services, your application must send data to the service over the network. Once received, the service sends an intelligent response in return. The following Cognitive Services resources are available for big data workloads:
### Vision
Cognitive Services for Big Data can use services from any region in the world, a
|:--|:| |[Bing Image Search](/azure/cognitive-services/bing-image-search "Bing Image Search")|The Bing Image Search service returns a display of images determined to be relevant to the user's query.|
-## Supported programming languages for Cognitive Services for Big Data
+## Supported programming languages for Cognitive Services for big data
-The Cognitive Services for Big Data are built on Apache Spark. Apache Spark is a distributed computing library that supports Java, Scala, Python, R, and many other languages. These languages are currently supported.
+Cognitive Services for big data are built on Apache Spark. Apache Spark is a distributed computing library that supports Java, Scala, Python, R, and many other languages. See [SynapseML](https://microsoft.github.io/SynapseML) for documentation, samples, and blog posts.
+
+The following languages are currently supported.
### Python
-We provide a PySpark API in the `mmlspark.cognitive` namespace of [Microsoft ML for Apache Spark](https://aka.ms/spark). For more information, see the [Python Developer API](https://mmlspark.blob.core.windows.net/docs/1.0.0-rc1/pyspark/mmlspark.cognitive.html). For usage examples, see the [Python Samples](samples-python.md).
+We provide a PySpark API for current and legacy libraries:
+
+* [`synapseml.cognitive`](https://mmlspark.blob.core.windows.net/docs/0.10.0/pyspark/synapse.ml.cognitive.html)
+
+* [`mmlspark.cognitive`](https://mmlspark.blob.core.windows.net/docs/0.18.1/pyspark/modules.html)
+
+For more information, see the [Python Developer API](https://mmlspark.blob.core.windows.net/docs/1.0.0-rc1/pyspark/mmlspark.cognitive.html). For usage examples, see the [Python Samples](samples-python.md).
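To make the namespace difference concrete, here's a short, hedged sketch of constructing the same transformer from either package. It assumes `TextSentiment` is exposed under `synapse.ml.cognitive` in SynapseML 0.10.0, mirroring the legacy `mmlspark.cognitive` class used in the sample later in this article:

```python
# Current library (SynapseML):
from synapse.ml.cognitive import TextSentiment

# Legacy library (MMLSpark) equivalent:
# from mmlspark.cognitive import TextSentiment

sentiment = (TextSentiment()
    .setSubscriptionKey("<your-cognitive-services-key>")
    .setLocation("<your-resource-region>")
    .setTextCol("text")
    .setLanguageCol("language")
    .setOutputCol("sentiment")
    .setErrorCol("error"))
```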
### Scala and Java
-We provide a Scala and Java-based Spark API in the `com.microsoft.ml.spark.cognitive` namespace of [Microsoft ML for Apache Spark](https://aka.ms/spark). For more information, see the [Scala Developer API](https://mmlspark.blob.core.windows.net/docs/1.0.0-rc1/scal).
+We provide a Scala and Java-based Spark API for current and legacy libraries:
+
+* [`com.microsoft.synapseml.cognitive`](https://mmlspark.blob.core.windows.net/docs/0.10.0/scala/com/microsoft/azure/synapse/ml/cognitive/https://docsupdatetracker.net/index.html)
+
+* [`com.microsoft.ml.spark.cognitive`](https://mmlspark.blob.core.windows.net/docs/0.18.1/scala/https://docsupdatetracker.net/index.html#com.microsoft.ml.spark.cognitive.package)
+
+For more information, see the [Scala Developer API](https://mmlspark.blob.core.windows.net/docs/1.0.0-rc1/scal).
## Supported platforms and connectors
-The Cognitive Services for Big Data requires Apache Spark. There are several Apache Spark platforms that support the Cognitive Services for Big Data.
+Big data scenarios require Apache Spark. There are several Apache Spark platforms that support Cognitive Services for big data.
### Azure Databricks
The basis of Spark is the DataFrame: a tabular collection of data distributed ac
- Do SQL-style computations such as join and filter tables. - Apply functions to large datasets using MapReduce style parallelism. - Apply Distributed Machine Learning using Microsoft Machine Learning for Apache Spark.
- - Use the Cognitive Services for Big Data to enrich your data with ready-to-use intelligent services.
+ - Use Cognitive Services for big data to enrich your data with ready-to-use intelligent services.
### Microsoft Machine Learning for Apache Spark (MMLSpark)
-[Microsoft Machine Learning for Apache Spark](https://mmlspark.blob.core.windows.net/website/https://docsupdatetracker.net/index.html#install) (MMLSpark) is an open-source, distributed machine learning library (ML) built on Apache Spark. The Cognitive Services for Big Data is included in this package. Additionally, MMLSpark contains several other ML tools for Apache Spark, such as LightGBM, Vowpal Wabbit, OpenCV, LIME, and more. With MMLSpark, you can build powerful predictive and analytical models from any Spark datasource.
+[Microsoft Machine Learning for Apache Spark](https://mmlspark.blob.core.windows.net/website/https://docsupdatetracker.net/index.html#install) (MMLSpark) is an open-source, distributed machine learning library (ML) built on Apache Spark. Cognitive Services for big data is included in this package. Additionally, MMLSpark contains several other ML tools for Apache Spark, such as LightGBM, Vowpal Wabbit, OpenCV, LIME, and more. With MMLSpark, you can build powerful predictive and analytical models from any Spark datasource.
### HTTP on Spark
-Cognitive Services for Big Data is an example of how we can integrate intelligent web services with big data. Web services power many applications across the globe and most services communicate through the Hypertext Transfer Protocol (HTTP). To work with *arbitrary* web services at large scales, we provide HTTP on Spark. With HTTP on Spark, you can pass terabytes of data through any web service. Under the hood, we use this technology to power Cognitive Services for Big Data.
+Cognitive Services for big data is an example of how we can integrate intelligent web services with big data. Web services power many applications across the globe and most services communicate through the Hypertext Transfer Protocol (HTTP). To work with *arbitrary* web services at large scales, we provide HTTP on Spark. With HTTP on Spark, you can pass terabytes of data through any web service. Under the hood, we use this technology to power Cognitive Services for big data.
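As a rough sketch of the HTTP-on-Spark idea, the snippet below is modeled on the MMLSpark `SimpleHTTPTransformer`; the module path, parser configuration, and URL are assumptions and may differ by library version:

```python
from mmlspark.io.http import SimpleHTTPTransformer, JSONOutputParser
from pyspark.sql.types import StructType, StringType

# Example DataFrame with a "data" column holding request payloads.
df = spark.createDataFrame([("payload-1",), ("payload-2",)], ["data"])

client = (SimpleHTTPTransformer()
    .setInputCol("data")
    .setOutputParser(JSONOutputParser()
        .setDataType(StructType().add("replies", StringType())))
    .setUrl("https://my-service.example.com/api")  # hypothetical endpoint
    .setOutputCol("results"))

responses = client.transform(df)
```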
## Developer samples
Cognitive Services for Big Data is an example of how we can integrate intelligen
- [The Azure Cognitive Services on Spark: Clusters with Embedded Intelligent Services](https://databricks.com/session/the-azure-cognitive-services-on-spark-clusters-with-embedded-intelligent-services) - [Spark Summit Keynote: Scalable AI for Good](https://databricks.com/session_eu19/scalable-ai-for-good)-- [The Cognitive Services for Big Data in Cosmos DB](https://medius.studios.ms/Embed/Video-nc/B19-BRK3004?latestplayer=true&l=2571.208093)
+- [Cognitive Services for big data in Cosmos DB](https://medius.studios.ms/Embed/Video-nc/B19-BRK3004?latestplayer=true&l=2571.208093)
- [Lightning Talk on Large Scale Intelligent Microservices](https://www.youtube.com/watch?v=BtuhmdIy9Fk&t=6s) ## Next steps -- [Getting Started with the Cognitive Services for Big Data](getting-started.md)
+- [Getting Started with Cognitive Services for big data](getting-started.md)
- [Simple Python Examples](samples-python.md) - [Simple Scala Examples](samples-scala.md)
cognitive-services Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/getting-started.md
Title: "Get started with Cognitive Services for Big Data"
-description: Set up your MMLSpark pipeline with Cognitive Services in Azure Databricks and run a sample.
+ Title: "Get started with Cognitive Services for big data"
+description: Set up your SynapseML or MMLSpark pipeline with Cognitive Services in Azure Databricks and run a sample.
Previously updated : 10/28/2021 Last updated : 08/16/2022 ms.devlang: python
Setting up your environment is the first step to building a pipeline for your data. After your environment is ready, running a sample is quick and easy.
-In this article, we'll perform these steps to get you started:
+In this article, you'll perform these steps to get started:
-1. [Create a Cognitive Services resource](#create-a-cognitive-services-resource)
-1. [Create an Apache Spark Cluster](#create-an-apache-spark-cluster)
-1. [Try a sample](#try-a-sample)
+> [!div class="checklist"]
+> * [Create a Cognitive Services resource](#create-a-cognitive-services-resource)
+> * [Create an Apache Spark cluster](#create-an-apache-spark-cluster)
+> * [Try a sample](#try-a-sample)
## Create a Cognitive Services resource
-To use the Big Data Cognitive Services, you must first create a Cognitive Service for your workflow. There are two main types of Cognitive
+To work with big data in Cognitive Services, first create a Cognitive Services resource for your workflow. There are two main types of Cognitive
### Cloud services
Follow [this guide](../cognitive-services-container-support.md?tabs=luis) to cre
## Create an Apache Spark cluster
-[Apache Spark&trade;](http://spark.apache.org/) is a distributed computing framework designed for big-data data processing. Users can work with Apache Spark in Azure with services like Azure Databricks, Azure Synapse Analytics, HDInsight, and Azure Kubernetes Services. To use the Big Data Cognitive Services, you must first create a cluster. If you already have a Spark cluster, feel free to try an example.
+[Apache Spark&trade;](http://spark.apache.org/) is a distributed computing framework designed for big data processing. Users can work with Apache Spark in Azure with services like Azure Databricks, Azure Synapse Analytics, HDInsight, and Azure Kubernetes Service. To use the big data Cognitive Services, you must first create a cluster. If you already have a Spark cluster, feel free to try an example.
### Azure Databricks
-Azure Databricks is an Apache Spark-based analytics platform with a one-click setup, streamlined workflows, and an interactive workspace. It's often used to collaborate between data scientists, engineers, and business analysts. To use the Big Data Cognitive Services on Azure Databricks, follow these steps:
+Azure Databricks is an Apache Spark-based analytics platform with a one-click setup, streamlined workflows, and an interactive workspace. It's often used to collaborate between data scientists, engineers, and business analysts. To use the big data Cognitive Services on Azure Databricks, follow these steps:
1. [Create an Azure Databricks workspace](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal#create-an-azure-databricks-workspace)+ 1. [Create a Spark cluster in Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal#create-a-spark-cluster-in-databricks)
-1. Install the Big Data Cognitive Services
+
+1. Install the SynapseML open-source library (or MMLSpark library if you're supporting a legacy application):
+ * Create a new library in your Databricks workspace <img src="media/create-library.png" alt="Create library" width="50%"/>
- * Input the following maven coordinates
+
+ * For SynapseML: input the following maven coordinates
+ Coordinates: `com.microsoft.azure:synapseml_2.12:0.10.0`
+ Repository: default
+
+ * For MMLSpark (legacy): input the following maven coordinates
Coordinates: `com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3` Repository: `https://mmlspark.azureedge.net/maven` <img src="media/library-coordinates.png" alt="Library Coordinates" width="50%"/>+ * Install the library onto a cluster <img src="media/install-library.png" alt="Install Library on Cluster" width="50%"/>
Azure Databricks is an Apache Spark-based analytics platform with a one-click se
Optionally, you can use Synapse Analytics to create a spark cluster. Azure Synapse Analytics brings together enterprise data warehousing and big data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources at scale. To get started using Azure Synapse Analytics, follow these steps: 1. [Create a Synapse Workspace (preview)](../../synapse-analytics/quickstart-create-workspace.md).+ 1. [Create a new serverless Apache Spark pool (preview) using the Azure portal](../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md).
-In Azure Synapse Analytics, Big Data for Cognitive Services is installed by default.
+In Azure Synapse Analytics, big data for Cognitive Services is installed by default.
### Azure Kubernetes Service
If you're using containerized Cognitive Services, one popular option for deployi
To get started on Azure Kubernetes Service, follow these steps: 1. [Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md)+ 1. [Install the Apache Spark 2.4.0 helm chart](https://hub.helm.sh/charts/microsoft/spark)+ 1. [Install a cognitive service container using Helm](../computer-vision/deploy-computer-vision-on-premises.md) ## Try a sample
-After you set up your Spark cluster and environment, you can run a short sample. This section demonstrates how to use the Big Data for Cognitive Services in Azure Databricks.
+After you set up your Spark cluster and environment, you can run a short sample. This sample assumes Azure Databricks and the `mmlspark.cognitive` package. For an example using `synapseml.cognitive`, see [Add search to AI-enriched data from Apache Spark using SynapseML](/search/search-synapseml-cognitive-services).
First, you can create a notebook in Azure Databricks. For other Spark cluster providers, use their notebooks or Spark Submit.
First, you can create a notebook in Azure Databricks. For other Spark cluster pr
<img src="media/new-notebook.png" alt="Create a new notebook" width="50%"/>
-1. In the **Create Notebook** dialog box, enter a name, select **Python** as the language, and select the Spark cluster that you created earlier.
+1. In the **Create Notebook** dialog, enter a name, select **Python** as the language, and select the Spark cluster that you created earlier.
<img src="media/databricks-notebook-details.jpg" alt="New notebook details" width="50%"/>
First, you can create a notebook in Azure Databricks. For other Spark cluster pr
1. Paste this code snippet into your new notebook.
-```python
-from mmlspark.cognitive import *
-from pyspark.sql.functions import col
-
-# Add your subscription key from the Language service (or a general Cognitive Service key)
-service_key = "ADD-SUBSCRIPTION-KEY-HERE"
-
-df = spark.createDataFrame([
- ("I am so happy today, its sunny!", "en-US"),
- ("I am frustrated by this rush hour traffic", "en-US"),
- ("The cognitive services on spark aint bad", "en-US"),
-], ["text", "language"])
-
-sentiment = (TextSentiment()
- .setTextCol("text")
- .setLocation("eastus")
- .setSubscriptionKey(service_key)
- .setOutputCol("sentiment")
- .setErrorCol("error")
- .setLanguageCol("language"))
-
-results = sentiment.transform(df)
-
-# Show the results in a table
-display(results.select("text", col("sentiment")[0].getItem("score").alias("sentiment")))
-
-```
+ ```python
+ from mmlspark.cognitive import *
+ from pyspark.sql.functions import col
+
+ # Add your region and subscription key from the Language service (or a general Cognitive Service key)
+ # If using a multi-region Cognitive Services resource, delete the placeholder text: service_region = ""
+ service_key = "ADD-SUBSCRIPTION-KEY-HERE"
+ service_region = "ADD-SERVICE-REGION-HERE"
+
+ df = spark.createDataFrame([
+ ("I am so happy today, its sunny!", "en-US"),
+ ("I am frustrated by this rush hour traffic", "en-US"),
+ ("The cognitive services on spark aint bad", "en-US"),
+ ], ["text", "language"])
+
+ sentiment = (TextSentiment()
+ .setTextCol("text")
+ .setLocation(service_region)
+ .setSubscriptionKey(service_key)
+ .setOutputCol("sentiment")
+ .setErrorCol("error")
+ .setLanguageCol("language"))
+
+ results = sentiment.transform(df)
+
+ # Show the results in a table
+ display(results.select("text", col("sentiment")[0].getItem("score").alias("sentiment")))
+ ```
+
+1. Get your region and subscription key from the **Keys and Endpoint** menu from your Language resource in the Azure portal.
+
+1. Replace the region and subscription key placeholders in your Databricks notebook code with values that are valid for your resource.
-1. Get your subscription key from the **Keys and Endpoint** menu from your Language resource in the Azure portal.
-1. Replace the subscription key placeholder in your Databricks notebook code with your subscription key.
1. Select the play, or triangle, symbol in the upper right of your notebook cell to run the sample. Optionally, select **Run All** at the top of your notebook to run all cells. The answers will display below the cell in a table. ### Expected results
cognitive-services Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/recipes/anomaly-detection.md
Title: "Recipe: Predictive maintenance with the Cognitive Services for Big Data"
+ Title: "Recipe: Predictive maintenance with the Cognitive Services for big data"
-description: This quickstart shows how to perform distributed anomaly detection with the Cognitive Services for Big Data
+description: This quickstart shows how to perform distributed anomaly detection with the Cognitive Services for big data
ms.devlang: python
-# Recipe: Predictive maintenance with the Cognitive Services for Big Data
+# Recipe: Predictive maintenance with the Cognitive Services for big data
-This recipe shows how you can use Azure Synapse Analytics and Cognitive Services on Apache Spark for predictive maintenance of IoT devices. We'll follow along with the [CosmosDB and Synapse Link](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples) sample. To keep things simple, in this recipe we'll read the data straight from a CSV file rather than getting streamed data through CosmosDB and Synapse Link. We strongly encourage you to look over the Synapse Link sample.
+This recipe shows how you can use Azure Synapse Analytics and Cognitive Services on Apache Spark for predictive maintenance of IoT devices. We'll follow along with the [Cosmos DB and Synapse Link](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples) sample. To keep things simple, in this recipe we'll read the data straight from a CSV file rather than getting streamed data through Cosmos DB and Synapse Link. We strongly encourage you to look over the Synapse Link sample.
## Hypothetical scenario
If successful, your output will look like this:
## Next steps
-Learn how to do predictive maintenance at scale with Azure Cognitive Services, Azure Synapse Analytics, and Azure CosmosDB. For more information, see the full sample on [GitHub](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples).
+Learn how to do predictive maintenance at scale with Azure Cognitive Services, Azure Synapse Analytics, and Azure Cosmos DB. For more information, see the full sample on [GitHub](https://github.com/Azure-Samples/cosmosdb-synapse-link-samples).
cognitive-services Art Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/recipes/art-explorer.md
Title: "Recipe: Intelligent Art Exploration with the Cognitive Services for Big Data"
+ Title: "Recipe: Intelligent Art Exploration with the Cognitive Services for big data"
description: This recipe shows how to create a searchable art database using Azure Search and MMLSpark.
ms.devlang: python
-# Recipe: Intelligent Art Exploration with the Cognitive Services for Big Data
+# Recipe: Intelligent Art Exploration with the Cognitive Services for big data
-In this example, we'll use the Cognitive Services for Big Data to add intelligent annotations to the Open Access collection from the Metropolitan Museum of Art (MET). This will enable us to create an intelligent search engine using Azure Search even without manual annotations.
+In this example, we'll use the Cognitive Services for big data to add intelligent annotations to the Open Access collection from the Metropolitan Museum of Art (MET). This will enable us to create an intelligent search engine using Azure Search even without manual annotations.
## Prerequisites
requests.post(url, json={"search": "Glass"}, headers = {"api-key": AZURE_SEARCH_
## Next steps
-Learn how to use [Cognitive Services for Big Data for Anomaly Detection](anomaly-detection.md).
+Learn how to use [Cognitive Services for big data for Anomaly Detection](anomaly-detection.md).
cognitive-services Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/samples-python.md
Title: "Cognitive Services for Big Data - Python Samples"
+ Title: "Cognitive Services for big data - Python Samples"
description: Try Cognitive Services samples in Python for Azure Databricks to run your MMLSpark pipeline for big data. Previously updated : 10/28/2021 Last updated : 08/16/2022 ms.devlang: python
-# Python Samples for Cognitive Services for Big Data
+# Python Samples for Cognitive Services for big data
The following snippets are ready to run and will help get you started with using Cognitive Services on Spark with Python.
display(client.transform(df).select("country", udf(get_response_body)(col("respo
| country | response | |:-|:-|
-| br | [{"page":1,"pages":1,"per_page":"50","total":1},[{"id":"BRA","iso2Code":"BR","name":"Brazil","region":{"id":"LCN","iso2code":"ZJ","value":"Latin America & Caribbean "},"adminregion":{"id":"LAC","iso2code":"XJ","value":"Latin America & Caribbean (excluding high income)"},"incomeLevel":{"id":"UMC","iso2code":"XT","value":"Upper middle income"},"lendingType":{"id":"IBD","iso2code":"XF","value":"IBRD"},"capitalCity":"Brasilia","longitude":"-47.9292","latitude":"-15.7801"}]] |
-| usa | [{"page":1,"pages":1,"per_page":"50","total":1},[{"id":"USA","iso2Code":"US","name":"United States","region":{"id":"NAC","iso2code":"XU","value":"North America"},"adminregion":{"id":"","iso2code":"","value":""},"incomeLevel":{"id":"HIC","iso2code":"XD","value":"High income"},"lendingType":{"id":"LNX","iso2code":"XX","value":"Not classified"},"capitalCity":"Washington D.C.","longitude":"-77.032","latitude":"38.8895"}]] |
+| br | `[{"page":1,"pages":1,"per_page":"50","total":1},[{"id":"BRA","iso2Code":"BR","name":"Brazil","region":{"id":"LCN","iso2code":"ZJ","value":"Latin America & Caribbean "},"adminregion":{"id":"LAC","iso2code":"XJ","value":"Latin America & Caribbean (excluding high income)"},"incomeLevel":{"id":"UMC","iso2code":"XT","value":"Upper middle income"},"lendingType":{"id":"IBD","iso2code":"XF","value":"IBRD"},"capitalCity":"Brasilia","longitude":"-47.9292","latitude":"-15.7801"}]]` |
+| usa | `[{"page":1,"pages":1,"per_page":"50","total":1},[{"id":"USA","iso2Code":"US","name":"United States","region":{"id":"NAC","iso2code":"XU","value":"North America"},"adminregion":{"id":"","iso2code":"","value":""},"incomeLevel":{"id":"HIC","iso2code":"XD","value":"High income"},"lendingType":{"id":"LNX","iso2code":"XX","value":"Not classified"},"capitalCity":"Washington D.C.","longitude":"-77.032","latitude":"38.8895"}]]` |
## See also
cognitive-services Samples Scala https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/samples-scala.md
Title: "Cognitive Services for Big Data Scala Samples"
+ Title: "Cognitive Services for big data Scala Samples"
description: Use Cognitive Services for Azure Databricks to run your MMLSpark pipeline for big data.
cognitive-services Cognitive Services Development Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-development-options.md
Cognitive Services are organized into four categories: Decision, Language, Speec
* Automation and integration tools like Logic Apps and Power Automate. * Deployment options such as Azure Functions and the App Service. * Cognitive Services Docker containers for secure access.
-* Tools like Apache Spark, Azure Databricks, Azure Synapse Analytics, and Azure Kubernetes Service for Big Data scenarios.
+* Tools like Apache Spark, Azure Databricks, Azure Synapse Analytics, and Azure Kubernetes Service for big data scenarios.
Before we jump in, it's important to know that the Cognitive Services are primarily used for two distinct tasks. Based on the task you want to perform, you have different development and deployment options to choose from.
Cognitive Services client libraries and REST APIs provide you direct access to y
If you want to learn more about available client libraries and REST APIs, use our [Cognitive Services overview](index.yml) to pick a service and get started with one of our quickstarts for vision, decision, language, and speech.
-### Cognitive Services for Big Data
+### Cognitive Services for big data
-With Cognitive Services for Big Data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications. Cognitive Services for Big Data support the following platforms and connectors: Azure Databricks, Azure Synapse, Azure Kubernetes Service, and Data Connectors.
+With Cognitive Services for big data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications. Cognitive Services for big data support the following platforms and connectors: Azure Databricks, Azure Synapse, Azure Kubernetes Service, and Data Connectors.
* **Target user(s)**: Data scientists and data engineers
-* **Benefits**: The Azure Cognitive Services for Big Data let users channel terabytes of data through Cognitive Services using Apache Spark&trade;. It's easy to create large-scale intelligent applications with any datastore.
+* **Benefits**: The Azure Cognitive Services for big data let users channel terabytes of data through Cognitive Services using Apache Spark&trade;. It's easy to create large-scale intelligent applications with any datastore.
* **UI**: N/A - Code only * **Subscription(s)**: Azure account + Cognitive Services resources
-If you want to learn more about Big Data for Cognitive Services, a good place to start is with the [overview](./big-dat) samples.
+If you want to learn more about big data for Cognitive Services, a good place to start is with the [overview](./big-dat) samples.
### Azure Functions and Azure Service Web Jobs
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
Submit a registration form for each Limited Access service you would like to use
### How long will the registration process take?
-Review may take 5-10 business days. You'll receive an email as soon as your registration form is reviewed.
+Review may take 5-10 business days. You will receive an email as soon as your application is reviewed.
### Who is eligible to use Limited Access services?
-Limited Access services are available only to customers managed by Microsoft. Additionally, Limited Access services are only available for certain use cases, and customers must select their intended use case in their registration.
+Limited Access services are available only to customers managed by Microsoft. Additionally, Limited Access services are only available for certain use cases, and customers must select their intended use case in their registration form.
-Use an email address affiliated with your organization in your registration. Registration submitted with personal email addresses will be denied.
+Please use an email address affiliated with your organization in your registration form. Registration forms submitted with personal email addresses will be denied.
-If you aren't a managed customer, we invite you to submit a registration using the same forms and we'll reach out to you about any opportunities to join an eligibility program.
+If you're not a managed customer, we invite you to submit an application using the same forms and we will reach out to you about any opportunities to join an eligibility program.
-### What is a managed customer? What if I don’t know whether I’m a managed customer?
+### What is a managed customer? What if I don't know whether I'm a managed customer?
-Managed customers work with Microsoft account teams. We invite you to submit a registration form for the features you’d like to use, and we’ll verify your eligibility for access. We are not able to accept requests to become a managed customer at this time.
+Managed customers work with Microsoft account teams. We invite you to submit a registration form for the features you'd like to use, and we'll verify your eligibility for access. We are not able to accept requests to become a managed customer at this time.
-### What happens if I’m an existing customer and I don’t register?
+### What happens if I'm an existing customer and I don't register?
-Existing customers have until June 30, 2023 to submit a registration form and be approved to continue using Limited Access services after June 30, 2023. We recommend allowing 10 business days for review. Without approved registration, you'll be denied access after June 30, 2023.
+Existing customers have until June 30, 2023 to submit a registration form and be approved to continue using Limited Access services after June 30, 2023. We recommend allowing 10 business days for review. Without an approved application, you will be denied access after June 30, 2023.
The registration forms can be found here:
The registration forms can be found here:
- [Computer Vision](https://aka.ms/facerecognition): Celebrity Recognition feature - [Azure Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features
-### I’m an existing customer who applied for access to Custom Neural Voice or Speaker Recognition, do I have to register to keep using these services?
+### I'm an existing customer who applied for access to Custom Neural Voice or Speaker Recognition, do I have to register to keep using these services?
-We’re always looking for opportunities to improve our Responsible AI program, and Limited Access is an update to our service gating processes. If you've previously applied for and been granted access to Custom Neural Voice or Speaker Recognition, we request that you submit a new registration form to continue using these services beyond June 30, 2023.
+We're always looking for opportunities to improve our Responsible AI program, and Limited Access is an update to our service gating processes. If you've previously applied for and been granted access to Custom Neural Voice or Speaker Recognition, we request that you submit a new registration form to continue using these services beyond June 30, 2023.
-If you were an existing customer using Custom Neural Voice or Speaker Recognition on June 21, 2022, you have until June 30, 2023 to submit a registration form with your selected use case and receive approval to continue using these services after June 30, 2023. We recommend allowing 10 days for registration processing. Existing customers can continue using the service until June 30, 2023, after which they must be approved for access. The registration forms can be found here:
+If you're an existing customer using Custom Neural Voice or Speaker Recognition on June 21, 2022, you have until June 30, 2023 to submit a registration form with your selected use case and receive approval to continue using these services after June 30, 2023. We recommend allowing 10 days for application processing. Existing customers can continue using the service until June 30, 2023, after which they must be approved for access. The registration forms can be found here:
- [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
Search [here](https://azure.microsoft.com/global-infrastructure/services/) for a
Detailed information about supported regions for Custom Neural Voice and Speaker Recognition operations can be found [here](./speech-service/regions.md).
-### What happens to my data if my registration is denied?
+### What happens to my data if my application is denied?
-If you are an existing customer and your registration for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft’s data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.).
+If you're an existing customer and your application for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft's data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.).
## Help and support
cognitive-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/fine-tuning.md
training_file_name = 'training.jsonl'
validation_file_name = 'validation.jsonl' sample_data = [{"prompt": "When I go to the store, I want an", "completion": "apple"},
- {"prompt": "When I go to work, I want a", "completion": "coffe"},
+ {"prompt": "When I go to work, I want a", "completion": "coffee"},
{"prompt": "When I go home, I want a", "completion": "soda"}] print(f'Generating the training file: {training_file_name}')
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/what-are-cognitive-services.md
With Azure and Cognitive Services, you have access to several development option
* Automation and integration tools like Logic Apps and Power Automate. * Deployment options such as Azure Functions and the App Service. * Cognitive Services Docker containers for secure access.
-* Tools like Apache Spark, Azure Databricks, Azure Synapse Analytics, and Azure Kubernetes Service for Big Data scenarios.
+* Tools like Apache Spark, Azure Databricks, Azure Synapse Analytics, and Azure Kubernetes Service for big data scenarios.
To learn more, see [Cognitive Services development options](./cognitive-services-development-options.md).
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
Previously updated : 5/10/2022 Last updated : 6/17/2022 # Disaster recovery guidance for Azure Container Apps
-Azure Container Apps uses [availability zones](../availability-zones/az-overview.md#availability-zones) where offered to provide high-availability protection for your applications and data from data center failures.
+Azure Container Apps uses [availability zones](../availability-zones/az-overview.md#availability-zones) in regions where they're available to provide high-availability protection for your applications and data from data center failures.
Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones.
+When you enable Container Apps' zone redundancy feature, replicas are automatically distributed across the zones in the region. Traffic is load balanced among the replicas. If a zone outage occurs, traffic is automatically routed to the replicas in the remaining zones.
+ In the unlikely event of a full region outage, you have the option of using one of two strategies: - **Manual recovery**: Manually deploy to a new region, or wait for the region to recover, and then manually redeploy all environments and apps. -- **Resilient recovery**: First, deploy your container apps in advance to multiple regions. Next, use Azure Front Door or Azure Traffic Manager to handle incoming requests, pointing traffic to your primary region. Then, should an outage occur, you can redirect traffic away from the affected region. See [Cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md) for more information.
+- **Resilient recovery**: First, deploy your container apps in advance to multiple regions. Next, use Azure Front Door or Azure Traffic Manager to handle incoming requests, pointing traffic to your primary region. Then, should an outage occur, you can redirect traffic away from the affected region. For more information, see [Cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md).
> [!NOTE] > Regardless of which strategy you choose, make sure your deployment configuration files are in source control so you can easily redeploy if necessary.
In the unlikely event of a full region outage, you have the option of using one
Additionally, the following resources can help you create your own disaster recovery plan: - [Failure and disaster recovery for Azure applications](/azure/architecture/reliability/disaster-recovery)-- [Azure resiliency technical guidance](/azure/architecture/checklist/resiliency-per-service)
+- [Azure resiliency technical guidance](/azure/architecture/checklist/resiliency-per-service)
+
+## Set up zone redundancy in your Container Apps environment
+
+To take advantage of availability zones, you must enable zone redundancy when you create the Container Apps environment. The environment must include a virtual network (VNET) with an infrastructure subnet. To ensure proper distribution of replicas, you should configure your app's minimum and maximum replica count with values that are divisible by three. The minimum replica count should be at least three.
+
+### Enable zone redundancy via the Azure portal
+
+To create a container app in an environment with zone redundancy enabled using the Azure portal:
+
+1. Navigate to the Azure portal.
+1. Search for **Container Apps** in the top search box.
+1. Select **Container Apps**.
+1. Select **Create New** in the *Container Apps Environment* field to open the *Create Container Apps Environment* panel.
+1. Enter the environment name.
+1. Select **Enabled** for the *Zone redundancy* field.
+
+Zone redundancy requires a virtual network (VNET) with an infrastructure subnet. You can choose an existing VNET or create a new one. When creating a new VNET, you can accept the values provided for you or customize the settings.
+
+1. Select the **Networking** tab.
+1. To assign a custom VNET name, select **Create New** in the *Virtual Network* field.
+1. To assign a custom infrastructure subnet name, select **Create New** in the *Infrastructure subnet* field.
+1. You can select **Internal** or **External** for the *Virtual IP*.
+1. Select **Create**.
++
+### Enable zone redundancy with the Azure CLI
+
+Create a VNET and infrastructure subnet to include with the Container Apps environment.
+
+When using these commands, replace the `<PLACEHOLDERS>` with your values.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az network vnet create \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --name <VNET_NAME> \
+ --location <LOCATION> \
+ --address-prefix 10.0.0.0/16
+```
+
+```azurecli
+az network vnet subnet create \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --vnet-name <VNET_NAME> \
+ --name infrastructure-subnet \
+ --address-prefixes 10.0.0.0/23
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az network vnet create `
+ --resource-group <RESOURCE_GROUP_NAME> `
+ --name <VNET_NAME> `
+ --location <LOCATION> `
+ --address-prefix 10.0.0.0/16
+```
+
+```powershell
+az network vnet subnet create `
+ --resource-group <RESOURCE_GROUP_NAME> `
+ --vnet-name <VNET_NAME> `
+ --name infrastructure-subnet `
+ --address-prefixes 10.0.0.0/23
+```
+++
+Next, query for the infrastructure subnet ID.
+
+# [Bash](#tab/bash)
+
+```bash
+INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VNET_NAME> --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'`
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$INFRASTRUCTURE_SUBNET=(az network vnet subnet show --resource-group <RESOURCE_GROUP_NAME> --vnet-name <VNET_NAME> --name infrastructure-subnet --query "id" -o tsv)
+```
+++
+Finally, create the environment with the `--zone-redundant` parameter. The location must be the same location used when creating the VNET.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp env create \
+ --name <CONTAINER_APP_ENV_NAME> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --location "<LOCATION>" \
+ --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET \
+ --zone-redundant
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+az containerapp env create `
+ --name <CONTAINER_APP_ENV_NAME> `
+ --resource-group <RESOURCE_GROUP_NAME> `
+ --location "<LOCATION>" `
+ --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET `
+ --zone-redundant
+```
++
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
df = spark.read\
* Spark pools in Azure Synapse will represent these columns as `string`. * SQL serverless pools in Azure Synapse will represent these columns as `varchar(8000)`.
+* Properties with `UNIQUEIDENTIFIER (guid)` types are represented as `string` in analytical store and should be converted to `VARCHAR` in **SQL** or to `string` in **Spark** for correct visualization (see the Spark sketch after this list).
+ * SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. Please consider this information when designing your data architecture and modeling your transactional data. * If you rename a property, in one or many documents, it will be considered a new column. If you execute the same rename in all documents in the collection, all data will be migrated to the new column and the old column will be represented with `NULL` values.
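As a small, hedged illustration of the `UNIQUEIDENTIFIER (guid)` point above, this Spark sketch casts a hypothetical `deviceId` property to a plain string for display; the column name is an example only:

```python
from pyspark.sql.functions import col

# 'df' is assumed to be a DataFrame read from the analytical store with the cosmos.olap format.
display(df.select(col("deviceId").cast("string").alias("deviceId")))
```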
There are two types of schema representation in the analytical store. These type
* Well-defined schema representation, default option for SQL (CORE) API accounts. * Full fidelity schema representation, default option for Azure Cosmos DB API for MongoDB accounts.
-#### Full fidelity schema for SQL API accounts
-
-It's possible to use full fidelity Schema for SQL (Core) API accounts, instead of the default option, by setting the schema type when enabling Synapse Link on a Cosmos DB account for the first time. Here are the considerations about changing the default schema representation type:
-
- * This option is only valid for accounts that **don't** have Synapse Link already enabled.
- * It isn't possible to reset the schema representation type, from well-defined to full fidelity or vice-versa.
- * Currently Azure Cosmos DB API for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts will always have full fidelity schema representation type.
- * Currently this change can't be made through the Azure portal. All database accounts that have Synapse Link enabled by the Azure portal will have the default schema representation type, well-defined schema.
-
-The schema representation type decision must be made at the same time that Synapse Link is enabled on the account, using Azure CLI or PowerShell.
-
- With the Azure CLI:
- ```cli
- az cosmosdb create --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup --subscription MySubscription --analytical-storage-schema-type "FullFidelity" --enable-analytical-storage true
- ```
-
-> [!NOTE]
-> In the command above, replace `create` with `update` for existing accounts.
-
- With the PowerShell:
- ```
- New-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount -EnableAnalyticalStorage true -AnalyticalStorageSchemaType "FullFidelity"
- ```
-
-> [!NOTE]
-> In the command above, replace `New-AzCosmosDBAccount` with `Update-AzCosmosDBAccount` for existing accounts.
-
-- #### Well-defined schema representation The well-defined schema representation creates a simple tabular representation of the schema-agnostic data in the transactional store. The well-defined schema representation has the following considerations:
salary: 1000000
The leaf property `streetNo` within the nested object `address` will be represented in the analytical store schema as a column `address.object.streetNo.int32`. The datatype is added as a suffix to the column. This way, if another document is added to the transactional store where the value of leaf property `streetNo` is "123" (note it's a string), the schema of the analytical store automatically evolves without altering the type of a previously written column. A new column added to the analytical store as `address.object.streetNo.string` where this value of "123" is stored.
-**Data type to suffix map**
+##### Data type to suffix map
Here's a map of all the property data types and their suffix representations in the analytical store:
Here's a map of all the property data types and their suffix representations in
* Spark pools in Azure Synapse will represent these columns as `undefined`. * SQL serverless pools in Azure Synapse will represent these columns as `NULL`.
+##### Working with the MongoDB `_id` field
+
+The MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema preserves its characteristics, which creates a challenge for visualization in Azure Synapse Analytics. For correct visualization, convert the `_id` datatype as follows:
+
+###### Spark
+
+```Python
+from pyspark.sql.types import StructType, StructField, StringType, BinaryType
+
+simpleSchema = StructType([
+    StructField("_id", StructType([StructField("objectId", BinaryType(), True)]), True),
+    StructField("id", StringType(), True)
+])
+
+df = spark.read.format("cosmos.olap")\
+    .option("spark.synapse.linkedService", "<enter linked service name>")\
+    .option("spark.cosmos.container", "<enter container name>")\
+    .schema(simpleSchema)\
+    .load()
+
+df.select("id", "_id.objectId").show()
+```
+###### SQL
+
+```SQL
+SELECT TOP 100 id=CAST(_id as VARBINARY(1000))
+FROM OPENROWSET('CosmosDB',
+                'Your-account;Database=your-database;Key=your-key',
+                HTAP) WITH (_id VARCHAR(1000)) as HTAP
+```
+
+#### Full fidelity schema for SQL API accounts
+
+It's possible to use full fidelity Schema for SQL (Core) API accounts, instead of the default option, by setting the schema type when enabling Synapse Link on a Cosmos DB account for the first time. Here are the considerations about changing the default schema representation type:
+
+ * This option is only valid for accounts that **don't** have Synapse Link already enabled.
+ * It isn't possible to reset the schema representation type, from well-defined to full fidelity or vice-versa.
+ * Currently Azure Cosmos DB API for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts will always have full fidelity schema representation type.
+ * Currently this change can't be made through the Azure portal. All database accounts that have Synapse Link enabled by the Azure portal will have the default schema representation type, well-defined schema.
+
+The schema representation type decision must be made at the same time that Synapse Link is enabled on the account, using Azure CLI or PowerShell.
+
+ With the Azure CLI:
+ ```cli
+ az cosmosdb create --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup --subscription MySubscription --analytical-storage-schema-type "FullFidelity" --enable-analytical-storage true
+ ```
+
+> [!NOTE]
+> In the command above, replace `create` with `update` for existing accounts.
+
+ With the PowerShell:
+ ```
+ New-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount -EnableAnalyticalStorage true -AnalyticalStorageSchemaType "FullFidelity"
+ ```
+
+> [!NOTE]
+> In the command above, replace `New-AzCosmosDBAccount` with `Update-AzCosmosDBAccount` for existing accounts.
+>
## <a id="analytical-ttl"></a> Analytical Time-to-Live (TTL) Analytical TTL (ATTL) indicates how long data should be retained in your analytical store, for a container.
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
cosmos-db Database Transactions Optimistic Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/database-transactions-optimistic-concurrency.md
Optimistic concurrency control allows you to prevent lost updates and deletes. C
Every item stored in an Azure Cosmos container has a system-defined `_etag` property. The value of the `_etag` is automatically generated and updated by the server every time the item is updated. `_etag` can be used with the client-supplied `if-match` request header to allow the server to decide whether an item can be conditionally updated. If the value of the `if-match` header matches the value of the `_etag` at the server, the item is updated. If the value of the `if-match` request header is no longer current, the server rejects the operation with an "HTTP 412 Precondition failure" response message. The client can then re-fetch the item to acquire the current version of the item on the server, or override the version of the item on the server with its own `_etag` value for the item. In addition, `_etag` can be used with the `if-none-match` header to determine whether a refetch of a resource is needed.
-The item's `_etag` value changes every time the item is updated. For replace item operations, `if-match` must be explicitly expressed as a part of the request options. For an example, see the sample code in [GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L676-L772). `_etag` values are implicitly checked for all written items touched by the stored procedure. If any conflict is detected, the stored procedure will roll back the transaction and throw an exception. With this method, either all or no writes within the stored procedure are applied atomically. This is a signal to the application to reapply updates and retry the original client request.
+The item's `_etag` value changes every time the item is updated. For replace item operations, `if-match` must be explicitly expressed as a part of the request options. For an example, see the sample code in [GitHub](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ItemManagement/Program.cs#L791-L887). `_etag` values are implicitly checked for all written items touched by the stored procedure. If any conflict is detected, the stored procedure will roll back the transaction and throw an exception. With this method, either all or no writes within the stored procedure are applied atomically. This is a signal to the application to reapply updates and retry the original client request.
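As an illustration only (not the referenced .NET sample), here's a minimal sketch of a conditional replace with the Python SDK, assuming a recent `azure-cosmos` package that accepts the `etag` and `match_condition` keyword arguments; the connection values and item IDs are placeholders:

```Python
from azure.core import MatchConditions
from azure.cosmos import CosmosClient

# Placeholder connection values; replace with your own.
client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
container = client.get_database_client("sample-database").get_container_client("sample-container")

item = container.read_item(item="item-id", partition_key="item-id")
item["status"] = "updated"

# The replace succeeds only if the server-side _etag still matches the one we read;
# otherwise the service rejects the request with HTTP 412 and the SDK raises an error.
container.replace_item(
    item=item["id"],
    body=item,
    etag=item["_etag"],
    match_condition=MatchConditions.IfNotModified,
)
```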
### Optimistic concurrency control and global distribution
cosmos-db Sql Api Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-python-samples.md
The [database_management.py](https://github.com/Azure/azure-sdk-for-python/blob/
| Task | API reference | | | |
-| [Create a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L48-L56) |CosmosClient.create_database|
-| [Read a database by ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L59-L67) |CosmosClient.get_database_client|
-| [Query the databases](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L32-L67) |CosmosClient.query_databases|
-| [List databases for an account](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L70-L81) |CosmosClient.list_databases|
-| [Delete a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L84-L93) |CosmosClient.delete_database|
+| [Create a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L53-L62) |CosmosClient.create_database|
+| [Read a database by ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L64-L73) |CosmosClient.get_database_client|
+| [Query the databases](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L37-L50) |CosmosClient.query_databases|
+| [List databases for an account](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L76-L87) |CosmosClient.list_databases|
+| [Delete a database](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/database_management.py#L90-L99) |CosmosClient.delete_database|
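For orientation, a minimal sketch of these database operations with the `azure-cosmos` Python SDK might look like the following (the endpoint, key, and database name are placeholders, and error handling is kept minimal):

```Python
from azure.cosmos import CosmosClient, exceptions

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")

# Create the database if it doesn't exist, list databases, then delete it.
database = client.create_database_if_not_exists(id="sample-database")

for db in client.list_databases():
    print(db["id"])

try:
    client.delete_database("sample-database")
except exceptions.CosmosResourceNotFoundError:
    print("Database was already deleted.")
```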
## Container examples
The [item_management.py](https://github.com/Azure/azure-sdk-for-python/blob/mast
| Task | API reference | | | |
-| [Create items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L26-L38) |container.create_item |
-| [Read an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L41-L49) |container.read_item |
-| [Read all the items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L52-L63) |container.read_all_items |
-| [Query an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L66-L78) |container.query_items |
-| [Replace an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L81-L88) |container.replace_items |
-| [Upsert an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L91-L98) |container.upsert_item |
-| [Delete an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L101-L106) |container.delete_item |
+| [Create items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L31-L43) |container.create_item |
+| [Read an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L46-L54) |container.read_item |
+| [Read all the items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L57-L68) |container.read_all_items |
+| [Query an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L71-L83) |container.query_items |
+| [Replace an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L86-L93) |container.replace_items |
+| [Upsert an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L95-L103) |container.upsert_item |
+| [Delete an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L106-L111) |container.delete_item |
| [Get the change feed of items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/change_feed_management.py) |container.query_items_change_feed | ## Indexing examples
The [index_management.py](https://github.com/Azure/azure-sdk-for-python/blob/mas
| Task | API reference | | | |
-| [Exclude a specific item from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L145-L201) | documents.IndexingDirective.Exclude|
-| [Use manual indexing with specific items indexed](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L204-L263) | documents.IndexingDirective.Include |
-| [Exclude paths from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L266-L336) |Define paths to exclude in `IndexingPolicy` property |
-| [Use range indexes on strings](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L401-L485) | Define indexing policy with range indexes on string data type. `'kind': documents.IndexKind.Range`, `'dataType': documents.DataType.String`|
-| [Perform an index transformation](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L488-L544) |database.replace_container (use the updated indexing policy)|
-| [Use scans when only hash index exists on the path](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L339-L398) | set the `enable_scan_in_query=True` and `enable_cross_partition_query=True` when querying the items |
+| [Exclude a specific item from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L149-L205) | documents.IndexingDirective.Exclude|
+| [Use manual indexing with specific items indexed](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L208-L267) | documents.IndexingDirective.Include |
+| [Exclude paths from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L270-L340) |Define paths to exclude in `IndexingPolicy` property |
+| [Use range indexes on strings](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L405-L490) | Define indexing policy with range indexes on string data type. `'kind': documents.IndexKind.Range`, `'dataType': documents.DataType.String`|
+| [Perform an index transformation](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L492-L548) |database.replace_container (use the updated indexing policy)|
+| [Use scans when only hash index exists on the path](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L343-L402) | set the `enable_scan_in_query=True` and `enable_cross_partition_query=True` when querying the items |
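As a rough sketch of the indexing-policy shape these samples work with, the following creates a container that excludes one subtree from indexing, assuming the `azure-cosmos` Python SDK (names and paths are placeholders):

```Python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
database = client.create_database_if_not_exists(id="sample-database")

# Index every path except the /metadata/* subtree.
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": "/metadata/*"}],
}

container = database.create_container_if_not_exists(
    id="indexed-items",
    partition_key=PartitionKey(path="/id"),
    indexing_policy=indexing_policy,
)
```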
## Next steps Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Sql Query From https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-from.md
FROM <from_specification>
Specifies a data source, with or without an alias. If alias is not specified, it will be inferred from the `<container_expression>` using following rules: -- If the expression is a container_name, then container_name will be used as an alias.
+ - If the expression is a container_name, then container_name will be used as an alias.
-- If the expression is `<container_expression>`, then property_name, then property_name will be used as an alias. If the expression is a container_name, then container_name will be used as an alias.
+ - If the expression is `<container_expression>.property_name`, then property_name will be used as an alias. If the expression is a container_name, then container_name will be used as an alias.
- AS `input_alias`
FROM <from_specification>
## Remarks
-All aliases provided or inferred in the `<from_source>(`s) must be unique. The Syntax `<container_expression>.`property_name is the same as `<container_expression>' ['"property_name"']'`. However, the latter syntax can be used if a property name contains a non-identifier character.
+All aliases provided or inferred in the `<from_source>`(s) must be unique. The syntax `<container_expression> '.' property_name` is the same as `<container_expression> '[' "property_name" ']'`. However, the latter syntax can be used if a property name contains a non-identifier character.
### Handling missing properties, missing array elements, and undefined values
cost-management-billing Understand Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-usage.md
tags: billing
Previously updated : 10/11/2021 Last updated : 08/17/2022
The list covers pay-as-you-go (PAYG), Enterprise Agreement (EA), and Microsoft C
Term | Account type | Description | | AccountName | EA, PAYG | Display name of the EA enrollment account or PAYG billing account.
-AccountOwnerId<sup>1</sup> | EA, PAYG | Unique identifier for the EA enrollment account or PAYG billing account.
+AccountOwnerId¹ | EA, PAYG | Unique identifier for the EA enrollment account or PAYG billing account.
AdditionalInfo | All | Service-specific metadata. For example, an image type for a virtual machine.
-BillingAccountId<sup>1</sup> | All | Unique identifier for the root billing account.
+BillingAccountId¹ | All | Unique identifier for the root billing account.
BillingAccountName | All | Name of the billing account. BillingCurrency | All | Currency associated with the billing account. BillingPeriod | EA, PAYG | The billing period of the charge. BillingPeriodEndDate | All | The end date of the billing period. BillingPeriodStartDate | All | The start date of the billing period.
-BillingProfileId<sup>1</sup> | All | Unique identifier of the EA enrollment, PAYG subscription, MCA billing profile, or AWS consolidated account.
+BillingProfileId¹ | All | Unique identifier of the EA enrollment, PAYG subscription, MCA billing profile, or AWS consolidated account.
BillingProfileName | All | Name of the EA enrollment, PAYG subscription, MCA billing profile, or AWS consolidated account. ChargeType | All | Indicates whether the charge represents usage (**Usage**), a purchase (**Purchase**), or a refund (**Refund**). ConsumedService | All | Name of the service the charge is associated with.
-CostCenter<sup>1</sup> | EA, MCA | The cost center defined for the subscription for tracking costs (only available in open billing periods for MCA accounts).
+CostCenter¹ | EA, MCA | The cost center defined for the subscription for tracking costs (only available in open billing periods for MCA accounts).
Cost | EA, PAYG | See CostInBillingCurrency. CostInBillingCurrency | MCA | Cost of the charge in the billing currency before credits or taxes. CostInPricingCurrency | MCA | Cost of the charge in the pricing currency before credits or taxes. Currency | EA, PAYG | See BillingCurrency.
-Date<sup>1</sup> | All | The usage or purchase date of the charge.
+Date¹ | All | The usage or purchase date of the charge.
EffectivePrice | All | Blended unit price for the period. Blended prices average out any fluctuations in the unit price, like graduated tiering, which lowers the price as quantity increases over time. ExchangeRateDate | MCA | Date the exchange rate was established. ExchangeRatePricingToBilling | MCA | Exchange rate used to convert the cost in the pricing currency to the billing currency. Frequency | All | Indicates whether a charge is expected to repeat. Charges can either happen once (**OneTime**), repeat on a monthly or yearly basis (**Recurring**), or be based on usage (**UsageBased**). InvoiceId | PAYG, MCA | The unique document ID listed on the invoice PDF. InvoiceSection | MCA | See InvoiceSectionName.
-InvoiceSectionId<sup>1</sup> | EA, MCA | Unique identifier for the EA department or MCA invoice section.
+InvoiceSectionId¹ | MCA | Unique identifier for the MCA invoice section.
InvoiceSectionName | EA, MCA | Name of the EA department or MCA invoice section. IsAzureCreditEligible | All | Indicates if the charge is eligible to be paid for using Azure credits (Values: True, False). Location | MCA | Datacenter location where the resource is running. MeterCategory | All | Name of the classification category for the meter. For example, *Cloud services* and *Networking*.
-MeterId<sup>1</sup> | All | The unique identifier for the meter.
+MeterId¹ | All | The unique identifier for the meter.
MeterName | All | The name of the meter. MeterRegion | All | Name of the datacenter location for services priced based on location. See Location. MeterSubCategory | All | Name of the meter subclassification category.
-OfferId<sup>1</sup> | All | Name of the offer purchased.
+OfferId¹ | All | Name of the offer purchased.
PayGPrice | All | Retail price for the resource.
-PartNumber<sup>1</sup> | EA, PAYG | Identifier used to get specific meter pricing.
+PartNumber¹ | EA, PAYG | Identifier used to get specific meter pricing.
PlanName | EA, PAYG | Marketplace plan name. PreviousInvoiceId | MCA | Reference to an original invoice if this line item is a refund. PricingCurrency | MCA | Currency used when rating based on negotiated prices. PricingModel | All | Identifier that indicates how the meter is priced. (Values: On Demand, Reservation, Spot) Product | All | Name of the product.
-ProductId<sup>1</sup> | MCA | Unique identifier for the product.
+ProductId¹ | MCA | Unique identifier for the product.
ProductOrderId | All | Unique identifier for the product order. ProductOrderName | All | Unique name for the product order. PublisherName | All | Publisher for Marketplace services.
Quantity | All | The number of units purchased or consumed.
ReservationId | EA, MCA | Unique identifier for the purchased reservation instance. ReservationName | EA, MCA | Name of the purchased reservation instance. ResourceGroup | All | Name of the [resource group](../../azure-resource-manager/management/overview.md) the resource is in. Not all charges come from resources deployed to resource groups. Charges that do not have a resource group will be shown as null/empty, **Others**, or **Not applicable**.
-ResourceId<sup>1</sup> | All | Unique identifier of the [Azure Resource Manager](/rest/api/resources/resources) resource.
+ResourceId¹ | All | Unique identifier of the [Azure Resource Manager](/rest/api/resources/resources) resource.
ResourceLocation | All | Datacenter location where the resource is running. See Location. ResourceName | EA, PAYG | Name of the resource. Not all charges come from deployed resources. Charges that do not have a resource type will be shown as null/empty, **Others**, or **Not applicable**. ResourceType | MCA | Type of resource instance. Not all charges come from deployed resources. Charges that do not have a resource type will be shown as null/empty, **Others**, or **Not applicable**.
ServiceInfo1 | All | Service-specific metadata.
ServiceInfo2 | All | Legacy field with optional service-specific metadata. ServicePeriodEndDate | MCA | The end date of the rating period that defined and locked pricing for the consumed or purchased service. ServicePeriodStartDate | MCA | The start date of the rating period that defined and locked pricing for the consumed or purchased service.
-SubscriptionId<sup>1</sup> | All | Unique identifier for the Azure subscription.
+SubscriptionId¹ | All | Unique identifier for the Azure subscription.
SubscriptionName | All | Name of the Azure subscription.
-Tags<sup>1</sup> | All | Tags assigned to the resource. Doesn't include resource group tags. Can be used to group or distribute costs for internal chargeback. For more information, see [Organize your Azure resources with tags](https://azure.microsoft.com/updates/organize-your-azure-resources-with-tags/).
+Tags¹ | All | Tags assigned to the resource. Doesn't include resource group tags. Can be used to group or distribute costs for internal chargeback. For more information, see [Organize your Azure resources with tags](https://azure.microsoft.com/updates/organize-your-azure-resources-with-tags/).
Term | All | Displays the term for the validity of the offer. For example: In case of reserved instances, it displays 12 months as the Term. For one-time purchases or recurring purchases, Term is 1 month (SaaS, Marketplace Support). This is not applicable for Azure consumption. UnitOfMeasure | All | The unit of measure for billing for the service. For example, compute services are billed per hour. UnitPrice | EA, PAYG | The price per unit for the charge.
-_<sup>**1**</sup> Fields used to build a unique ID for a single cost record._
+**¹** Fields used to build a unique ID for a single cost record.
Note some fields may differ in casing and spacing between account types. Older versions of pay-as-you-go usage files have separate sections for the statement and daily usage.
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 08/01/2022 Last updated : 08/17/2022 # Overview of Microsoft Defender for Containers
You can learn more about [Kubernetes data plane hardening](kubernetes-workload-p
### Scanning images in ACR registries
-Defender for Containers includes an integrated vulnerability scanner for scanning images in Azure Container Registry registries. The vulnerability scanner runs on an image:
+Defender for Containers offers vulnerability scanning for images in Azure Container Registries (ACRs). Triggers for scanning an image include:
- - When you push the image to your registry
- - Weekly on any image that was pulled within the last 30
- - When you import the image to your Azure Container Registry
- - Continuously in specific situations
+- **On push**: When an image is pushed into a registry for storage, Defender for Containers automatically scans the image.
+
+- **Recently pulled**: Weekly scans of images that have been pulled in the last 30 days.
+
+- **On import**: When you import images into an ACR, Defender for Containers scans any supported images.
Learn more in [Vulnerability assessment](defender-for-containers-usage.md).
Learn more in [Vulnerability assessment](defender-for-containers-usage.md).
### View vulnerabilities for running images
-The recommendation `Running container images should have vulnerability findings resolved` shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender agent. Images that are deployed from a non-ACR registry, will appear under the Not applicable tab.
+Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
+
+Defender for Cloud provides this recommendation by correlating the inventory of your running containers, which is collected by the Defender agent installed on your AKS clusters, with the vulnerability assessment scans of images stored in ACR. The recommendation then shows your running containers with the vulnerabilities associated with the images used by each container, and provides you with vulnerability reports and remediation steps.
+
+> [!NOTE]
+> **Windows containers**: There is no Defender agent for Windows containers. The Defender agent is deployed to a Linux node running in the cluster to retrieve the running container inventory for your Windows nodes.
+>
+> Images that aren't pulled from ACR for deployment in AKS won't be checked and will appear under the **Not applicable** tab.
+>
+> Images that have been deleted from their ACR registry, but are still running, will only be reported on for 30 days after their last scan occurred in ACR.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable." lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png":::
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 08/14/2022 Last updated : 08/17/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in August include:
+- [Vulnerabilities for running images are now visible with Defender for Container on your Windows containers](#vulnerabilities-for-running-images-are-now-visible-with-defender-for-container-on-your-windows-containers)
- [Auto-deployment of Azure Monitor Agent (Preview)](#auto-deployment-of-azure-monitor-agent-preview)
+### Vulnerabilities for running images are now visible with Defender for Container on your Windows containers
+
+Defender for Containers now allows you to view vulnerabilities for your running Windows containers.
+
+When vulnerabilities are detected, Defender for Cloud shows the detected issues, and generates the following security recommendation [Running container images should have vulnerability findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false).
+
+Learn more about [viewing your vulnerabilities for running images](defender-for-containers-introduction.md#view-vulnerabilities-for-running-images).
+ ### Auto-deployment of Azure Monitor Agent (Preview) The [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features, insights, and other services, such as Microsoft Sentinel and Microsoft Defender for Cloud.
Updates in July include:
- [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) - [Deprecate API App policies for App Service](#deprecate-api-app-policies-for-app-service)
-### General availability (GA) of the Cloud-native security agent for Kubernetes runtime protection
+### General availability (GA) of the cloud-native security agent for Kubernetes runtime protection
-We're excited to share that the Cloud-native security agent for Kubernetes runtime protection is now generally available (GA)!
+We're excited to share that the cloud-native security agent for Kubernetes runtime protection is now generally available (GA)!
The production deployments of Kubernetes clusters continue to grow as customers continue to containerize their applications. To assist with this growth, the Defender for Containers team has developed a cloud-native Kubernetes oriented security agent.
You can now also group your alerts by resource group to view all of your alerts
Until now, the integration with Microsoft Defender for Endpoint (MDE) included automatic installation of the new [MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution&preserve-view=true) for machines (Azure subscriptions and multicloud connectors) with Defender for Servers Plan 1 enabled, and for multicloud connectors with Defender for Servers Plan 2 enabled. Plan 2 for Azure subscriptions enabled the unified solution for Linux machines and Windows 2019 and 2022 servers only. Windows servers 2012R2 and 2016 used the MDE legacy solution dependent on Log Analytics agent.
-Now, the new unified solution is available for all machines in both plans, for both Azure subscriptions and multi-cloud connectors. For Azure subscriptions with Servers Plan 2 that enabled MDE integration *after* June 20th 2022, the unified solution is enabled by default for all machines Azure subscriptions with the Defender for Servers Plan 2 enabled with MDE integration *before* June 20th 2022 can now enable unified solution installation for Windows servers 2012R2 and 2016 through the dedicated button in the Integrations page:
+Now, the new unified solution is available for all machines in both plans, for both Azure subscriptions and multicloud connectors. For Azure subscriptions with Servers Plan 2 that enabled MDE integration *after* June 20th 2022, the unified solution is enabled by default for all machines. Azure subscriptions with the Defender for Servers Plan 2 enabled with MDE integration *before* June 20th 2022 can now enable unified solution installation for Windows servers 2012R2 and 2016 through the dedicated button in the Integrations page:
:::image type="content" source="media/integration-defender-for-endpoint/enable-unified-solution.png" alt-text="The integration between Microsoft Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint, is enabled." lightbox="media/integration-defender-for-endpoint/enable-unified-solution.png":::
All of Microsoft's Defender for IoT device alerts are no longer visible in Micro
### Posture management and threat protection for AWS and GCP released for general availability (GA) -- **Defender for Cloud's CSPM features** extend to your AWS and GCP resources. This agentless plan assesses your multi cloud resources according to cloud-specific security recommendations that are included in your secure score. The resources are assessed for compliance using the built-in standards. Defender for Cloud's asset inventory page is a multicloud enabled feature that allows you to manage your AWS resources alongside your Azure resources.
+- **Defender for Cloud's CSPM features** extend to your AWS and GCP resources. This agentless plan assesses your multicloud resources according to cloud-specific security recommendations that are included in your secure score. The resources are assessed for compliance using the built-in standards. Defender for Cloud's asset inventory page is a multicloud enabled feature that allows you to manage your AWS resources alongside your Azure resources.
- **Microsoft Defender for Servers** brings threat detection and advanced defenses to your compute instances in AWS and GCP. The Defender for Servers plan includes an integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more. Learn about all of the [supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities allow you to easily connect any existing or new compute instances discovered in your environment.
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
This article describes an OT sensor deployment on a virtual appliance using Micr
|**Physical specifications** | Virtual Machine | |**Status** | Supported |
+> [!IMPORTANT]
+> Versions 22.2.x of the sensor are incompatible with Hyper-V. Until the issue has been resolved, we recommend using version 22.1.7.
## Prerequisites
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
The Defender for IoT architecture uses on-premises sensors and management server
Fixes and new functionality are applied to each new version and are not applied to older versions. -- **Software update packages include new functionality and security patches**. Urgent, high-risk security updates are applied in minor versions that may be released throughout the quarter.
+- **Software update packages include new functionality and security patches**. Urgent, high-risk security updates are applied in minor versions that may be released throughout the quarter.
- **Features available from the Azure portal that are dependent on a specific sensor version** are only available for sensors that have the required version installed, or higher.
For more information, see the [Microsoft Security Development Lifecycle practice
| Version | Date released | End support date | |--|--|--|
-| 22.2.4 | 07/2022 | 04/2023 |
-| 22.2.3 | 07/2022 | 04/2023 |
+| 22.2.4 | 07/2022 <br> There's a known compatibility issue with Hyper-V; use version 22.1.7 instead | 04/2023 |
+| 22.2.3 | 07/2022 <br> There's a known compatibility issue with Hyper-V; use version 22.1.7 instead | 04/2023 |
| 22.1.7 | 07/2022 | 04/2023 | | 22.1.6 | 06/2022 | 10/2022 | | 22.1.5 | 06/2022 | 10/2022 |
dev-box How To Configure Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md
Follow these steps to manually assign each role:
1. On the Members tab, select **+ Select Members**.
-1. In Select members, search for and select **Cloud PC**, and then select **Select**.
+1. In Select members, search for *Windows 365*, select **Windows 365** from the list, and then select **Select**.
1. On the Members tab, select **Next**.
dev-box Quickstart Configure Dev Box Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-project.md
If you don't have an available dev center with an existing dev box definition an
:::image type="content" source="./media/quickstart-configure-dev-box-projects/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
-4. Select **Dev box pools** and then select **+ Add**.
+4. Select **Dev box pools** and then select **+ Create**.
:::image type="content" source="./media/quickstart-configure-dev-box-projects/dev-box-pool-grid-empty.png" alt-text="Screenshot of the list of dev box pools within a project. The list is empty.":::
digital-twins Concepts Ontologies Adopt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-adopt.md
Microsoft has partnered with domain experts to create DTDL model sets based on i
| Industry | Ontology repository | Description | Learn more | | | | | |
-| Smart buildings | [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building) | Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a Swedish consortium of real estate owners, software vendors, and research institutions.<br><br>This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/https://docsupdatetracker.net/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it. | You can read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and embedded video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794). |
+| Smart buildings | [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building) | Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a consortium of real estate owners, software vendors, and research institutions.<br><br>This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/https://docsupdatetracker.net/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it. | You can read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and embedded video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794). |
| Smart cities | [Digital Twins Definition Language (DTDL) ontology for Smart Cities](https://github.com/Azure/opendigitaltwins-smartcities) | Microsoft has collaborated with [Open Agile Smart Cities (OASC)](https://oascities.org/) and [Sirus](https://sirus.be/) to provide a DTDL-based ontology for smart cities, starting with [ETSI CIM NGSI-LD](https://www.etsi.org/committee/cim). | You can also read more about the partnerships and approach for smart cities in the following blog post and embedded video: [Smart Cities Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/smart-cities-ontology-for-digital-twins/ba-p/2166585). | | Energy grids | [Digital Twins Definition Language (DTDL) ontology for Energy Grid](https://github.com/Azure/opendigitaltwins-energygrid/) | This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases like monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance. Additionally, the ontology can be used to enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market. | You can also read more about the partnerships and approach for energy grids in the following blog post: [Energy Grid Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/energy-grid-ontology-for-digital-twins-is-now-available/ba-p/2325134). |
digital-twins Concepts Ontologies Convert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-convert.md
Most [ontologies](concepts-ontologies.md) are based on semantic web standards su
To use a model with Azure Digital Twins, it must be in DTDL format. This article describes general design guidance in the form of a conversion pattern for converting RDF-based models to DTDL so that they can be used with Azure Digital Twins.
-The article also contains [sample converter code](#converter-samples) for RDF and OWL converters, which can be extended for other schemas in the building industry.
+The article also contains [sample converter code](#converter-samples) for RDF and OWL converters, which can be extended for other schemas in the building industry.
+
+Although the examples in this article are building-focused, you can apply similar processes to standard ontologies across different industries to convert them to DTDL as well.
## Conversion pattern
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
Check [pricing details](https://azure.microsoft.com/pricing/details/expressroute
### If I pay for an ExpressRoute circuit of a given bandwidth, do I have this bandwidth allocated for ingress and egress traffic separately?
-Yes, the ExpressRoute circuit bandwidth is duplex. For example, if you purchase a 200 mbps ExpressRoute circuit, you are procuring 200 mbps for ingress traffic and 200 mbps for egress traffic.
+Yes, the ExpressRoute circuit bandwidth is duplex. For example, if you purchase a 200-Mbps ExpressRoute circuit, you're procuring 200 Mbps for ingress traffic and 200 Mbps for egress traffic.
### If I pay for an ExpressRoute circuit of a given bandwidth, does the private connection I purchase from my network service provider have to be the same speed?
No. You can purchase a private connection of any speed from your service provide
### If I pay for an ExpressRoute circuit of a given bandwidth, do I have the ability to use more than my procured bandwidth?
-Yes, you may use up to two times the bandwidth limit you procured by using the bandwidth available on the secondary connection of your ExpressRoute circuit. The built-in redundancy of your circuit is configured using primary and secondary connections, each of the procured bandwidth, to two Microsoft Enterprise Edge routers (MSEEs). The bandwidth available through your secondary connection can be used for additional traffic if necessary. Because the secondary connection is meant for redundancy, however, it is not guaranteed and should not be used for additional traffic for a sustained period of time. To learn more about how to use both connections to transmit traffic, see [Use AS PATH prepending](./expressroute-optimize-routing.md#solution-use-as-path-prepending).
+Yes, you may use up to two times the bandwidth limit you procured by using the bandwidth available on the secondary connection of your ExpressRoute circuit. The built-in redundancy of your circuit is configured using primary and secondary connections, each of the procured bandwidth, to two Microsoft Enterprise Edge routers (MSEEs). The bandwidth available through your secondary connection can be used for more traffic if necessary. Because the secondary connection is meant for redundancy, however, it isn't guaranteed and shouldn't be used for additional traffic for a sustained period of time. To learn more about how to use both connections to transmit traffic, see [Use AS PATH prepending](./expressroute-optimize-routing.md#solution-use-as-path-prepending).
-If you plan to use only your primary connection to transmit traffic, the bandwidth for the connection is fixed and attempting to oversubscribe it will result in increased packet drops. If traffic flows through an ExpressRoute Gateway, the bandwidth for the Gateway SKU is fixed and not burstable. For the bandwidth of each Gateway SKU, see [About ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md#aggthroughput).
+If you plan to use only your primary connection to transmit traffic, the bandwidth for the connection is fixed, and attempting to oversubscribe it will result in increased packet drops. If traffic flows through an ExpressRoute Gateway, the bandwidth for the Gateway SKU is fixed and not burstable. For the bandwidth of each Gateway SKU, see [About ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md#aggthroughput).
### Can I use the same private network connection with virtual network and other Azure services simultaneously?
Yes. An ExpressRoute circuit, once set up, allows you to access services within
### How are VNets advertised on ExpressRoute Private Peering?
-The ExpressRoute gateway will advertise the *Address Space(s)* of the Azure VNet, you can't include/exclude at the subnet level. It is always the VNet Address Space that is advertised. Also, if VNet Peering is used and the peered VNet has "Use Remote Gateway" enabled, the Address Space of the peered VNet will also be advertised.
+The ExpressRoute gateway will advertise the *Address Space(s)* of the Azure VNet; you can't include or exclude at the subnet level. It's always the VNet Address Space that is advertised. Also, if VNet Peering is used and the peered VNet has "Use Remote Gateway" enabled, the Address Space of the peered VNet will also be advertised.
### How many prefixes can be advertised from a VNet to on-premises on ExpressRoute Private Peering?
-There is a maximum of 1000 IPv4 prefixes advertised on a single ExpressRoute connection, or through VNet peering using gateway transit. For example, if you have 999 address spaces on a single VNet connected to an ExpressRoute circuit, all 999 of those prefixes will be advertised to on-premises. Alternatively, if you have a VNet enabled to allow gateway transit with 1 address space and 500 spoke VNets enabled using the "Allow Remote Gateway" option, the VNet deployed with the gateway will advertise 501 prefixes to on-premises.
+There's a maximum of 1000 IPv4 prefixes advertised on a single ExpressRoute connection, or through VNet peering using gateway transit. For example, if you have 999 address spaces on a single VNet connected to an ExpressRoute circuit, all 999 of those prefixes will be advertised to on-premises. Alternatively, if you have a VNet enabled to allow gateway transit with 1 address space and 500 spoke VNets enabled using the "Allow Remote Gateway" option, the VNet deployed with the gateway will advertise 501 prefixes to on-premises.
-If you are using a dual-stack circuit, there is a maximum of 100 IPv6 prefixes on a single ExpressRoute connection, or through VNet peering using gateway transit. This in addition to the limits described above.
+If you're using a dual-stack circuit, there's a maximum of 100 IPv6 prefixes on a single ExpressRoute connection, or through VNet peering using gateway transit. This is in addition to the limits described above.
### What happens if I exceed the prefix limit on an ExpressRoute connection?
ExpressRoute supports [three routing domains](expressroute-circuit-peerings.md)
### Microsoft peering
-If your ExpressRoute circuit is enabled for Azure Microsoft peering, you can access the [public IP address ranges](../virtual-network/ip-services/public-ip-addresses.md#public-ip-addresses) used in Azure over the circuit. Azure Microsoft peering will provide access to services currently hosted on Azure (with geo-restrictions depending on your circuit's SKU). To validate availability for a specific service, you can check the documentation for that service to see if there is a reserved range published for that service. Then, look up the IP ranges of the target service and compare with the ranges listed in the [Azure IP Ranges and Service Tags ΓÇô Public Cloud XML file](https://www.microsoft.com/download/details.aspx?id=56519). Alternatively, you can open a support ticket for the service in question for clarification.
+If your ExpressRoute circuit is enabled for Azure Microsoft peering, you can access the [public IP address ranges](../virtual-network/ip-services/public-ip-addresses.md#public-ip-addresses) used in Azure over the circuit. Azure Microsoft peering will provide access to services currently hosted on Azure (with geo-restrictions depending on your circuit's SKU). To validate availability for a specific service, you can check the documentation for that service to see if there's a reserved range published for that service. Then, look up the IP ranges of the target service and compare with the ranges listed in the [Azure IP Ranges and Service Tags – Public Cloud XML file](https://www.microsoft.com/download/details.aspx?id=56519). Alternatively, you can open a support ticket for the service in question for clarification.
**Supported:**
For more information and configuration steps for public peering, see [ExpressRou
### Why I see 'Advertised public prefixes' status as 'Validation needed', while configuring Microsoft peering?
-Microsoft verifies if the specified 'Advertised public prefixes' and 'Peer ASN' (or 'Customer ASN') are assigned to you in the Internet Routing Registry. If you are getting the public prefixes from another entity and if the assignment is not recorded with the routing registry, the automatic validation will not complete and will require manual validation. If the automatic validation fails, you will see the message 'Validation needed'.
+Microsoft verifies if the specified 'Advertised public prefixes' and 'Peer ASN' (or 'Customer ASN') are assigned to you in the Internet Routing Registry. If you're getting the public prefixes from another entity and if the assignment isn't recorded with the routing registry, the automatic validation won't complete, and will require manual validation. If the automatic validation fails, you'll see the message 'Validation needed'.
If you see the message 'Validation needed', collect the document(s) that show the public prefixes are assigned to your organization by the entity that is listed as the owner of the prefixes in the routing registry and submit these documents for manual validation by opening a support ticket as shown below.
Dynamics 365 and Common Data Service (CDS) environments are hosted on Azure and
### Are there limits on the amount of data that I can transfer using ExpressRoute?
-We do not set a limit on the amount of data transfer. Refer to [pricing details](https://azure.microsoft.com/pricing/details/expressroute/) for information on bandwidth rates.
+We don't set a limit on the amount of data transfer. Refer to [pricing details](https://azure.microsoft.com/pricing/details/expressroute/) for information on bandwidth rates.
### What connection speeds are supported by ExpressRoute?
Yes. Each ExpressRoute circuit has a redundant pair of cross connections configu
### Will I lose connectivity if one of my ExpressRoute links fail?
-You will not lose connectivity if one of the cross connections fails. A redundant connection is available to support the load of your network and provide high availability of your ExpressRoute circuit. You can additionally create a circuit in a different peering location to achieve circuit-level resilience.
+You won't lose connectivity if one of the cross connections fails. A redundant connection is available to support the load of your network and provide high availability of your ExpressRoute circuit. You can additionally create a circuit in a different peering location to achieve circuit-level resilience.
### How do I implement redundancy on private peering?
-Multiple ExpressRoute circuits from different peering locations or up to four connections from the same peering location can be connected to the same virtual network to provide high-availability in the case that a single circuit becomes unavailable. You can then [assign higher weights](./expressroute-optimize-routing.md#solution-assign-a-high-weight-to-local-connection) to one of the local connections to prefer a specific circuit. It is strongly recommended that customers setup at least two ExpressRoute circuits to avoid single points of failure.
+Multiple ExpressRoute circuits from different peering locations or up to four connections from the same peering location can be connected to the same virtual network to provide high availability in case a single circuit becomes unavailable. You can then [assign higher weights](./expressroute-optimize-routing.md#solution-assign-a-high-weight-to-local-connection) to one of the local connections to prefer a specific circuit. It's strongly recommended that customers set up at least two ExpressRoute circuits to avoid single points of failure.
See [here](./designing-for-high-availability-with-expressroute.md) for designing for high availability and [here](./designing-for-disaster-recovery-with-expressroute-privatepeering.md) for designing for disaster recovery. ### How I do implement redundancy on Microsoft peering?
-It is highly recommended when customers are using Microsoft peering to access Azure public services like Azure Storage or Azure SQL, as well as customers that are using Microsoft peering for Microsoft 365 that they implement multiple circuits in different peering locations to avoid single points of failure. Customers can either advertise the same prefix on both circuits and use [AS PATH prepending](./expressroute-optimize-routing.md#solution-use-as-path-prepending) or advertise different prefixes to determine path from on-premises.
+It's highly recommended that customers using Microsoft peering to access Azure public services like Azure Storage or Azure SQL, as well as customers using Microsoft peering for Microsoft 365, implement multiple circuits in different peering locations to avoid single points of failure. Customers can either advertise the same prefix on both circuits and use [AS PATH prepending](./expressroute-optimize-routing.md#solution-use-as-path-prepending), or advertise different prefixes to determine the path from on-premises.
See [here](./designing-for-high-availability-with-expressroute.md) for designing for high availability.
You can achieve high availability by connecting up to 4 ExpressRoute circuits in
You must implement the *Local Preference* attribute on your router(s) to ensure that the path from on-premises to Azure is always preferred on your ExpressRoute circuit(s).
-See additional details [here](./expressroute-optimize-routing.md#path-selection-on-microsoft-and-public-peerings) on BGP path selection and common router configurations.
+See more details [here](./expressroute-optimize-routing.md#path-selection-on-microsoft-and-public-peerings) on BGP path selection and common router configurations.
### <a name="onep2plink"></a>If I'm not co-located at a cloud exchange and my service provider offers point-to-point connection, do I need to order two physical connections between my on-premises network and Microsoft?
If your service provider can establish two Ethernet virtual circuits over the ph
### Can I extend one of my VLANs to Azure using ExpressRoute?
-No. We do not support layer 2 connectivity extensions into Azure.
+No. We don't support layer 2 connectivity extensions into Azure.
-### Can I have more than one ExpressRoute circuit in my subscription?
+### Can I have more than one ExpressRoute circuit in my subscription?
Yes. You can have more than one ExpressRoute circuit in your subscription. The default limit is set to 50. You can contact Microsoft Support to increase the limit, if needed.
Yes. You can have up to 10 virtual networks connections on a standard ExpressRou
### I have multiple Azure subscriptions that contain virtual networks. Can I connect virtual networks that are in separate subscriptions to a single ExpressRoute circuit?
-Yes. You can link up to 10 virtual networks in the same subscription as the circuit or different subscriptions using a single ExpressRoute circuit. This limit can be increased by enabling the ExpressRoute premium feature. Note that connectivity and bandwidth charges for the dedicated circuit will be applied to the ExpressRoute circuit owner; all virtual networks share the same bandwidth.
+Yes. You can link up to 10 virtual networks in the same subscription as the circuit or different subscriptions using a single ExpressRoute circuit. This limit can be increased by enabling the ExpressRoute premium feature. Connectivity and bandwidth charges for the dedicated circuit will be applied to the ExpressRoute circuit owner; all virtual networks share the same bandwidth.
For more information, see [Sharing an ExpressRoute circuit across multiple subscriptions](expressroute-howto-linkvnet-arm.md). ### I have multiple Azure subscriptions associated to different Azure Active Directory tenants or Enterprise Agreement enrollments. Can I connect virtual networks that are in separate tenants and enrollments to a single ExpressRoute circuit not in the same tenant or enrollment?
-Yes. ExpressRoute authorizations can span subscription, tenant, and enrollment boundaries with no additional configuration required. Note that connectivity and bandwidth charges for the dedicated circuit will be applied to the ExpressRoute circuit owner; all virtual networks share the same bandwidth.
+Yes. ExpressRoute authorizations can span subscription, tenant, and enrollment boundaries with no extra configuration required. Connectivity and bandwidth charges for the dedicated circuit will be applied to the ExpressRoute circuit owner; all virtual networks share the same bandwidth.
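As a hedged sketch of the authorization flow (all names are placeholders): the circuit owner creates an authorization and shares the circuit resource ID and authorization key, and the virtual network owner in the other subscription or tenant redeems the key when creating the connection.

```PowerShell
# Circuit owner: create an authorization on the circuit and read back its key.
$circuit = Get-AzExpressRouteCircuit -Name "my-circuit" -ResourceGroupName "circuit-rg"
Add-AzExpressRouteCircuitAuthorization -ExpressRouteCircuit $circuit -Name "vnet-owner-auth"
$circuit = Set-AzExpressRouteCircuit -ExpressRouteCircuit $circuit
$auth = Get-AzExpressRouteCircuitAuthorization -ExpressRouteCircuit $circuit -Name "vnet-owner-auth"
# Share $circuit.Id and $auth.AuthorizationKey with the virtual network owner.

# Virtual network owner (different subscription or tenant): redeem the key.
$gateway = Get-AzVirtualNetworkGateway -Name "er-gateway" -ResourceGroupName "vnet-rg"
New-AzVirtualNetworkGatewayConnection -Name "cross-subscription-connection" `
    -ResourceGroupName "vnet-rg" -Location "westus" `
    -VirtualNetworkGateway1 $gateway `
    -PeerId $circuit.Id `
    -ConnectionType ExpressRoute `
    -AuthorizationKey $auth.AuthorizationKey
```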
For more information, see [Sharing an ExpressRoute circuit across multiple subscriptions](expressroute-howto-linkvnet-arm.md). ### Are virtual networks connected to the same circuit isolated from each other?
-No. From a routing perspective, all virtual networks linked to the same ExpressRoute circuit are part of the same routing domain and are not isolated from each other. If you need route isolation, you need to create a separate ExpressRoute circuit.
+No. From a routing perspective, all virtual networks linked to the same ExpressRoute circuit are part of the same routing domain and aren't isolated from each other. If you need route isolation, you need to create a separate ExpressRoute circuit.
-### Can I have one virtual network connected to more than one ExpressRoute circuit?
+### Can I have one virtual network connected to more than one ExpressRoute circuit?
Yes. You can link a single virtual network with up to four ExpressRoute circuits in the same location or up to 16 ExpressRoute circuits in different peering locations. ### Can I access the Internet from my virtual networks connected to ExpressRoute circuits?
-Yes. If you have not advertised default routes (0.0.0.0/0) or Internet route prefixes through the BGP session, you can connect to the Internet from a virtual network linked to an ExpressRoute circuit.
+Yes. If you haven't advertised default routes (0.0.0.0/0) or Internet route prefixes through the BGP session, you can connect to the Internet from a virtual network linked to an ExpressRoute circuit.
### Can I block Internet connectivity to virtual networks connected to ExpressRoute circuits?
Yes. You can advertise default routes (0.0.0.0/0) to block all Internet connecti
> [!NOTE] > If the advertised route of 0.0.0.0/0 is withdrawn from the routes advertised (for example, due to an outage or misconfiguration), Azure will provide a [system route](../virtual-network/virtual-networks-udr-overview.md#system-routes) to resources on the connected Virtual Network to provide connectivity to the internet. To ensure egress traffic to the internet is blocked, it is recommended to place a Network Security Group on all subnets with an Outbound Deny rule for internet traffic.
-If you advertise default routes, we force traffic to services offered over Microsoft peering (such as Azure storage and SQL DB) back to your premises. You will have to configure your routers to return traffic to Azure through the Microsoft peering path or over the Internet. If you've enabled a service endpoint for the service, the traffic to the service is not forced to your premises. The traffic remains within the Azure backbone network. To learn more about service endpoints, see [Virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md?toc=%2fazure%2fexpressroute%2ftoc.json)
+If you advertise default routes, we force traffic to services offered over Microsoft peering (such as Azure Storage and SQL DB) back to your premises. You'll have to configure your routers to return traffic to Azure through the Microsoft peering path or over the Internet. If you've enabled a service endpoint for the service, the traffic to the service isn't forced to your premises. The traffic remains within the Azure backbone network. To learn more about service endpoints, see [Virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md?toc=%2fazure%2fexpressroute%2ftoc.json).
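For the outbound deny rule mentioned in the note above, a minimal sketch (the NSG name, rule name, and priority are placeholders) might look like this:

```PowerShell
# Add a rule that denies all outbound traffic to the internet from subnets using this NSG.
$nsg = Get-AzNetworkSecurityGroup -Name "workload-nsg" -ResourceGroupName "my-rg"

Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name "deny-internet-outbound" `
    -Access Deny -Direction Outbound -Priority 4096 `
    -Protocol "*" -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix Internet -DestinationPortRange "*"

Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
```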
### Can virtual networks linked to the same ExpressRoute circuit talk to each other? Yes. Virtual machines deployed in virtual networks connected to the same ExpressRoute circuit can communicate with each other. We recommend setting up [virtual network peering](../virtual-network/virtual-network-peering-overview.md) to facilitate this communication.
-### Can I set up a site-to-site VPN connection to my virtual network in conjunction with ExpressRoute?
+### Can I set up a site-to-site VPN connection to my virtual network along with ExpressRoute?
Yes. ExpressRoute can coexist with site-to-site VPNs. See [Configure ExpressRoute and site-to-site coexisting connections](expressroute-howto-coexist-resource-manager.md).
If you want to enable routing between your branch connected to ExpressRoute and
### Why is there a public IP address associated with the ExpressRoute gateway on a virtual network?
-The public IP address is used for internal management only, and does not constitute a security exposure of your virtual network.
+The public IP address is used for internal management only, and doesn't constitute a security exposure of your virtual network.
### Are there limits on the number of routes I can advertise?
Yes. We accept up to 4000 route prefixes for private peering and 200 for Microso
### Are there restrictions on IP ranges I can advertise over the BGP session?
-We do not accept private prefixes (RFC1918) for the Microsoft peering BGP session. We accept any prefix size (up to /32) on both the Microsoft and the private peering.
+We don't accept private prefixes (RFC1918) for the Microsoft peering BGP session. We accept any prefix size (up to /32) on both the Microsoft and the private peering.
### What happens if I exceed the BGP limits?
-BGP sessions will be dropped. They will be reset once the prefix count goes below the limit.
+BGP sessions will be dropped. They'll be reset once the prefix count goes below the limit.
### What is the ExpressRoute BGP hold time? Can it be adjusted?
-The hold time is 180. The keep-alive messages are sent every 60 seconds. These are fixed settings on the Microsoft side that cannot be changed. It is possible for you to configure different timers, and the BGP session parameters will be negotiated accordingly.
+The hold time is 180 seconds. Keep-alive messages are sent every 60 seconds. These are fixed settings on the Microsoft side that can't be changed. You can configure different timers, and the BGP session parameters will be negotiated accordingly.
### Can I change the bandwidth of an ExpressRoute circuit?
-Yes, you can attempt to increase the bandwidth of your ExpressRoute circuit in the Azure portal, or by using PowerShell. If there is capacity available on the physical port on which your circuit was created, your change succeeds.
+Yes, you can attempt to increase the bandwidth of your ExpressRoute circuit in the Azure portal, or by using PowerShell. If there's capacity available on the physical port on which your circuit was created, your change succeeds.
-If your change fails, it means either there isn't enough capacity left on the current port and you need to create a new ExpressRoute circuit with the higher bandwidth, or that there is no additional capacity at that location, in which case you won't be able to increase the bandwidth.
+If your change fails, it means either there isn't enough capacity left on the current port and you need to create a new ExpressRoute circuit with the higher bandwidth, or that there's no more capacity at that location, in which case you won't be able to increase the bandwidth.
-You will also have to follow up with your connectivity provider to ensure that they update the throttles within their networks to support the bandwidth increase. You cannot, however, reduce the bandwidth of your ExpressRoute circuit. You have to create a new ExpressRoute circuit with lower bandwidth and delete the old circuit.
+You'll also have to follow up with your connectivity provider to ensure that they update the throttles within their networks to support the bandwidth increase. You can't, however, reduce the bandwidth of your ExpressRoute circuit. You have to create a new ExpressRoute circuit with lower bandwidth and delete the old circuit.
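For a provider-based circuit, a hedged PowerShell sketch of a bandwidth increase (placeholder names and value) looks like the following; the new value must be higher than the current one, and capacity must be available on the port.

```PowerShell
# Increase the circuit bandwidth to 1 Gbps (1000 Mbps).
$circuit = Get-AzExpressRouteCircuit -Name "my-circuit" -ResourceGroupName "circuit-rg"
$circuit.ServiceProviderProperties.BandwidthInMbps = 1000
Set-AzExpressRouteCircuit -ExpressRouteCircuit $circuit
```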
### How do I change the bandwidth of an ExpressRoute circuit?
You can update the bandwidth of the ExpressRoute circuit using the Azure portal,
### I received a notification about maintenance on my ExpressRoute circuit. What is the technical impact of this maintenance?
-You should experience minimal to no impact during maintenance if you operate your circuit in [active-active mode](./designing-for-high-availability-with-expressroute.md#active-active-connections). We perform maintenance on the primary and secondary connections of your circuit separately. During maintenance you may see longer AS-path prepend over one of the connections. The reason is to gracefully shift traffic from one connection to another. You must not ignore longer AS path, as it can cause asymmetric routing, resulting in a service outage. It is advisable to configure [BFD](expressroute-bfd.md) for faster BGP failover between Primary and Secondary connection in the event a BGP failure is detected during maintenance. Scheduled maintenance will usually be performed outside of business hours in the time zone of the peering location, and you cannot select a maintenance time.
+You should experience minimal to no impact during maintenance if you operate your circuit in [active-active mode](./designing-for-high-availability-with-expressroute.md#active-active-connections). We perform maintenance on the primary and secondary connections of your circuit separately. During maintenance, you may see a longer AS-path prepend over one of the connections; this is done to gracefully shift traffic from one connection to the other. Your routers must not ignore the longer AS path, because ignoring it can cause asymmetric routing and result in a service outage. It's advisable to configure [BFD](expressroute-bfd.md) for faster BGP failover between the primary and secondary connections if a BGP failure is detected during maintenance. Scheduled maintenance is usually performed outside of business hours in the time zone of the peering location, and you can't select a maintenance time.
### I received a notification about a software upgrade or maintenance on my ExpressRoute gateway. What is the technical impact of this maintenance?
-You should experience minimal to no impact during a software upgrade or maintenance on your gateway. The ExpressRoute gateway is composed of multiple instances and during upgrades, instances are taken offline one at a time. While this may cause your gateway to temporarily support lower network throughput to the virtual network, the gateway itself will not experience any downtime.
+You should experience minimal to no impact during a software upgrade or maintenance on your gateway. The ExpressRoute gateway is composed of multiple instances and during upgrades, instances are taken offline one at a time. While this may cause your gateway to temporarily support lower network throughput to the virtual network, the gateway itself won't experience any downtime.
## ExpressRoute SKU scope access
ExpressRoute premium features can be enabled when the feature is enabled, and ca
### How do I disable ExpressRoute premium?
-You can disable ExpressRoute premium by calling the REST API or PowerShell cmdlet. You must make sure that you have scaled your connectivity needs to meet the default limits before you disable ExpressRoute premium. If your utilization scales beyond the default limits, the request to disable ExpressRoute premium fails.
+You can disable ExpressRoute premium by calling the REST API or PowerShell cmdlet. You must make sure that you've scaled your connectivity needs to meet the default limits before you disable ExpressRoute premium. If your utilization scales beyond the default limits, the request to disable ExpressRoute premium fails.
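A minimal PowerShell sketch of the downgrade (placeholder names), assuming your usage already fits within the Standard limits:

```PowerShell
# Downgrade the circuit SKU from Premium to Standard.
$circuit = Get-AzExpressRouteCircuit -Name "my-circuit" -ResourceGroupName "circuit-rg"
$circuit.Sku.Tier = "Standard"
$circuit.Sku.Name = "Standard_" + $circuit.Sku.Family   # for example, Standard_MeteredData
Set-AzExpressRouteCircuit -ExpressRouteCircuit $circuit
```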
### Can I pick and choose the features I want from the premium feature set?
ExpressRoute Local may not be available for an ExpressRoute location. For peering
While you need to pay egress data transfer for your Standard or Premium ExpressRoute circuit, you don't pay egress data transfer separately for your ExpressRoute Local circuit. In other words, the price of ExpressRoute Local includes data transfer fees. ExpressRoute Local is a more economical solution if you have a massive amount of data to transfer and you can bring your data over a private connection to an ExpressRoute peering location near your desired Azure regions.
-### What features are available and what are not on ExpressRoute Local?
+### What features are and aren't available on ExpressRoute Local?
Compared to a Standard ExpressRoute circuit, a Local circuit has the same set of features except: * Scope of access to Azure regions as described above
-* ExpressRoute Global Reach is not available on Local
+* ExpressRoute Global Reach isn't available on Local
-ExpressRoute Local also has the same limits on resources (e.g. the number of VNets per circuit) as Standard.
+ExpressRoute Local also has the same limits on resources (for example, the number of VNets per circuit) as Standard.
### Where is ExpressRoute Local available and which Azure regions is each peering location mapped to?
-ExpressRoute Local is available at the peering locations where one or two Azure regions are close-by. It is not available at a peering location where there is no Azure region in that state or province or country/region. Please see the exact mappings on [the Locations page](expressroute-locations-providers.md).
+ExpressRoute Local is available at the peering locations where one or two Azure regions are close-by. It isn't available at a peering location where there's no Azure region in that state or province or country/region. See the exact mappings on [the Locations page](expressroute-locations-providers.md).
## ExpressRoute for Microsoft 365
See [ExpressRoute partners and locations](expressroute-locations.md) for informa
### Can I access Microsoft 365 over the Internet, even if ExpressRoute was configured for my organization?
-Yes. Microsoft 365 service endpoints are reachable through the Internet, even though ExpressRoute has been configured for your network. Please check with your organization's networking team if the network at your location is configured to connect to Microsoft 365 services through ExpressRoute.
+Yes. Microsoft 365 service endpoints are reachable through the Internet, even though ExpressRoute has been configured for your network. Check with your organization's networking team whether the network at your location is configured to connect to Microsoft 365 services through ExpressRoute.
### How can I plan for high availability for Microsoft 365 network traffic on Azure ExpressRoute? See the recommendation for [High availability and failover with Azure ExpressRoute](/microsoft-365/enterprise/network-planning-with-expressroute)
Yes. Office 365 GCC service endpoints are reachable through the Azure US Governm
## Route filters for Microsoft peering
-### I am turning on Microsoft peering for the first time, what routes will I see?
+### I'm turning on Microsoft peering for the first time, what routes will I see?
-You will not see any routes. You have to attach a route filter to your circuit to start prefix advertisements. For instructions, see [Configure route filters for Microsoft peering](how-to-routefilter-powershell.md).
+You won't see any routes. You have to attach a route filter to your circuit to start prefix advertisements. For instructions, see [Configure route filters for Microsoft peering](how-to-routefilter-powershell.md).
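As a hedged sketch (names and BGP community values are illustrative; see the linked article for the supported values), attaching a route filter to Microsoft peering looks like this:

```PowerShell
# Create a route filter with a rule allowing the BGP communities you want to receive.
$rule = New-AzRouteFilterRuleConfig -Name "allow-selected-services" -Access Allow `
    -RouteFilterType Community -CommunityList "12076:5010","12076:5030"

$filter = New-AzRouteFilter -Name "my-route-filter" -ResourceGroupName "my-rg" `
    -Location "westus" -Rule $rule

# Attach the filter to the circuit's Microsoft peering.
$circuit = Get-AzExpressRouteCircuit -Name "my-circuit" -ResourceGroupName "my-rg"
$peering = $circuit.Peerings | Where-Object { $_.Name -eq "MicrosoftPeering" }
$peering.RouteFilter = $filter
Set-AzExpressRouteCircuit -ExpressRouteCircuit $circuit
```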
-### I turned on Microsoft peering and now I am trying to select Exchange Online, but it is giving me an error that I am not authorized to do it.
+### I turned on Microsoft peering and now I'm trying to select Exchange Online, but it's giving me an error that I'm not authorized to do it.
-When using route filters, any customer can turn on Microsoft peering. However, for consuming Microsoft 365 services, you still need to get authorized by Microsoft 365.
+When you're using route filters, anyone can turn on Microsoft peering. However, for consuming Microsoft 365 services, you still need to get authorized by Microsoft 365.
### I enabled Microsoft peering prior to August 1, 2017, how can I take advantage of route filters? Your existing circuit will continue advertising the prefixes for Microsoft 365. If you want to add Azure public prefixes advertisements over the same Microsoft peering, you can create a route filter, select the services you need advertised (including the Microsoft 365 service(s) you need), and attach the filter to your Microsoft peering. For instructions, see [Configure route filters for Microsoft peering](how-to-routefilter-powershell.md).
-### I have Microsoft peering at one location, now I am trying to enable it at another location and I am not seeing any prefixes.
+### I have Microsoft peering at one location, now I'm trying to enable it at another location and I'm not seeing any prefixes.
-* Microsoft peering of ExpressRoute circuits that were configured prior to August 1, 2017 will have all service prefixes advertised through Microsoft peering, even if route filters are not defined.
+* Microsoft peering of ExpressRoute circuits that were configured prior to August 1, 2017 will have all service prefixes advertised through Microsoft peering, even if route filters aren't defined.
-* Microsoft peering of ExpressRoute circuits that are configured on or after August 1, 2017 will not have any prefixes advertised until a route filter is attached to the circuit. You will see no prefixes by default.
+* Microsoft peering of ExpressRoute circuits that are configured on or after August 1, 2017 won't have any prefixes advertised until a route filter is attached to the circuit. You'll see no prefixes by default.
### If I have multiple Virtual Networks (Vnets) connected to the same ExpressRoute circuit, can I use ExpressRoute for Vnet-to-Vnet connectivity?
-Vnet-to-Vnet connectivity over ExpressRoute is not recommended. To acheive this, configure [Virtual Network Peering](../virtual-network/virtual-network-peering-overview.md?msclkid=b64a7b6ac19e11eca60d5e3e5d0764f5).
+Vnet-to-Vnet connectivity over ExpressRoute isn't recommended. Instead, configure [Virtual Network Peering](../virtual-network/virtual-network-peering-overview.md?msclkid=b64a7b6ac19e11eca60d5e3e5d0764f5).
## <a name="expressRouteDirect"></a>ExpressRoute Direct
Vnet-to-Vnet connectivity over ExpressRoute is not recommended. To acheive this,
[!INCLUDE [Global Reach](../../includes/expressroute-global-reach-faq-include.md)]
+## ExpressRoute Traffic Collector
+
+### Where does ExpressRoute Traffic Collector store your data?
+
+All flow logs are ingested into your Log Analytics workspace by the ExpressRoute Traffic Collector. ExpressRoute Traffic Collector itself doesn't store any of your data.
+
+### What is the sampling rate used by ExpressRoute Traffic Collector?
+
+ExpressRoute Traffic Collector uses a sampling rate of 1:4096, which means 1 out of every 4,096 packets is captured.
+
+### How many flows can ExpressRoute Traffic Collector handle?
+
+ExpressRoute Traffic Collector can handle up to 30,000 flows a minute. If this limit is reached, excess flows are dropped. For more information, see the [count of flow records processed metric](expressroute-monitoring-metrics-alerts.md#count-of-flow-records-processedsplit-by-instances-or-expressroute-circuit) for a circuit.
+
+### Does ExpressRoute Traffic Collector support Virtual WAN?
+
+Yes, you can use ExpressRoute Traffic Collector with ExpressRoute Direct circuits used in a Virtual WAN deployment. However, deploying ExpressRoute Traffic Collector within a Virtual WAN hub isn't supported. You can deploy ExpressRoute Traffic Collector in a spoke virtual network and ingest flow logs into a Log Analytics workspace.
+
+### What is the impact of maintenance on flow logging?
+
+You should experience minimal to no impact during maintenance on your ExpressRoute Traffic Collector. ExpressRoute Traffic Collector has multiple instances on different update domains; during an upgrade, instances are taken offline one at a time. While you may experience lower ingestion of sampled flows into the Log Analytics workspace, the ExpressRoute Traffic Collector itself won't experience any downtime. Loss of sampled flows during maintenance shouldn't impact network traffic analysis when sampled data is aggregated over a longer time frame.
+
+### Does ExpressRoute Traffic Collector support availability zones?
+
+By default, ExpressRoute Traffic Collector deployments have availability zones enabled in regions that support them. For information about region availability, see [Availability zones supported regions](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
++
+### How should I incorporate ExpressRoute Traffic Collector in my disaster recovery plan?
+
+You can associate a single ExpressRoute Direct circuit with multiple ExpressRoute Traffic Collectors deployed in different Azure regions within a given geo-political region. It's recommended that you associate your ExpressRoute Direct circuit with multiple ExpressRoute Traffic Collectors as part of your disaster recovery and high availability plan.
+ ## Privacy ### Does the ExpressRoute service store customer data?
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| [RxLightLevel](#rxlight) | Physical Connectivity | Count | Average | Rx Light level in dBm | Link, Lane | No | | [TxLightLevel](#txlight) | Physical Connectivity | Count | Average | Tx light level in dBm | Link, Lane | No |
+### ExpressRoute Traffic Collector
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
+| | | | | | | |
+| CPU utilization | Performance | Count | Average | CPU Utilization of the ExpressRoute Traffic Collector | roleInstance | Yes |
+| Memory Utilization | Performance | CountPerSecond | Average | Memory Utilization of the ExpressRoute Traffic Collector | roleInstance | Yes |
+| Count of flow records processed | Availability | Count | Maximum | Count of number of flow records processed or ingested | roleInstance, ExpressRoute Circuit | Yes |
+ ## Circuits metrics ### <a name = "circuitbandwidth"></a>Bits In and Out - Metrics across all peerings
This metric shows the number of routes the ExpressRoute gateway is advertising t
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-advertised-to-peer.png" alt-text="Screenshot of count of routes advertised to peer.":::
-### <a name = "learnedroutes"></a>Count of Routes Learned from Peer - Split by instance
+### <a name = "learnedroutes"></a>Count of routes learned from peer - Split by instance
Aggregation type: *Max*
This metric shows the number of routes the ExpressRoute gateway is learning from
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-learned-from-peer.png" alt-text="Screenshot of count of routes learned from peer.":::
-### <a name = "frequency"></a>Frequency of Routes change - Split by instance
+### <a name = "frequency"></a>Frequency of routes change - Split by instance
Aggregation type: *Sum*
This metric shows the frequency of routes being learned from or advertised to re
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/frequency-of-routes-changed.png" alt-text="Screenshot of frequency of routes changed metric.":::
-### <a name = "vm"></a>Number of VMs in the Virtual Network
+### <a name = "vm"></a>Number of VMs in the virtual network
Aggregation type: *Max*
This metric shows the bits per second for ingress and egress to Azure through th
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/erconnections.jpg" alt-text="Screenshot of gateway connection bandwidth usage metric.":::
+## ExpressRoute Traffic Collector metrics
+
+### CPU Utilization - Split by instance
+
+Aggregation type: *Avg* (of percentage of total utilized CPU)
+
+*Granularity: 5 min*
+
+You can view the CPU utilization of each ExpressRoute Traffic Collector instance. The CPU utilization may spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate your ExpressRoute Traffic Collector is reaching a performance bottleneck.
+
+**Guidance:** Set an alert for when avg CPU utilization exceeds a certain threshold.
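+
+As a sketch only: the target resource ID, metric name, and action group below are placeholders to adapt to your deployment; confirm the exact metric name from the metrics table earlier in this article or in the portal.
+
+```PowerShell
+# Alert when average CPU utilization of the Traffic Collector exceeds 80% over 5 minutes.
+$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "CPU utilization" `
+    -TimeAggregation Average -Operator GreaterThan -Threshold 80
+
+Add-AzMetricAlertRuleV2 -Name "traffic-collector-high-cpu" -ResourceGroupName "my-rg" `
+    -TargetResourceId "<traffic-collector-resource-id>" `
+    -WindowSize (New-TimeSpan -Minutes 5) -Frequency (New-TimeSpan -Minutes 5) `
+    -Severity 3 -Condition $criteria `
+    -ActionGroupId "<action-group-resource-id>"
+```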
++
+### Memory Utilization - Split by instance
+
+Aggregation type: *Avg* (of percentage of total utilized Memory)
+
+*Granularity: 5 min*
+
+You can view the memory utilization of each ExpressRoute Traffic Collector instance. Memory utilization may spike briefly during routine host maintenance, but prolonged high memory utilization could indicate your ExpressRoute Traffic Collector is reaching a performance bottleneck.
+
+**Guidance:** Set an alert for when avg memory utilization exceeds a certain threshold.
++
+### Count of flow records processed - Split by instances or ExpressRoute circuit
+
+Aggregation type: *Count*
+
+*Granularity: 5 min*
+
+You can view the count of flow records processed by ExpressRoute Traffic Collector, aggregated across ExpressRoute circuits. You can split the metric by ExpressRoute Traffic Collector instance or by ExpressRoute circuit when multiple circuits are associated with the ExpressRoute Traffic Collector. Monitoring this metric helps you understand whether you need to deploy more ExpressRoute Traffic Collector instances or migrate an ExpressRoute circuit association from one ExpressRoute Traffic Collector deployment to another.
+
+**Guidance:** Splitting by circuit is recommended when multiple ExpressRoute circuits are associated with an ExpressRoute Traffic Collector deployment. This helps determine the flow count of each ExpressRoute circuit and how much of the ExpressRoute Traffic Collector each circuit is using.
++ ## Alerts for ExpressRoute gateway connections 1. To set up alerts, go to **Azure Monitor**, then select **Alerts**.
expressroute How To Configure Traffic Collector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-configure-traffic-collector.md
+
+ Title: 'Configure ExpressRoute Traffic Collector for ExpressRoute Direct using the Azure portal (Preview)'
+description: Learn how to create an ExpressRoute Traffic Collector resource to import logs into a Log Analytics workspace.
++++ Last updated : 07/15/2022+++
+# Configure ExpressRoute Traffic Collector for ExpressRoute Direct using the Azure portal (Preview)
+
+This article helps you deploy an ExpressRoute Traffic Collector using the Azure portal. You'll learn how to add and remove an ExpressRoute Traffic Collector, and how to associate it with an ExpressRoute Direct circuit and a Log Analytics workspace. Once the ExpressRoute Traffic Collector is deployed, sampled flow logs get imported into the Log Analytics workspace. For more information, see [About ExpressRoute Traffic Collector](traffic-collector.md).
+
+> [!IMPORTANT]
+> ExpressRoute Traffic Collector is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Limitations
+
+- ExpressRoute Traffic Collector supports a maximum ExpressRoute Direct circuit size of 100 Gbps.
+- You can associate up to 20 ExpressRoute Direct circuits with ExpressRoute Traffic Collector as long as the total circuit bandwidth doesn't exceed 100 Gbps.
+
+## Prerequisites
+
+- ExpressRoute Direct circuit with Private or Microsoft peering configured.
+- A Log Analytics workspace (Create new or use existing).
+
+> [!NOTE]
+> - The ExpressRoute Direct circuit, ExpressRoute Traffic Collector and the Log Analytics workspace must be in the same geo-political region. Cross geo-political resource association is not supported.
+> - The ExpressRoute Direct circuit and ExpressRoute Traffic Collector must be deployed in the same subscription. Cross subscription deployment is currently not available.
+> - The Log Analytics workspace and ExpressRoute Traffic Collector can be deployed in different subscriptions.
+> - When ExpressRoute Traffic Collector gets deployed in an Azure region that supports availability zones, it will have availability zones enabled by default.
+
+## Permissions
+
+- Minimum contributor access is required to deploy ExpressRoute Traffic Collector.
+- Minimum contributor access is required to associate ExpressRoute Direct circuit with ExpressRoute Traffic Collector.
+- Monitor contributor role is required to associate Log Analytics workspace with ExpressRoute Traffic Collector.
+
+For more information, see [Identity and access management](../active-directory/fundamentals/active-directory-ops-guide-iam.md).
+
+## Deploy ExpressRoute Traffic Collector
+
+1. Sign in to the [Azure portal](https://portal.azure.com/)
+
+1. In the portal, go to the list of ExpressRoute circuits and select **ExpressRoute Traffic Collectors**. Then select **+ Create new**.
+
+ :::image type="content" source="./media/how-to-configure-traffic-collector/circuit-list.png" alt-text="Screenshot of the create new ExpressRoute Traffic Collector button from the ExpressRoute circuit list page.":::
+
+1. On the **Create an ExpressRoute Traffic Collector** page, enter or select the following information then select **Next**.
+
+ :::image type="content" source="./media/how-to-configure-traffic-collector/basics.png" alt-text="Screenshot of the basics page for create an ExpressRoute Traffic Collector.":::
+
+ | Setting | Description |
+ | | |
+ | Subscription | Select the subscription to create the ExpressRoute Traffic Collector resource. This resource needs to be in the same subscription as the ExpressRoute Direct circuit. |
+ | Resource group | Select the resource group to deploy this resource into. |
+ | Name | Enter a name to identify this ExpressRoute Traffic Collector resource. |
+ | Region | Select a region to deploy this resource into. This resource needs to be in the same geo-political region as the Log Analytics workspace and the ExpressRoute Direct circuits. |
+ | Collector Policy | This value is automatically filled in as **Default**. |
+
+1. On the **Select ExpressRoute circuit** tab, select **+ Add ExpressRoute Circuits**. Select the checkbox next to the circuit you would like to add to the Traffic Collector and then select **Add**. Once you're satisfied with the circuits added, select **Next**.
+
+ :::image type="content" source="./media/how-to-configure-traffic-collector/select-circuits.png" alt-text="Screenshot of the select ExpressRoute circuits tab and add circuits page.":::
+
+1. On the **Forward Logs** tab, select the checkbox for **Send to Log Analytics workspace**. You can create a new Log Analytics workspace or choose an existing one. The workspace can be in a different Azure subscription but has to be in the same geo-political region. Select **Next** once a workspace has been chosen.
+
+ :::image type="content" source="./media/how-to-configure-traffic-collector/forward-logs.png" alt-text="Screenshot of the forward logs tab to Logs Analytics workspace.":::
+
+1. On the **Tags** tab, you can add optional tags for tracking purposes. Select **Next** to review your configuration.
+
+1. Select **Create** once validation has passed to deploy your ExpressRoute Traffic Collector.
+
+ :::image type="content" source="./media/how-to-configure-traffic-collector/validation.png" alt-text="Screenshot of the create validation page.":::
+
+1. Once deployed, you should start seeing sampled flow logs within the configured Log Analytics workspace.
+
+ :::image type="content" source="./media/how-to-configure-traffic-collector/log-analytics.png" alt-text="Screenshot of logs in Log Analytics workspace." lightbox="./media/how-to-configure-traffic-collector/log-analytics.png":::
+
+## Clean up resources
+
+To delete the ExpressRoute Traffic Collector resource, you first need to remove all ExpressRoute circuit associations.
+
+> [!IMPORTANT]
+> If you delete the ExpressRoute Traffic Collector resource before removing all circuit associations, you'll need to wait about 40 minutes for the deletion to time out before you can try again.
+>
+
+Once all circuits have been removed from the ExpressRoute Traffic Collector, select **Delete** from the overview page to remove the resource from your subscription.
++
+## Next steps
+
+- [ExpressRoute Traffic Collector Metrics](expressroute-monitoring-metrics-alerts.md#expressroute-traffic-collector-metrics)
expressroute Traffic Collector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/traffic-collector.md
+
+ Title: Enable flow logging using Azure ExpressRoute Traffic Collector (Preview)
+description: Learn about ExpressRoute Traffic Collector and the different use cases where this feature will be helpful.
++++ Last updated : 08/02/2022++++
+# Enable flow logging using ExpressRoute Traffic Collector (Preview)
+
+ExpressRoute Traffic Collector enables sampling of network flows sent over your ExpressRoute Direct circuits. Flow logs get sent to a [Log Analytics workspace](../azure-monitor/logs/log-analytics-overview.md), where you can create your own log queries for further analysis or export the data to any visualization tool or SIEM (Security Information and Event Management) system of your choice. Flow logging can be enabled for both private peering and Microsoft peering with ExpressRoute Traffic Collector.
+
+> [!IMPORTANT]
+> ExpressRoute Traffic Collector is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Use cases
+
+Flow logs can help you derive various traffic insights. The most common use cases are:
+
+### Network monitoring
+
+- Monitor Azure private peering and Microsoft peering traffic
+- Near real-time visibility into network throughput and performance
+- Perform network diagnosis
+- Capacity forecasting
+
+### Monitor network usage and cost optimization
+
+- Analyze traffic trends by filtering sampled flows by IP, port, or application
+- Identify top talkers for a source IP, destination IP, or application
+- Optimize network traffic expenses by analyzing traffic trends
+
+### Network forensics analysis
+
+- Identify compromised IPs by analyzing all the associated network flows
+- Export flow logs to a SIEM (Security Information and Event Management) tool to monitor, correlate events, and generate security alerts
+
+## Flow log collection and sampling
+
+ExpressRoute Traffic Collector enables flow collection for Azure private peering and Microsoft peering. Flow logs are collected every minute. All packets collected for a given flow get aggregated and imported into a Log Analytics workspace for further analysis. During flow collection, not every packet is captured into its own flow record. ExpressRoute Traffic Collector uses a sampling rate of 1:4096, meaning 1 out of every 4,096 packets gets captured. Therefore, at this sampling rate, short flows (in total bytes) may not get collected. This sampling size doesn't affect network traffic analysis when sampled data is aggregated over a longer period of time. Flow collection time and sampling rate are fixed and can't be changed.
+
+## Flow log schema
+
+| Column | Type | Description |
+| - | -- | |
+| ATCRegion | string | ExpressRoute Traffic Collector (ATC) deployment region. |
+| ATCResourceId | string | Azure resource ID of ExpressRoute Traffic Collector (ATC). |
+| BgpNextHop | string | Border Gateway Protocol (BGP) next hop as defined in the routing table. |
+| DestinationIp | string | Destination IP address. |
+| DestinationPort | int | TCP destination port. |
+| Dot1qCustomerVlanId | int | Dot1q Customer VlanId. |
+| Dot1qVlanId | int | Dot1q VlanId. |
+| DstAsn | int | Destination Autonomous System Number (ASN). |
+| DstMask | int | Mask of destination subnet. |
+| DstSubnet | string | Destination subnet of destination IP. |
+| ExRCircuitDirectPortId | string | Azure resource ID of Express Route Circuit's direct port. |
+| ExRCircuitId | string | Azure resource ID of Express Route Circuit. |
+| ExRCircuitServiceKey | string | Service key of Express Route Circuit. |
+| FlowRecordTime | datetime | Timestamp (UTC) when Express Route Circuit emitted this flow record. |
+| Flowsequence | long | Flow sequence of this flow. |
+| IcmpType | int | Protocol type as specified in IP header. |
+| IpClassOfService | int | IP Class of service as specified in IP header. |
+| IpProtocolIdentifier | int | Protocol type as specified in IP header. |
+| IpVerCode | int | IP version as defined in the IP header. |
+| MaxTtl | int | Maximum time to live (TTL) as defined in the IP header. |
+| MinTtl | int | Minimum time to live (TTL) as defined in the IP header. |
+| NextHop | string | Next hop as per forwarding table. |
+| NumberOfBytes | long | Total number of bytes of packets captured in this flow. |
+| NumberOfPackets | long | Total number of packets captured in this flow. |
+| OperationName | string | The specific ExpressRoute Traffic Collector operation that emitted this flow record. |
+| PeeringType | string | Express Route Circuit peering type. |
+| Protocol | int | Protocol type as specified in IP header. |
+| \_ResourceId | string | A unique identifier for the resource that the record is associated with |
+| SchemaVersion | string | Flow record schema version. |
+| SourceIp | string | Source IP address. |
+| SourcePort | int | TCP source port. |
+| SourceSystem | string | |
+| SrcAsn | int | Source Autonomous System Number (ASN). |
+| SrcMask | int | Mask of source subnet. |
+| SrcSubnet | string | Source subnet of source IP. |
+| \_SubscriptionId | string | A unique identifier for the subscription that the record is associated with |
+| TcpFlag | int | TCP flag as defined in the TCP header. |
+| TenantId | string | |
+| TimeGenerated | datetime | Timestamp (UTC) when the ExpressRoute Traffic Collector emitted this flow record. |
+| Type | string | The name of the table |
+
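+A hedged PowerShell sketch for checking that records are arriving: the table name below is illustrative (confirm the actual table in your workspace), while the columns come from the schema above.
+
+```PowerShell
+# Query the Log Analytics workspace for the top talkers over the last hour.
+$workspaceId = "<log-analytics-workspace-guid>"
+$query = @"
+ATCExpressRouteCircuitIpv4FlowRecords
+| where TimeGenerated > ago(1h)
+| summarize TotalBytes = sum(NumberOfBytes) by SourceIp, DestinationIp
+| top 10 by TotalBytes desc
+"@
+
+(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
+```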
+## Region availability
+
+ExpressRoute Traffic Collector is supported in the following regions:
+
+- Central US
+- East US
+- East US 2
+- North Central US
+- South Central US
+- West Central US
+- West US
+- West US 2
+- West US 3
+
+## Next steps
+
+- Learn how to [set up ExpressRoute Traffic Collector](how-to-configure-traffic-collector.md).
+- [ExpressRoute Traffic Collector FAQ](../expressroute/expressroute-faqs.md#expressroute-traffic-collector).
governance NZ_ISM_Restricted_V3_5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/NZ_ISM_Restricted_v3_5.md
Title: Regulatory Compliance details for NZ ISM Restricted v3.5 description: Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
This built-in initiative is deployed as part of the
||||| |[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) |
-### Ensure server parameter 'log_disconnections' is set to 'ON' for PostgreSQL Database Server
+### Ensure server parameter 'connection_throttling' is set to 'ON' for PostgreSQL Database Server
-**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.15
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.17
**Ownership**: Customer |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
This built-in initiative is deployed as part of the
|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) | |[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
-### Ensure that Register with Azure Active Directory is enabled on App Service
+### Ensure the web app has 'Client Certificates (Incoming client certificates)' set to 'On'
-**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.5
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.4
**Ownership**: Customer |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark (Azure Government) description: Details of the Azure Security Benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Rbi_Itf_Nbfc_V2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi_itf_nbfc_v2017.md
Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/12/2022 Last updated : 08/17/2022
hdinsight Hdinsight Hadoop Provision Linux Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md
description: Set up Hadoop, Kafka, Spark, or HBase clusters for HDInsight from a
Previously updated : 08/04/2022 Last updated : 08/17/2022 # Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more
For more information, see [Sizes for virtual machines](../virtual-machines/sizes
### Disk attachment
-HDInsight cluster comes with pre-defined disk space based on SKU. This space may not be sufficient in large job scenarios.
+An HDInsight cluster comes with pre-defined disk space based on the SKU. Running some large applications can lead to insufficient disk space (with the disk full error ```LinkId=221672#ERROR_NOT_ENOUGH_DISK_SPACE```) and job failures.
-This new feature allows you to add more disks in cluster, which will be used as node manager local directory. Add number of disks to worker nodes during HIVE and Spark cluster creation, while the selected disks will be part of node manager's local directories.
-
-On each of the **NodeManager** machines, **LocalResources** are ultimately localized in the target directories.
-
-By normal configuration only the default disk is added as the local disk in NodeManager. For large applications this disk space may not be enough which can result in job failure.
-
-If the cluster is expected to run large data application, you can choose to add extra disks to the **NodeManager**.
-
-You can add number of disks per VM and each disk will be of 1 TB size.
+More disks can be added to the cluster by using the new feature, **NodeManager**'s local directory. At the time of Hive and Spark cluster creation, you can select the number of disks to add to the worker nodes. The selected disks, each 1 TB in size, become part of **NodeManager**'s local directories.
1. From **Configuration + pricing** tab 1. Select **Enable managed disk** option
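If you prefer to script cluster creation instead of using the portal, the following is a minimal sketch with the Az.HDInsight module. All resource names are placeholders, and treating the existing `-DisksPerWorkerNode` parameter as the scripted equivalent of the **Enable managed disk** option for this NodeManager local-directory scenario is an assumption; the portal flow above is the documented path.

```PowerShell
# Minimal sketch, assuming the Az.HDInsight module and placeholder resource names.
# -DisksPerWorkerNode requests managed data disks for the worker node role at
# creation time; using it for the NodeManager local-directory feature on
# Spark/Hive clusters is an assumption.
$httpCred = Get-Credential -Message "Cluster login (HTTP) credential"
$sshCred  = Get-Credential -Message "SSH credential"

New-AzHDInsightCluster `
    -ResourceGroupName "<resource-group>" `
    -ClusterName "<cluster-name>" `
    -Location "<region>" `
    -ClusterType Spark `
    -ClusterSizeInNodes 4 `
    -DisksPerWorkerNode 2 `
    -HttpCredential $httpCred `
    -SshCredential $sshCred `
    -StorageAccountResourceId "<storage-account-resource-id>" `
    -StorageContainer "<container-name>"
```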
hdinsight Selective Logging Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/selective-logging-analysis.md
Title: Use selective logging feature with script action in Azure HDInsight clusters
-description: Learn how to use Selective logging feature using script action to monitor logs.
+ Title: Use selective logging with a script action in Azure HDInsight clusters
+description: Learn how to use the selective logging feature with a script action to monitor logs.
Last updated 07/31/2022
-# Learn how to use selective logging feature with script action in Azure HDInsight
+# Use selective logging with a script action in Azure HDInsight
-[Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) is an Azure Monitor service that monitors your cloud and on-premises environments. The monitoring is to maintain their availability and performance. It collects data generated by resources in your cloud, on-premises environments and from other monitoring tools. The data is used to provide analysis across multiple sources by enabling selective logging feature using script action in Azure portal in HDInsight.
+[Azure Monitor Logs](../azure-monitor/logs/log-query-overview.md) is an Azure Monitor service that monitors your cloud and on-premises environments. The monitoring helps maintain their availability and performance.
+
+Azure Monitor Logs collects data generated by resources in your cloud, resources in on-premises environments, and other monitoring tools. It uses the data to provide analysis across multiple sources. To get the analysis, you enable the selective logging feature by using a script action for HDInsight in the Azure portal.
## About selective logging
-Selective logging is a part of Azure overall monitoring system. You can connect your cluster to a log analytics workspace. Once enabled, you can see the logs and metrics like HDInsight Security Logs, Yarn Resource Manager, System Metrics etc. You can monitor workloads and see how they're affecting cluster stability.
-Selective logging allows you to enable/disable all the tables or enable selective tables in log analytics workspace. You can adjust the source type for each table, since in new version of Geneva monitoring one table has multiple sources.
+Selective logging is a part of the overall monitoring system in Azure. After you connect your cluster to a Log Analytics workspace and enable selective logging, you can see logs and metrics like HDInsight security logs, Yarn Resource Manager, and system metrics. You can monitor workloads and see how they're affecting cluster stability.
+
+Selective logging allows you to enable or disable all the tables, or enable selected tables, in the Log Analytics workspace. You can adjust the source type for each table.
> [!NOTE]
-> If log analytics is reinstalled in a cluster, then, you'll have to disable all the tables/log types again, since the reinstallation resets all the configuration files to its original state.
+> If Log Analytics is reinstalled in a cluster, you'll have to disable all the tables and log types again. Reinstallation resets all the configuration files to their original state.
-## Using script action
+## Considerations for script actions
-* The Geneva monitoring system uses mdsd(MDS daemon) which is a monitoring agent and fluentd for collecting logs using unified logging layer.
-* Selective Logging uses script action to disable/enable tables and their log types. Since it doesn't open any new ports or change any existing security setting hence, there are no security changes.
-* Script Action runs in parallel on all specified nodes and changes the configuration files for disabling/enabling tables and their log types.
+* The monitoring system uses the Metadata Server Daemon (a monitoring agent) and Fluentd for collecting logs by using a unified logging layer.
+* Selective logging uses a script action to disable or enable tables and their log types. Because selective logging doesn't open any new ports or change any existing security settings, there are no security changes.
+* The script action runs in parallel on all specified nodes and changes the configuration files for disabling or enabling tables and their log types.
## Prerequisites
-* A Log Analytics workspace. You can think of this workspace as a unique Azure Monitor logs environment with its own data repository, data sources, and solutions. For the instructions, see [Create a Log Analytics workspace](../azure-monitor/vm/monitor-virtual-machine.md).
-* An Azure HDInsight cluster. Currently, you can use selective logging feature with the following HDInsight cluster types:
+* A Log Analytics workspace. You can think of this workspace as a unique Azure Monitor Logs environment with its own data repository, data sources, and solutions. For instructions, see [Create a Log Analytics workspace](../azure-monitor/vm/monitor-virtual-machine.md).
+* An Azure HDInsight cluster. Currently, you can use the selective logging feature with the following HDInsight cluster types:
* Hadoop * HBase * Interactive Query * Spark
-For the instructions on how to create an HDInsight cluster, see [Get started with Azure HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md).
+For instructions on how to create an HDInsight cluster, see [Get started with Azure HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md).
+
+## Enable or disable logs by using a script action for multiple tables and log types
-## Enable/disable logs using script action for multiple tables and log types
+1. Go to **Script actions** in your cluster and select **Submit now** to start the process of creating a script action.
-1. Go to script action in your cluster and create a new Script Action for disabling/enabling table and log type.
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-submit-script-action.png" alt-text="Screenshot that shows the button for starting the process of creating a script action.":::
- :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-submit-script-action.png" alt-text="Screenshot showing select submit script action.":::
+ The **Submit script action** pane appears.
- :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/submit-script-action-window.png" alt-text="Screenshot showing submit script action window.":::
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/submit-script-action-window.png" alt-text="Screenshot that shows the pane for submitting a script action.":::
-1. In the script type, select **custom**.
-1. Name the script. For example, **Disable two tables and two sources**.
-1. Bash Script URL must be the link of the [selectiveLoggingScript.sh](https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh).
-1. Select all the nodes of the cluster. For example, Head Node, Worker node, Zookeepr node.
-1. Define the parameters in the parameter box. For example:
+1. For the script type, select **Custom**.
+1. Name the script. For example: **Disable two tables and two sources**.
+1. The Bash script URI must be a link to [selectiveLoggingScript.sh](https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh).
+1. Select all the node types that apply for the cluster. The options are head node, worker node, and ZooKeeper node.
+1. Define the parameters. For example:
- Spark: `spark HDInsightSparkLogs:SparkExecutorLog --disable`
- - Interactivehive: `interactivehive HDInsightSparkLogs:SparkExecutorLog --enable`
+ - Interactive Query: `interactivehive HDInsightSparkLogs:SparkExecutorLog --enable`
- Hadoop: `hadoop HDInsightSparkLogs:SparkExecutorLog --disable` - HBase: `hbase HDInsightSparkLogs: HDInsightHBaseLogs --enable`
- For more details, see [Parameters](#parameters-syntax) section.
+ For more information, see the [Parameter syntax](#parameter-syntax) section.
-1. Select Create.
-1. After a few minutes, you'll see a green tick next to your script action history, which means script has successfully run.
+1. Select **Create**.
+1. After a few minutes, a green check mark appears next to your script action history. It means the script has successfully run.
- :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/enable-table-and-log-types.png" alt-text="Screenshot showing enable table and log types.":::
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/enable-table-and-log-types.png" alt-text="Screenshot that shows a successful run of a script to enable tables and log types.":::
-You will see the desired changes in the log analytics workspace.
+You'll see your changes in the Log Analytics workspace.
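If you'd rather submit the script action from the command line than from the portal, here's a minimal sketch with the Az.HDInsight module. The cluster name is a placeholder, and the parameter string reuses the Spark example from the steps above.

```PowerShell
# Minimal sketch, assuming the Az.HDInsight module and a placeholder cluster name.
# Connect-AzAccount   # sign in first if you haven't already
# Submits the selective logging script to all node types and persists it so that
# it also runs on nodes that are added to the cluster later.
Submit-AzHDInsightScriptAction `
    -ClusterName "<cluster-name>" `
    -Name "Disable two tables and two sources" `
    -Uri "https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh" `
    -NodeTypes HeadNode, WorkerNode, ZookeeperNode `
    -Parameters "spark HDInsightSparkLogs:SparkExecutorLog --disable" `
    -PersistOnSuccess
```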
## Troubleshooting
-### Scenario 1
+### No changes appear in the Log Analytics workspace
-If Script Action is submitted but there are no changes in the log analytics workspace.
+If you submit your script action but there are no changes in the Log Analytics workspace:
-1. Go to Ambari Home and check debug information.
+1. Under **Dashboards**, select **Ambari home** to check the debug information.
- :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-dashboard-ambari-home.png" alt-text="Screenshot showing select dashboard ambari home.":::
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-dashboard-ambari-home.png" alt-text="Screenshot that shows the location of the Ambari home dashboard.":::
-1. Select settings button.
+1. Select the **Settings** button.
- :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/ambari-dash-board.png" alt-text="Screenshot showing ambari dash board.":::
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/ambari-dash-board.png" alt-text="Screenshot that shows the Settings button.":::
-1. You will get your latest script run at the top of the list.
+1. Select your latest script run at the top of the list of background operations.
- :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/background-operations.png" alt-text="Screenshot showing background operations.":::
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/background-operations.png" alt-text="Screenshot that shows background operations.":::
1. Verify the script run status in all the nodes individually.
- :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/background-operations-all.png" alt-text="Screenshot showing background operations all.":::
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/background-operations-all.png" alt-text="Screenshot that shows the script run status for hosts.":::
-1. Check if the parameter syntax from the parameter syntax section is correct.
-1. Check if the log analytics workspace is connected to the cluster and log analytics monitoring is turned on.
-1. Check if the script that you run from script action was checked as persisted.
+1. Check that the parameter syntax from the parameter syntax section is correct.
+1. Check that the Log Analytics workspace is connected to the cluster and that Log Analytics monitoring is turned on.
+1. Check that you selected the **Persist this script action to rerun when new nodes are added to the cluster** checkbox for the script action that you ran.
- :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/script-action-persists.png" alt-text="Screenshot showing script action persists.":::
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/script-action-persists.png" alt-text="Screenshot that shows the checkbox for persisting a script action.":::
-1. It's possible, that a new node has been added to the cluster recently.
+1. See if a new node has been added to the cluster recently.
> [!NOTE]
- > For the script to run in the latest cluster, and the script must persist the script.
+ > For the script to run on newly added cluster nodes, the script action must be persisted.
-1. Make sure all the node types are selected while running the script action.
+1. Make sure that you selected all the node types that you wanted for the script action.
- :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-node-types.png" alt-text="Screenshot showing select node types.":::
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-node-types.png" alt-text="Screenshot that shows selected node types.":::
-### Scenario 2
+### The script action failed
-If the script action is showing a Failed status in the script action history
+If the script action shows a failure status in the script action history:
-1. Make sure the parameter syntax is correct while using the parameter syntax section.
-1. Check that the script link is correct.
-1. Correct link for the script: https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh
+1. Check that the parameter syntax from the parameter syntax section is correct.
+1. Check that the script link is correct. It should be: `https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh`.
## Table names ### Spark cluster
-Different log types(sources) inside **Spark** tables
+The following table names are for different log types (sources) inside Spark tables.
-| S.no | Table Name | Log Types | Description |
+| Source number | Table name | Log types | Description |
| | | | |
-| 1. | HDInsightAmbariCluster Alerts | No log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table.
-| 2. | HDInsightAmbariSystem Metrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
-| 3. | HDInsightHadoopAnd YarnLogs | Head Node: MRJobSummary, Resource Manager, TimelineServer Worker Node: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
-| 4. | HDInsightSecurityLogs | AmbariAuditLog, AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
-| 5. | HDInsightSparkLogs | Head Node: JupyterLog, LivyLog, SparkThriftDriverLog Worker Node: SparkExecutorLog, SparkDriverLog | This table contains all logs related to Spark and its related component: Livy and Jupyter. |
-| 6. | HDInsightHadoopAnd YarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 1. | HDInsightAmbariCluster Alerts | No log types | This table contains Ambari cluster alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table.
+| 2. | HDInsightAmbariSystem Metrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two head nodes. Each metric is now a column, and each metric is reported once per record. |
+| 3. | HDInsightHadoopAnd YarnLogs | **Head node**: MRJobSummary, Resource Manager, TimelineServer **Worker node**: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightSecurityLogs | AmbariAuditLog, AuthLog | This table contains records from the Ambari audit and authentication logs. |
+| 5. | HDInsightSparkLogs | **Head node**: JupyterLog, LivyLog, SparkThriftDriverLog **Worker node**: SparkExecutorLog, SparkDriverLog | This table contains all logs related to Spark and its related components: Livy and Jupyter. |
+| 6. | HDInsightHadoopAnd YarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics that we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
| 7. | HDInsightOozieLogs | Oozie | This table contains all logs generated from the Oozie framework. |
-### Interactive query cluster
+### Interactive Query cluster
-Different log types(sources) inside **interactive query** tables
+The following table names are for different log types (sources) inside Interactive Query tables.
-| S.no | Table Name | Log Types | Description |
+| Source number | Table name | Log types | Description |
| | | | |
-| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
-| 2. | HDInsightAmbariSystem Metrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
-| 3. | HDInsightHadoopAndYarnLogs | **Head Node** : MRJobSummary, Resource Manager, TimelineServer **WorkerNode:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
-| 4. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
-| 5. | HDInsightHiveAndLLAPLogs | Head Node: InteractiveHiveHSILog, InteractiveHiveMetastoreLog, ZeppelinLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
+| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari cluster alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystem Metrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two head nodes. Each metric is now a column, and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head node**: MRJobSummary, Resource Manager, TimelineServer **Worker node**: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics that we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 5. | HDInsightHiveAndLLAPLogs | **Head node**: InteractiveHiveHSILog, InteractiveHiveMetastoreLog, ZeppelinLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
| 6. | HDInsightHiveAndLLAPmetrics | No log types | This table contains JMX metrics from the Hive and LLAP frameworks. It contains all the same JMX metrics as the old Custom Logs tables. It contains one metric per record. | | 7. | HDInsightHiveTezAppStats | No log types |
-| 8. | HDInsightSecurityLogs | **Head Node:** AmbariAuditLog, AuthLog **Zookeeper Node, Worker Node:** AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+| 8. | HDInsightSecurityLogs | **Head node**: AmbariAuditLog, AuthLog **ZooKeeper node, worker node**: AuthLog | This table contains records from the Ambari audit and authentication logs. |
### HBase cluster
-Different log types(sources) inside **HBase** tables
+The following table names are for different log types (sources) inside HBase tables.
-| S.no | Table Name | Log Types | Description |
+| Source number | Table name | Log types | Description |
| | | | |
-| 1. | HDInsightAmbariClusterAlerts | No other log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table.
-| 2. | HDInsightAmbariSystem Metrics | No other log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
-| 3. | HDInsightHadoopAndYarnLogs | **Head Node** : MRJobSummary, Resource Manager, TimelineServer **WorkerNode:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
-| 4. | HDInsightSecurityLogs | **Head Node:** AmbariAuditLog, AuthLog **Worker Node:** AuthLog **ZooKeper Node:** AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
-| 5. | HDInsightHBaseLogs | **Head Node** : HDFSGarbageCollectorLog, HDFSNameNodeLog **WorkerNode:** PhoenixServerLog, HBaseRegionServerLog, HBaseRestServerLog **Zookeeper Node:** HBaseMasterLog | This table contains logs from HBase and its related components: Phoenix and HDFS. |
+| 1. | HDInsightAmbariClusterAlerts | No other log types | This table contains Ambari cluster alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table.
+| 2. | HDInsightAmbariSystem Metrics | No other log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two head nodes. Each metric is now a column, and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head node**: MRJobSummary, Resource Manager, TimelineServer **Worker node**: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightSecurityLogs | **Head node**: AmbariAuditLog, AuthLog **Worker node**: AuthLog **ZooKeeper node**: AuthLog | This table contains records from the Ambari audit and authentication logs. |
+| 5. | HDInsightHBaseLogs | **Head node**: HDFSGarbageCollectorLog, HDFSNameNodeLog **Worker node**: PhoenixServerLog, HBaseRegionServerLog, HBaseRestServerLog **ZooKeeper node**: HBaseMasterLog | This table contains logs from HBase and its related components: Phoenix and HDFS. |
| 6. | HDInsightHBaseMetrics | No log types | This table contains JMX metrics from HBase. It contains all the same JMX metrics from the tables listed in the Old Schema column. In contrast from the old tables, each row contains one metric. |
-| 7. | HDInsightHadoopAndYarn Metrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 7. | HDInsightHadoopAndYarn Metrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics that we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
### Hadoop cluster
-Different log types(sources) inside **Hadoop** tables
+The following table names are for different log types (sources) inside Hadoop tables.
-| S.no | Table Name | Log Types | Description |
+| Source number | Table name | Log types | Description |
| | | | |
-| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
-| 2. | HDInsightAmbariSystem Metrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
-| 3. | HDInsightHadoopAndYarnLogs | **Head Node:** MRJobSummary, Resource Manager, TimelineServer Worker Node: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
-| 4. | HDInsightHadoopAndYarnMetrics | No Log Types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
-| 5. | HDInsightHiveAndLLAPLogs | **Head Node:** HiveMetastoreLog, HiveServer2Log, WebHcatLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
+| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari cluster alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystem Metrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two head nodes. Each metric is now a column, and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head node**: MRJobSummary, Resource Manager, TimelineServer **Worker node**: NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics that we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 5. | HDInsightHiveAndLLAPLogs | **Head node**: HiveMetastoreLog, HiveServer2Log, WebHcatLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
| 6. | HDInsight Hive And LLAP Metrics | No log types | This table contains JMX metrics from the Hive and LLAP frameworks. It contains all the same JMX metrics as the old Custom Logs tables. It contains one metric per record. |
-| 7. | HDInsight Security Logs | Head Node: AmbariAuditLog, AuthLog Zookeeper Node: AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+| 7. | HDInsight Security Logs | **Head node**: AmbariAuditLog, AuthLog **ZooKeeper node**: AuthLog | This table contains records from the Ambari audit and authentication logs. |
-## Parameters syntax
+## Parameter syntax
-Parameters define the cluster type, table names, source names and the action.
+Parameters define the cluster type, table names, source names, and action.
-Parameter contains three parts:
+A parameter contains three parts:
- Cluster type-- Tables and Log types-- Action (The action can be either `--disable` or `--enable`.)
+- Tables and log types
+- Action (either `--disable` or `--enable`)
-* Multiple tables syntax
-Rule: The tables are separated with a (,) or comma.
+### Syntax for multiple tables
-For example,
+When you have multiple tables, they're separated with a comma. For example:
`spark HDInsightSecurityLogs, HDInsightAmbariSystemMetrics --disable` `hbase HDInsightSecurityLogs, HDInsightAmbariSystemMetrics --enable`
-> [!NOTE]
-> The tables are separated with a comma.
+### Syntax for multiple source types or log types
-* Multiple source type/log type
-Rule:The source types/log types are separated with a space.
-Rule:For disabling a source the user must write the table name in which the log types is then followed by a colon, then the real log type name.
-TableName : LogTypeName
+When you have multiple source types or log types, they're separated with a space.
-For example,
+To disable a source, write the table name that contains the log types, followed by a colon and then the real log type name:
-spark HDInsightSecurityLogs is a table, which has two log types AmbariAuditLog and AuthLog.
-For Disabling both the log types the correct syntax would be:
-spark HDInsightSecurityLogs: AmbariAuditLog AuthLog --disable
+`TableName : LogTypeName`
-> [!NOTE]
->* The source/log types are separated by a space.
->* Table and its source types are separated by a colon.
+For example, assume that `HDInsightSecurityLogs` is a Spark table that has two log types: `AmbariAuditLog` and `AuthLog`. To disable both log types, the correct syntax would be:
-* Multiple tables and source types
-If there are two tables and two source types, which we need to be disabled
+`spark HDInsightSecurityLogs: AmbariAuditLog AuthLog --disable`
-- Spark: InteractiveHiveMetastoreLog logtype in HDInsightHiveAndLLAPLogs table-- Hbase: InteractiveHiveHSILog logtype in HDInsightHiveAndLLAPLogs table-- Hadoop: HDInsightHiveAndLLAPMetrics table-- Hadoop: HDInsightHiveTezAppStats table
+### Syntax for multiple tables and source types
-Correct Parameter syntax for such cases would be
+Suppose you need to update two tables and two source types at the same time:
+
+- Spark: `InteractiveHiveMetastoreLog` log type in the `HDInsightHiveAndLLAPLogs` table
+- HBase: `InteractiveHiveHSILog` log type in the `HDInsightHiveAndLLAPLogs` table
+- Hadoop: `HDInsightHiveAndLLAPMetrics` table
+- Hadoop: `HDInsightHiveTezAppStats` table
+
+Separate the tables with a comma. Denote sources by using a colon after the table name in which they reside.
+
+The correct parameter syntax for these cases would be:
``` interactivehive HDInsightHiveAndLLAPLogs: InteractiveHiveMetastoreLog, HDInsightHiveAndLLAPMetrics, HDInsightHiveTezAppStats, HDInsightHiveAndLLAPLogs: InteractiveHiveHSILog --enable ```
-> [!NOTE]
->* Different tables are separated with a comma(,).
->* Sources are denoted with a colon(:) after the table name in which they reside.
- ## Next steps
-* [Query Azure Monitor logs to monitor HDInsight clusters](hdinsight-hadoop-oms-log-analytics-use-queries.md)
-* [How to monitor cluster availability with Apache Ambari and Azure Monitor logs](./hdinsight-cluster-availability.md)
+* [Query Azure Monitor Logs to monitor HDInsight clusters](hdinsight-hadoop-oms-log-analytics-use-queries.md)
+* [Monitor cluster availability with Apache Ambari and Azure Monitor Logs](./hdinsight-cluster-availability.md)
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
In this step, browse to your FHIR service in the Azure portal and select the **I
## Give permission in the storage account for FHIR service access
-1. Go to your ADLS Gen2 storage account in the Azure portal.
+1. Go to your ADLS Gen2 account in the Azure portal.
2. Select **Access control (IAM)**.
In this step, browse to your FHIR service in the Azure portal and select the **I
For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
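If you'd rather assign the role from PowerShell than from the portal, here's a minimal sketch with the Az module. The object ID and scope are placeholders, and showing Storage Blob Data Contributor assumes that's the role your export setup requires.

```PowerShell
# Minimal sketch, assuming the Az.Resources module and placeholder IDs.
# Grants the FHIR service's managed identity the Storage Blob Data Contributor
# role on the ADLS Gen2 account used for export (role choice is an assumption).
New-AzRoleAssignment `
    -ObjectId "<fhir-service-managed-identity-object-id>" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
```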
-Now you're ready to configure the FHIR service with the ADLS Gen2 account as the default storage account for export.
+Now you're ready to configure the FHIR service by setting the ADLS Gen2 account as the default storage account for export.
## Specify the storage account for FHIR service export
After running this command, in the **Firewall** section under **Resource instanc
:::image type="content" source="media/export-data/storage-networking-2.png" alt-text="Screenshot of Azure Storage Networking Settings with resource type and instance names." lightbox="media/export-data/storage-networking-2.png":::
-You're now ready to securely export FHIR data to the storage account. Note that the storage account is on selected networks and isn't publicly accessible. To securely access the files, you can enable private endpoints for the storage account.
+You're now ready to securely export FHIR data to the storage account. Note that the storage account is on selected networks and isn't publicly accessible. To securely access the files, you can enable [private endpoints](../../storage/common/storage-private-endpoints.md) for the storage account.
### Allowing specific IP addresses from other Azure regions to access the Azure storage account
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md
You will need to create a container for the de-identified export in your ADLS Ge
|Query parameter | Example |Optionality| Description| |||--||
-| `anonymizationConfig` |`anonymizationConfig.json`|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). This file should be kept inside a container named `anonymization` within the ADLS Gen2 account that is configured as the export location. |
-| `anonymizationConfigEtag`|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag using Azure Storage Explorer from the blob property.|
+| `anonymizationConfig` |`anonymizationConfig.json`|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). This file should be kept inside a container named `anonymization` within the configured ADLS Gen2 account. |
+| `anonymizationConfigEtag`|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag from the blob property using Azure Storage Explorer.|
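For illustration only, a de-identified export request that combines these parameters might look like the following. The container name, configuration file name, and Etag values are placeholders.

```
GET {{fhirurl}}/$export?_container={{containerName}}&anonymizationConfig={{configFileName}}&anonymizationConfigEtag={{configEtag}}
```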
> [!IMPORTANT] > Both the raw export and de-identified export operations write to the same Azure storage account specified in the export configuration for the FHIR service. If you have need for multiple de-identification configurations, it is recommended that you create a different container for each configuration and manage user access at the container level.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
Before attempting to use `$export`, make sure that your FHIR service is configur
## Calling the `$export` endpoint
-After setting up the FHIR service to connect with an ADLS Gen2 account, you can call the `$export` endpoint and the FHIR service will export data into a blob storage container inside the storage account. The example request below exports all resources into a container specified by name (`{{containerName}}`) within the ADLS Gen2 account. Note that the container in the ADLS Gen2 account must be created beforehand if you want to specify the `{{containerName}}` in the request.
+After setting up the FHIR service to connect with an ADLS Gen2 account, you can call the `$export` endpoint and the FHIR service will export data into a blob storage container inside the storage account. The example request below exports all resources into a container specified by name (`{{containerName}}`). Note that the container in the ADLS Gen2 account must be created beforehand if you want to specify the `{{containerName}}` in the request.
``` GET {{fhirurl}}/$export?_container={{containerName}}
For general information about the FHIR `$export` API spec, please see the [HL7 F
**Jobs stuck in a bad state**
-In some situations, there's a potential for a job to be stuck in a bad state while attempting to `$export` data from the FHIR service. This can occur especially if the ADLS Gen2 account permissions haven't been set up correctly. One way to check the status of your `$export` operation is to go to your storage account's **Storage browser** and see if any `.ndjson` files are present in the export container. If the files aren't present and there are no other `$export` jobs running, then there's a possibility the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later you can requeue the `$export` job and try again. Information about canceling an `$export` operation can be found in the [Bulk Data Delete Request](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#bulk-data-delete-request) documentation from HL7.
+In some situations, there's a potential for a job to be stuck in a bad state while the FHIR service is attempting to export data. This can occur especially if the ADLS Gen2 account permissions haven't been set up correctly. One way to check the status of your `$export` operation is to go to your storage account's **Storage browser** and see if any `.ndjson` files are present in the export container. If the files aren't present and there are no other `$export` jobs running, then there's a possibility the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later you can requeue the `$export` job and try again. Information about canceling an `$export` operation can be found in the [Bulk Data Delete Request](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#bulk-data-delete-request) documentation from HL7.
> [!NOTE] > In the FHIR service, the default time for an `$export` operation to idle in a bad state is 10 minutes before the service will stop the operation and move to a new job.
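As a sketch of the cancellation call mentioned above: the `$export` kick-off response returns a status URL in its `Content-Location` header, and sending a `DELETE` request to that URL cancels the job. The path segment and job ID shown here are illustrative.

```
DELETE {{fhirurl}}/_operations/export/{{exportJobId}}
```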
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
Previously updated : 06/27/2022 Last updated : 08/03/2022 # Azure Machine Learning Python SDK release notes
In this article, learn about Azure Machine Learning Python SDK releases. For th
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2022-08-01
+
+### Azure Machine Learning SDK for Python v1.44.0
+
+ + **azureml-automl-dnn-nlp**
 + + Weighted accuracy and Matthews correlation coefficient (MCC) will no longer be displayed among the calculated metrics for NLP multilabel classification.
+ + **azureml-automl-dnn-vision**
+ + Raise user error when invalid annotation format is provided
+ + **azureml-cli-common**
+ + Updated the v1 CLI description
+ + **azureml-contrib-automl-dnn-forecasting**
 + + Fixed the "Failed to calculate TCN metrics" issue for TCNForecaster when different time series in the validation dataset have different lengths.
+ + Added auto timeseries ID detection for DNN forecasting models like TCNForecaster.
+ + Fixed a bug with the Forecast TCN model where validation data could be corrupted in some circumstances when the user provided the validation set.
+ + **azureml-core**
+ + Allow setting a timeout_seconds parameter when downloading artifacts from a Run
+ + Warning message added - Azure ML CLI v1 is getting retired on 30 Sep 2025. Users are recommended to adopt CLI v2.
+ + Fix submission to non-AmlComputes throwing exceptions.
+ + Added docker context support for environments
+ + **azureml-interpret**
+ + Increase numpy version for AutoML packages
+ + **azureml-pipeline-core**
 + + Fix regenerate_outputs=True not taking effect when submitting a pipeline.
+ + **azureml-train-automl-runtime**
+ + Increase numpy version for AutoML packages
+ + Enable code generation for vision and nlp
+ + Original columns on which grains are created are added as part of predictions.csv
+ ## 2022-07-21 ### Announcing end of support for Python 3.6 in AzureML SDK v1 packages
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-automl-dnn-nlp** + Remove duplicate labels column from multi-label predictions + **azureml-contrib-automl-pipeline-steps**
- + Many Models now provides the capability to generate prediction output in csv format as well. - Many Models prediction will now include column names in the output file in case of **csv** file format.
+ + Many Models now provide the capability to generate prediction output in csv format as well. - Many Models prediction will now include column names in the output file in case of **csv** file format.
+ **azureml-core** + ADAL authentication is now deprecated and all authentication classes now use MSAL authentication. Please install azure-cli>=2.30.0 to utilize MSAL based authentication when using AzureCliAuthentication class. + Added fix to force environment registration when `Environment.build(workspace)`. The fix solves confusion of the latest environment built instead of the asked one when environment is cloned or inherited from another instance.
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ Now OutputDatasetConfig is supported as the input of the MM/HTS pipeline builder. The mappings are: 1) OutputTabularDatasetConfig -> treated as unpartitioned tabular dataset. 2) OutputFileDatasetConfig -> treated as filed dataset. + **azureml-train-automl-runtime** + Added data validation that requires the number of minority class samples in the dataset to be at least as much as the number of CV folds requested.
- + Automatic cross-validation parameter configuration is now available for automl forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and automl will provide those configurations base on your data. However, currently this feature is not supported when TCN is enabled.
 + + Automatic cross-validation parameter configuration is now available for AutoML forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and AutoML will provide those configurations based on your data. However, currently this feature is not supported when TCN is enabled.
+ Forecasting Parameters in Many Models and Hierarchical Time Series can now be passed via object rather than using individual parameters in dictionary.
- + Enabled forecasting model endpoints with quantiles support to be consumed in PowerBI.
- + Updated automl scipy dependency upper bound to 1.5.3 from 1.5.2
+ + Enabled forecasting model endpoints with quantiles support to be consumed in Power BI.
+ + Updated AutoML scipy dependency upper bound to 1.5.3 from 1.5.2
## 2022-04-25
This breaking change comes from the June release of `azureml-inference-server-ht
+ **azureml-core** + * Return logs for runs that went through our new runtime when calling any of the get logs function on the run object, including `run.get_details`, `run.get_all_logs`, etc. + Added experimental method Datastore.register_onpremises_hdfs to allow users to create datastores pointing to on-premises HDFS resources.
- + Updating the cli documentation in the help command
+ + Updating the CLI documentation in the help command
+ **azureml-interpret** + For azureml-interpret package, remove shap pin with packaging update. Remove numba and numpy pin after CE env update. + **azureml-mlflow**
This breaking change comes from the June release of `azureml-inference-server-ht
+ Adding min-label-classes check for both classification tasks (multi-class and multi-label). It will throw an error for the customer's run if the unique number of classes in the input training dataset is fewer than 2. It is meaningless to run classification on fewer than two classes. + **azureml-automl-runtime** + Converting decimal type y-test into float to allow for metrics computation to proceed without errors.
- + Automl training now supports numpy version 1.8.
+ + AutoML training now supports numpy version 1.8.
+ **azureml-contrib-automl-dnn-forecasting** + Fixed a bug in the TCNForecaster model where not all training data would be used when cross-validation settings were provided. + TCNForecaster wrapper's forecast method that was corrupting inference-time predictions. Also fixed an issue where the forecast method would not use the most recent context data in train-valid scenarios.
This breaking change comes from the June release of `azureml-inference-server-ht
+ Fix the issue that magic widget is disappeared. + **azureml-train-automl-runtime** + Updating AutoML dependencies to support Python 3.8. This change will break compatibility with models trained with SDK 1.37 or below due to newer Pandas interfaces being saved in the model.
- + Automl training now supports numpy version 1.19
- + Fix automl reset index logic for ensemble models in automl_setup_model_explanations API
- + In automl, use lightgbm surrogate model instead of linear surrogate model for sparse case after latest lightgbm version upgrade
+ + AutoML training now supports numpy version 1.19
+ + Fix AutoML reset index logic for ensemble models in automl_setup_model_explanations API
+ + In AutoML, use lightgbm surrogate model instead of linear surrogate model for sparse case after latest lightgbm version upgrade
+ All internal intermediate artifacts that are produced by AutoML are now stored transparently on the parent run (instead of being sent to the default workspace blob store). Users should be able to see the artifacts that AutoML generates under the 'outputs/` directory on the parent run.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
## 2021-03-31
-### Azure Machine Learning Studio Notebooks Experience (March Update)
+### Azure Machine Learning studio Notebooks Experience (March Update)
+ **New features** + Render CSV/TSV. Users will be able to render and TSV/CSV file in a grid format for easier data analysis. + SSO Authentication for Compute Instance. Users can now easily authenticate any new compute instances directly in the Notebook UI, making it easier to authenticate and use Azure SDKs directly in AzureML.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Fixed show_output=False to return control to the user when running using spark. ## 2021-02-28
-### Azure Machine Learning Studio Notebooks Experience (February Update)
+### Azure Machine Learning studio Notebooks Experience (February Update)
+ **New features** + [Native Terminal (GA)](./how-to-access-terminal.md). Users will now have access to an integrated terminal as well as Git operation via the integrated terminal. + Notebook Snippets (preview). Common Azure ML code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
### Azure Machine Learning SDK for Python v1.23.0 + **Bug fixes and improvements** + **azureml-core**
- + [Experimental feature] Add support to link synapse workspace into AML as an linked service
+ + [Experimental feature] Add support to link synapse workspace into AML as a linked service
+ [Experimental feature] Add support to attach synapse spark pool into AML as a compute + [Experimental feature] Add support for identity based data access. Users can register datastore or datasets without providing credentials. In such case, users' Azure AD token or managed identity of compute target will be used for authentication. To learn more, see [Connect to storage by using identity-based data access](./how-to-identity-based-data-access.md). + **azureml-pipeline-steps**
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
## 2021-01-31
-### Azure Machine Learning Studio Notebooks Experience (January Update)
+### Azure Machine Learning studio Notebooks Experience (January Update)
+ **New features** + Native Markdown Editor in AzureML. Users can now render and edit markdown files natively in AzureML Studio. + [Run Button for Scripts (.py, .R and .sh)](./how-to-run-jupyter-notebooks.md#run-a-notebook-or-python-script). Users can easily now run Python, R and Bash script in AzureML
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
## 2020-12-31
-### Azure Machine Learning Studio Notebooks Experience (December Update)
+### Azure Machine Learning studio Notebooks Experience (December Update)
+ **New features** + User Filename search. Users are now able to search all the files saved in a workspace. + Markdown Side by Side support per Notebook Cell. In a notebook cell, users can now have the option to view rendered markdown and markdown syntax side-by-side.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Deprecated the use of Mpi as a valid type of input for Estimator classes in favor of using MpiConfiguration with ScriptRunConfig. ## 2020-11-30
-### Azure Machine Learning Studio Notebooks Experience (November Update)
+### Azure Machine Learning studio Notebooks Experience (November Update)
+ **New features** + Native Terminal. Users will now have access to an integrated terminal as well as Git operation via the [integrated terminal.](./how-to-access-terminal.md) + Duplicate Folder
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Improved error message to include potential fixes when a dataset is incorrectly passed to an experiment (e.g. ScriptRunConfig). + Improved documentation for `OutputDatasetConfig.register_on_complete` to include the behavior of what will happen when the name already exists. + Specifying dataset input and output names that have the potential to collide with common environment variables will now result in a warning
- + Repurposed `grant_workspace_access` parameter when registering datastores. Set it to `True` to access data behind virtual network from Machine Learning Studio.
+ + Repurposed `grant_workspace_access` parameter when registering datastores. Set it to `True` to access data behind virtual network from Machine Learning studio.
[Learn more](./how-to-enable-studio-virtual-network.md) + Linked service API is refined. Instead of providing resource ID, we have 3 separate parameters sub_id, rg, and name defined in configuration. + In order to enable customers to self-resolve token corruption issues, enable workspace token synchronization to be a public method.
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ **azureml-train-automl-client** + Fixed an issue where get_output may raise an XGBoostError.
-### Azure Machine Learning Studio Notebooks Experience (October Update)
+### Azure Machine Learning studio Notebooks Experience (October Update)
+ **New features** + [Full virtual network support](./how-to-enable-studio-virtual-network.md) + [Focus Mode](./how-to-run-jupyter-notebooks.md#focus-mode)
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ Updated run.log_table to allow individual rows to be logged. + Added static method `Run.get(workspace, run_id)` to retrieve a run only using a workspace + Added instance method `Workspace.get_run(run_id)` to retrieve a run within the workspace
- + Introducing command property in run configuration which will enables users to submit command instead of script & arguments.
+ + Introducing command property in run configuration which will enable users to submit command instead of script & arguments.
+ **azureml-interpret** + fixed explanation client is_raw flag behavior in azureml-interpret + **azureml-sdk** + `azureml-sdk` officially support Python 3.8. + **azureml-train-core** + Adding TensorFlow 2.3 curated environment
- + Introducing command property in run configuration which will enables users to submit command instead of script & arguments.
+ + Introducing command property in run configuration which will enable users to submit command instead of script & arguments.
+ **azureml-widgets** + Redesigned interface for script run widget.
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ Reverting PyTorch Default Version to 1.4. + Adding PyTorch 1.6 & TensorFlow 2.2 images and curated environment.
-### Azure Machine Learning Studio Notebooks Experience (August Update)
+### Azure Machine Learning studio Notebooks Experience (August Update)
+ **New features** + New Getting started landing Page
Learn more about [image instance segmentation labeling](how-to-label-data.md).
+ Fixed the run failures happening when the lookback features are enabled and the data contain short grains. + Fixed the issue with duplicated time index error message when lags or rolling windows were set to 'auto'. + Fixed the issue with Prophet and Arima models on data sets containing the lookback features.
- + Added support of dates before 1677-09-21 or after 2262-04-11 in columns other then date time in the forecasting tasks. Improved error messages.
+ + Added support of dates before 1677-09-21 or after 2262-04-11 in columns other than date time in the forecasting tasks. Improved error messages.
+ The forecasting parameters documentation was improved. The lag_length parameter was deprecated. + Better exception message on featurization step fit_transform() due to custom transformer parameters. + Add support for multiple languages for deep learning transformer models such as BERT in automated ML.
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Previously updated : 05/02/2022 Last updated : 08/05/2022 # Create an Azure Machine Learning compute cluster
Compute clusters can run jobs securely in a [virtual network environment](how-to
* Some of the scenarios listed in this document are marked as __preview__. Preview functionality is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-* Compute clusters can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. This preview is not available if you are using a private endpoint-enabled workspace.
+* Compute clusters can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. This preview isn't available if you're using a private endpoint-enabled workspace.
> [!WARNING] > When using a compute cluster in a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
Compute clusters can run jobs securely in a [virtual network environment](how-to
* Azure Machine Learning Compute has default limits, such as the number of cores that can be allocated. For more information, see [Manage and request quotas for Azure resources](how-to-manage-quotas.md).
-* Azure allows you to place _locks_ on resources, so that they cannot be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace will prevent scaling operations for Azure ML compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
+* Azure allows you to place _locks_ on resources, so that they can't be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace will prevent scaling operations for Azure ML compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
> [!TIP] > Clusters can generally scale up to 100 nodes as long as you have enough quota for the number of cores required. By default, clusters are set up with inter-node communication enabled between the nodes of the cluster, to support MPI jobs for example. However, you can scale your clusters to 1000s of nodes by [raising a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) and requesting to allowlist your subscription, workspace, or a specific cluster for disabling inter-node communication.
The compute autoscales down to zero nodes when it isn't used. Dedicated VMs ar
# [Python](#tab/python) - To create a persistent Azure Machine Learning Compute resource in Python, specify the **vm_size** and **max_nodes** properties. Azure Machine Learning then uses smart defaults for the other properties. * **vm_size**: The VM family of the nodes created by Azure Machine Learning Compute.
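For example, a minimal sketch using the SDK v1 `AmlCompute` API; the cluster name, VM size, and node counts are assumptions:

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# vm_size and max_nodes are the key choices; the rest fall back to smart defaults.
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    min_nodes=0,
    max_nodes=4,
    idle_seconds_before_scaledown=1800,
)
cluster = ComputeTarget.create(ws, "cpu-cluster", compute_config)
cluster.wait_for_completion(show_output=True)
```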
Where the file *create-cluster.yml* is:
# [Studio](#tab/azure-studio)
-For information on creating a compute cluster in the studio, see [Create compute targets in Azure Machine Learning studio](how-to-create-attach-compute-studio.md#amlcompute).
+Create a single- or multi-node compute cluster for your training, batch inferencing, or reinforcement learning workloads.
+
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
+
+1. Under __Manage__, select __Compute__.
+1. If you have no compute resources, select **Create** in the middle of the page.
+
+ :::image type="content" source="media/how-to-create-attach-studio/create-compute-target.png" alt-text="Screenshot that shows creating a compute target":::
+
+1. If you see a list of compute resources, select **+New** above the list.
+
+ :::image type="content" source="media/how-to-create-attach-studio/select-new.png" alt-text="Select new":::
+
+1. In the tabs at the top, select __Compute cluster__.
+
+1. Fill out the form as follows:
+
+ |Field |Description |
+ |||
+ | Location | The Azure region where the compute cluster will be created. By default, this is the same location as the workspace. Setting the location to a different region than the workspace is in __preview__, and is only available for __compute clusters__, not compute instances.</br>When using a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it. |
+ |Virtual machine type | Choose CPU or GPU. This type can't be changed after creation |
+ |Virtual machine priority | Choose **Dedicated** or **Low priority**. Low priority virtual machines are cheaper but don't guarantee the compute nodes. Your job may be preempted.
+ |Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |
+
+1. Select **Next** to proceed to **Advanced Settings** and fill out the form as follows:
+
+ |Field |Description |
+ |||
+ |Compute name | * Name is required and must be between 3 and 24 characters long.<br><br> * Valid characters are upper and lower case letters, digits, and the **-** character.<br><br> * Name must start with a letter<br><br> * Name needs to be unique across all existing computes within an Azure region. You'll see an alert if the name you choose isn't unique<br><br> * If the **-** character is used, then it needs to be followed by at least one letter later in the name |
+ |Minimum number of nodes | Minimum number of nodes that you want to provision. If you want a dedicated number of nodes, set that count here. Save money by setting the minimum to 0, so you won't pay for any nodes when the cluster is idle. |
+ |Maximum number of nodes | Maximum number of nodes that you want to provision. The compute will autoscale to a maximum of this node count when a job is submitted. |
+ | Idle seconds before scale down | Idle time before scaling the cluster down to the minimum node count. |
+ | Enable SSH access | Use the same instructions as [Enable SSH access](#enable-ssh-access) for a compute instance (above). |
+ |Advanced settings | Optional. Configure a virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). For more information, see these [network requirements](./how-to-secure-training-vnet.md) for vnet. Also attach [managed identities](#set-up-managed-identity) to grant access to resources.
+
+1. Select __Create__.
++
+### Enable SSH access
+
+SSH access is disabled by default. SSH access can't be changed after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md).
++
+### Connect with SSH access
+
- ## <a id="low-pri-vm"></a> Lower your compute cluster cost
+ ## Lower your compute cluster cost
-You may also choose to use [low-priority VMs](how-to-manage-optimize-cost.md#low-pri-vm) to run some or all of your workloads. These VMs do not have guaranteed availability and may be preempted while in use. You will have to restart a preempted job.
+You may also choose to use [low-priority VMs](how-to-manage-optimize-cost.md#low-pri-vm) to run some or all of your workloads. These VMs don't have guaranteed availability and may be preempted while in use. You'll have to restart a preempted job.
Use any of these ways to specify a low-priority VM:
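For instance, with the SDK v1 `AmlCompute` API, low priority is requested through the `vm_priority` parameter (a sketch; the cluster name and VM size are hypothetical):

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# vm_priority="lowpriority" requests cheaper, preemptible nodes.
config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    vm_priority="lowpriority",
    max_nodes=4,
)
cluster = ComputeTarget.create(ws, "lowpri-cluster", config)
```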
To update an existing cluster:
# [Studio](#tab/azure-studio)
-See [Set up managed identity in studio](how-to-create-attach-compute-studio.md#managed-identity).
+During cluster creation or when editing compute cluster details, in the **Advanced settings**, toggle **Assign a managed identity** and specify a system-assigned identity or user-assigned identity.
See [Set up managed identity in studio](how-to-create-attach-compute-studio.md#m
## Troubleshooting
-There is a chance that some users who created their Azure Machine Learning workspace from the Azure portal before the GA release might not be able to create AmlCompute in that workspace. You can either raise a support request against the service or create a new workspace through the portal or the SDK to unblock yourself immediately.
+There's a chance that some users who created their Azure Machine Learning workspace from the Azure portal before the GA release might not be able to create AmlCompute in that workspace. You can either raise a support request against the service or create a new workspace through the portal or the SDK to unblock yourself immediately.
### Stuck at resizing
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
Title: Create training & deploy computes (studio)
+ Title: Manage training & deploy computes (studio)
-description: Use studio to create training and deployment compute resources (compute targets) for machine learning
+description: Use studio to manage training and deployment compute resources (compute targets) for machine learning.
Previously updated : 07/28/2022 Last updated : 08/11/2022 -+
-# Create compute targets for model training and deployment in Azure Machine Learning studio
+# Manage compute resources for model training and deployment in studio
-In this article, learn how to create and manage compute targets in Azure Machine studio. You can also create and manage compute targets with:
-
-* Azure Machine Learning SDK or CLI extension for Azure Machine Learning
- * [Compute instance](how-to-create-manage-compute-instance.md)
- * [Compute cluster](how-to-create-attach-compute-cluster.md)
- * [Other compute resources](how-to-attach-compute-targets.md)
-* The [VS Code extension](how-to-manage-resources-vscode.md#compute-clusters) for Azure Machine Learning.
-
-> [!IMPORTANT]
-> Items marked (preview) in this article are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+In this article, learn how to manage the compute resources you use for model training and deployment in Azure Machine Learning studio.
## Prerequisites * If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today * An [Azure Machine Learning workspace](quickstart-create-resources.md)
-## What's a compute target?
+## What's a compute target?
-With Azure Machine Learning, you can train your model on a variety of resources or environments, collectively referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You can also create compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-managed-online-endpoints.md).
+With Azure Machine Learning, you can train your model on a variety of resources or environments, collectively referred to as _compute targets_. A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine.
-## <a id="portal-view"></a>View compute targets
+## View compute targets
To see all compute targets for your workspace, use the following steps:
To see all compute targets for your workspace, use the following steps:
:::image type="content" source="media/how-to-create-attach-studio/view-compute-targets.png" alt-text="View list of compute targets":::
-## <a id="portal-create"></a>Start creation process
-
-Follow the previous steps to view the list of compute targets. Then use these steps to create a compute target:
-
-1. Select the tab at the top corresponding to the type of compute you will create.
-
-1. If you have no compute targets, select **Create** in the middle of the page.
-
- :::image type="content" source="media/how-to-create-attach-studio/create-compute-target.png" alt-text="Create compute target":::
-
-1. If you see a list of compute resources, select **+New** above the list.
-
- :::image type="content" source="media/how-to-create-attach-studio/select-new.png" alt-text="Select new":::
--
-1. Fill out the form for your compute type:
-
- * [Compute instance](how-to-create-manage-compute-instance.md?tabs=azure-studio#create)
- * [Compute clusters](#amlcompute)
- * [Attached compute](#attached-compute)
-
-1. Select __Create__.
-
-1. View the status of the create operation by selecting the compute target from the list:
-
- :::image type="content" source="media/how-to-create-attach-studio/view-list.png" alt-text="View compute status from a list":::
-
-Follow the steps in [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md?tabs=azure-studio#create).
--
-## <a name="amlcompute"></a> Create compute clusters
-
-Create a single or multi node compute cluster for your training, batch inferencing or reinforcement learning workloads. Use the [steps above](#portal-create) to create the compute cluster. Then fill out the form as follows:
-
-|Field |Description |
-|||
-| Location | The Azure region where the compute cluster will be created. By default, this is the same location as the workspace. Setting the location to a different region than the workspace is in __preview__, and is only available for __compute clusters__, not compute instances.</br>When using a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it. |
-|Virtual machine type | Choose CPU or GPU. This type cannot be changed after creation |
-|Virtual machine priority | Choose **Dedicated** or **Low priority**. Low priority virtual machines are cheaper but don't guarantee the compute nodes. Your job may be preempted.
-|Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |
-
-Select **Next** to proceed to **Advanced Settings** and fill out the form as follows:
-
-|Field |Description |
-|||
-|Compute name | <li>Name is required and must be between 3 to 24 characters long.</li><li>Valid characters are upper and lower case letters, digits, and the **-** character.</li><li>Name must start with a letter</li><li>Name needs to be unique across all existing computes within an Azure region. You will see an alert if the name you choose is not unique</li><li>If **-** character is used, then it needs to be followed by at least one letter later in the name</li> |
-|Minimum number of nodes | Minimum number of nodes that you want to provision. If you want a dedicated number of nodes, set that count here. Save money by setting the minimum to 0, so you won't pay for any nodes when the cluster is idle. |
-|Maximum number of nodes | Maximum number of nodes that you want to provision. The compute will autoscale to a maximum of this node count when a job is submitted. |
-| Idle seconds before scale down | Idle time before scaling the cluster down to the minimum node count. |
-| Enable SSH access | Use the same instructions as [Enable SSH access](#enable-ssh) for a compute instance (above). |
-|Advanced settings | Optional. Configure a virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). For more information, see these [network requirements](./how-to-secure-training-vnet.md) for vnet. Also attach [managed identities](#managed-identity) to grant access to resources |
-
-### <a name="enable-ssh"></a> Enable SSH access
+## Compute instance and clusters
-SSH access is disabled by default. SSH access cannot be changed after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md).
+You can create compute instances and compute clusters in your workspace, using the Azure Machine Learning SDK, CLI, or studio:
+* [Compute instance](how-to-create-manage-compute-instance.md)
+* [Compute cluster](how-to-create-attach-compute-cluster.md)
-Once the compute cluster is created and running, see [Connect with SSH access](#ssh-access).
+In addition, you can use the [VS Code extension](how-to-manage-resources-vscode.md#compute-clusters) to create compute instances and compute clusters in your workspace.
-### <a name="managed-identity"></a> Set up managed identity
+## Kubernetes cluster
+For information on configuring and attaching a Kubernetes cluster to your workspace, see [Configure Kubernetes cluster for Azure Machine Learning](how-to-attach-kubernetes-anywhere.md).
-During cluster creation or when editing compute cluster details, in the **Advanced settings**, toggle **Assign a managed identity** and specify a system-assigned identity or user-assigned identity.
+## Other compute targets
-### Managed identity usage
+To use VMs created outside the Azure Machine Learning workspace, you must first attach them to your workspace. Attaching the compute resource makes it available to your workspace.
-
-## <a name="inference-clusters"></a> Create inference clusters
-
-> [!IMPORTANT]
-> Using Azure Kubernetes Service with Azure Machine Learning has multiple configuration options. Some scenarios, such as networking, require additional setup and configuration. For more information on using AKS with Azure ML, see [Create and attach an Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md).
-Create or attach an Azure Kubernetes Service (AKS) cluster for large scale inferencing. Use the [steps above](#portal-create) to create the AKS cluster. Then fill out the form as follows:
--
-|Field |Description |
-|||
-|Compute name | <li>Name is required. Name must be between 2 to 16 characters. </li><li>Valid characters are upper and lower case letters, digits, and the **-** character.</li><li>Name must start with a letter</li><li>Name needs to be unique across all existing computes within an Azure region. You will see an alert if the name you choose is not unique</li><li>If **-** character is used, then it needs to be followed by at least one letter later in the name</li> |
-|Kubernetes Service | Select **Create New** and fill out the rest of the form. Or select **Use existing** and then select an existing AKS cluster from your subscription.
-|Region | Select the region where the cluster will be created |
-|Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |
-|Cluster purpose | Select **Production** or **Dev-test** |
-|Number of nodes | The number of nodes multiplied by the virtual machine's number of cores (vCPUs) must be greater than or equal to 12. |
-| Network configuration | Select **Advanced** to create the compute within an existing virtual network. For more information about AKS in a virtual network, see [Network isolation during training and inference with private endpoints and virtual networks](./v1/how-to-secure-inferencing-vnet.md). |
-| Enable SSL configuration | Use this to configure SSL certificate on the compute |
-
-## <a name="attached-compute"></a> Attach other compute
-
-To use compute targets created outside the Azure Machine Learning workspace, you must attach them. Attaching a compute target makes it available to your workspace. Use **Attached compute** to attach a compute target for **training**. Use **Inference clusters** to attach an AKS cluster for **inferencing**.
-
-Use the [steps above](#portal-create) to attach a compute. Then fill out the form as follows:
+1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
+
+1. Under __Manage__, select __Compute__.
-1. Enter a name for the compute target.
-1. Select the type of compute to attach. Not all compute types can be attached from Azure Machine Learning studio. The compute types that can currently be attached for training include:
- * An Azure Virtual Machine (to attach a Data Science Virtual Machine)
- * Azure Databricks (for use in machine learning pipelines)
- * Azure Data Lake Analytics (for use in machine learning pipelines)
- * Azure HDInsight
- * [Kubernetes](./how-to-attach-kubernetes-anywhere.md#attach-a-kubernetes-cluster-to-an-azure-ml-workspace)
+1. In the tabs at the top, select **Attached compute** to attach a compute target for **training**. Or select **Inference clusters** to attach an AKS cluster for **inferencing**.
+1. Select **+New**, then select the type of compute to attach. Not all compute types can be attached from Azure Machine Learning studio.
1. Fill out the form and provide values for the required properties.
To detach your compute use the following steps:
1. In Azure Machine Learning studio, select __Compute__, __Attached compute__, and the compute you wish to remove. 1. Use the __Detach__ link to detach your compute.
-## <a name="ssh-access"></a> Connect with SSH access
+## Connect with SSH access
-If you created your compute instance or compute cluster with SSH access enabled, use these steps for access.
-
-1. Find the compute in your workspace resources:
- 1. On the left, select **Compute**.
- 1. Use the tabs at the top to select **Compute instance** or **Compute cluster** to find your machine.
-1. Select the compute name in the list of resources.
-1. Find the connection string:
-
- * For a **compute instance**, select **Connect** at the top of the **Details** section.
-
- :::image type="content" source="media/how-to-create-attach-studio/details.png" alt-text="Screenshot: Connect tool at the top of the Details page.":::
-
- * For a **compute cluster**, select **Nodes** at the top, then select the **Connection string** in the table for your node.
- :::image type="content" source="media/how-to-create-attach-studio/compute-nodes.png" alt-text="Screenshot: Connection string for a node in a compute cluster.":::
-
-1. Copy the connection string.
-1. For Windows, open PowerShell or a command prompt:
- 1. Go into the directory or folder where your key is stored
- 1. Add the -i flag to the connection string to locate the private key and point to where it is stored:
-
- `ssh -i <keyname.pem> azureuser@... (rest of connection string)`
-
-1. For Linux users, follow the steps from [Create and use an SSH key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md)
-1. For SCP use:
-
- `scp -i key.pem -P {port} {fileToCopyFromLocal } azureuser@yourComputeInstancePublicIP:~/{destination}`
## Next steps
-After a target is created and attached to your workspace, you use it in your [run configuration](how-to-set-up-training-targets.md) with a `ComputeTarget` object:
--
-```python
-from azureml.core.compute import ComputeTarget
-myvm = ComputeTarget(workspace=ws, name='my-vm-name')
-```
- * Use the compute resource to [submit a training run](how-to-set-up-training-targets.md). * Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models. * Once you have a trained model, learn [how and where to deploy models](how-to-deploy-managed-online-endpoints.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Previously updated : 05/04/2022 Last updated : 08/05/2022 # Create and manage an Azure Machine Learning compute instance
Where the file *create-instance.yml* is:
* Add schedule (preview). Schedule times for the compute instance to automatically start and/or shutdown. See [schedule details](#schedule-automatic-start-and-stop-preview) below. - You can also create a compute instance with an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance).
-## Enable SSH access
+### Enable SSH access
SSH access is disabled by default. SSH access can't be changed after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md). [!INCLUDE [amlinclude-info](../../includes/machine-learning-enable-ssh.md)]
-Once the compute instance is created and running, see [Connect with SSH access](how-to-create-attach-compute-studio.md#ssh-access).
+### Connect with SSH
+++ ## Create on behalf of (preview)
machine-learning How To Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-pipelines.md
Once you have a pipeline up and running, you can publish a pipeline so that it r
version="1.0") ```
+4. After you publish your pipeline, you can check it in the UI. The pipeline ID is the unique identifier of the published pipeline.
+
+ :::image type="content" source="./media/how-to-deploy-pipelines/published-pipeline-detail.png" alt-text="Screenshot showing published pipeline detail." lightbox= "./media/how-to-deploy-pipelines/published-pipeline-detail.png":::
+ ## Run a published pipeline All published pipelines have a REST endpoint. With the pipeline endpoint, you can trigger a run of the pipeline from any external systems, including non-Python clients. This endpoint enables "managed repeatability" in batch scoring and retraining scenarios.
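As a sketch of calling that REST endpoint from Python, assuming the `published_pipeline` object from the publish step above and an interactive Azure login (the experiment name is hypothetical):

```python
import requests
from azureml.core.authentication import InteractiveLoginAuthentication

# Acquire an Azure AD token to authenticate the request.
auth = InteractiveLoginAuthentication()
aad_token = auth.get_authentication_header()

response = requests.post(
    published_pipeline.endpoint,                 # REST endpoint of the published pipeline
    headers=aad_token,
    json={"ExperimentName": "My_Pipeline_Run"},  # hypothetical experiment name
)
print(response.json().get("Id"))                 # run ID of the triggered pipeline run
```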
You can create a Pipeline Endpoint with multiple published pipelines behind it.
```python from azureml.pipeline.core import PipelineEndpoint
-published_pipeline = PublishedPipeline.get(workspace=ws, name="My_Published_Pipeline")
+published_pipeline = PublishedPipeline.get(workspace=ws, id="My_Published_Pipeline_id")
pipeline_endpoint = PipelineEndpoint.publish(workspace=ws, name="PipelineEndpointTest", pipeline=published_pipeline, description="Test description Notebook") ```
machine-learning How To Schedule Pipeline Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md
You can schedule a pipeline job yaml in local or an existing pipeline job in wor
- **(Required)** `type` specifies the schedule type is `recurrence`. It can also be `cron`, see details in the next section.
+List continues below.
+ # [Python](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
You can schedule a pipeline job yaml in local or an existing pipeline job in wor
- **(Required)** To provide better coding experience, we use `RecurrenceTrigger` for recurrence schedule.
+List continues below.
+ - **(Required)** `frequency` specifies the unit of time that describes how often the schedule fires. Can be `minute`, `hour`, `day`, `week`, `month`.
The `trigger` section defines the schedule details and contains following proper
- **(Required)** `type` specifies the schedule type is `cron`.
+List continues below.
+ # [Python](#tab/python) [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
The `CronTrigger` section defines the schedule details and contains following pr
- **(Required)** To provide better coding experience, we use `CronTrigger` for recurrence schedule.
+List continues below.
+ - **(Required)** `expression` uses standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields:
You can also apply [Azure CLI JMESPath query](/cli/azure/query-azure-cli) to que
> [!NOTE]
-> Information the user should notice even if skimming
+> Information the user should notice even if skimming
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
In this article you learn how to secure the following training compute resources
* One network security group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
+ > [!IMPORTANT]
+ > Compute instance and compute cluster automatically create an NSG with the required rules.
+ >
+ > If you have another NSG at the subnet level, the rules in the subnet level NSG mustn't conflict with the rules in the automatically created NSG.
+ >
+ > To learn how the NSGs filter your network traffic, see [How network security groups filter network traffic](/azure/virtual-network/network-security-group-how-it-works).
+ * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag. * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
machine-learning How To Trigger Published Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-trigger-published-pipeline.md
Previously updated : 10/21/2021 Last updated : 08/12/2022 #Customer intent: As a Python coding data scientist, I want to improve my operational efficiency by scheduling my training pipeline of my model using the latest data.
recurring_schedule = Schedule.create(ws, name="MyRecurringSchedule",
Pipelines that are triggered by file changes may be more efficient than time-based schedules. When you want to do something before a file is changed, or when a new file is added to a data directory, you can preprocess that file. You can monitor any changes to a datastore or changes within a specific directory within the datastore. If you monitor a specific directory, changes within subdirectories of that directory will _not_ trigger a job.
+> [!NOTE]
+> Change-based schedules only support monitoring Azure Blob storage.
+ To create a file-reactive `Schedule`, you must set the `datastore` parameter in the call to [Schedule.create](/python/api/azureml-pipeline-core/azureml.pipeline.core.schedule.schedule#create-workspace--name--pipeline-id--experiment-name--recurrence-none--description-none--pipeline-parameters-none--wait-for-provisioning-false--wait-timeout-3600--datastore-none--polling-interval-5--data-path-parameter-name-none--continue-on-step-failure-none--path-on-datastore-noneworkflow-provider-noneservice-endpoint-none-). To monitor a folder, set the `path_on_datastore` argument. The `polling_interval` argument allows you to specify, in minutes, the frequency at which the datastore is checked for changes.
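A hedged sketch of a file-reactive schedule; the published pipeline ID, datastore, folder, and experiment names are hypothetical:

```python
from azureml.core import Workspace, Datastore
from azureml.pipeline.core import Schedule

ws = Workspace.from_config()
datastore = Datastore.get(ws, "workspaceblobstore")

# Poll the 'incoming/' folder every 5 minutes and trigger the pipeline when files change.
reactive_schedule = Schedule.create(
    ws,
    name="MyReactiveSchedule",
    description="Run when new data lands",
    pipeline_id="<published-pipeline-id>",
    experiment_name="reactive-runs",
    datastore=datastore,
    path_on_datastore="incoming/",
    polling_interval=5,
)
```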
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-identities.md
az ml compute create --name cpucluster --type <cluster name> --identity-type sy
# [Portal](#tab/azure-portal)
-For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](how-to-create-attach-compute-studio.md#managed-identity).
+For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
machine-learning How To Use Pipeline Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-ui.md
If your pipeline fails or gets stuck on a node, first view the logs.
The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [View and download diagnostic logs](how-to-log-view-metrics.md#view-and-download-diagnostic-logs).
- :::image type="content" source="./media/how-to-use-pipeline-ui/view-user-log.png" alt-text="Screenshot showing the user logs of a node." lightbox= "./media/how-to-use-pipeline-ui/view-user-log.png":::
+ ![How to check node logs](media/how-to-use-pipeline-ui/node-logs.gif)
If you don't see those folders, this is due to the compute run time update isn't released to the compute cluster yet, and you can look at **70_driver_log.txt** under **azureml-logs** folder first.
- :::image type="content" source="./media/how-to-use-pipeline-ui/view-driver-logs.png" alt-text="Screenshot showing the driver logs of a node." lightbox= "./media/how-to-use-pipeline-ui/view-driver-logs.png":::
## Clone a pipeline job to continue editing
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-web-service.md
There are multiple ways to deploy a model in Azure Machine Learning. One of the
| Compute target | Used for | Description | Creation | | -- | -- | -- | -- |
- |[Azure Kubernetes Service (AKS)](v1/how-to-deploy-azure-kubernetes-service.md) |Real-time inference|Large-scale, production deployments. Fast response time and service autoscaling.| User-created. For more information, see [Create compute targets](how-to-create-attach-compute-studio.md#inference-clusters). |
+ |[Azure Kubernetes Service (AKS)](v1/how-to-deploy-azure-kubernetes-service.md) |Real-time inference|Large-scale, production deployments. Fast response time and service autoscaling.| User-created. For more information, see [Create compute targets](how-to-create-attach-compute-studio.md). |
|[Azure Container Instances](v1/how-to-deploy-azure-container-instance.md)|Testing or development | Small-scale, CPU-based workloads that require less than 48 GB of RAM.| Automatically created by Azure Machine Learning. ### Test the real-time endpoint
There are multiple ways to deploy a model in Azure Machine Learning. One of the
After deployment completes, you can see more details and test your endpoint: 1. Go the **Endpoints** tab.
-1. Select you endpoint.
+1. Select your endpoint.
1. Select the **Test** tab. ![Screenshot showing the Endpoints tab with the Test endpoint button](./media/migrate-rebuild-web-service/test-realtime-endpoint.png)
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
Azure Machine Learning also supports attaching an Azure Virtual Machine. The VM
compute.wait_for_completion(show_output=True) ```
- Or you can attach the DSVM to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#attached-compute).
+ Or you can attach the DSVM to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#other-compute-targets).
> [!WARNING] > Do not create multiple, simultaneous attachments to the same DSVM from your workspace. Each new attachment will break the previous existing attachment(s).
Azure HDInsight is a popular platform for big-data analytics. The platform provi
hdi_compute.wait_for_completion(show_output=True) ```
- Or you can attach the HDInsight cluster to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#attached-compute).
+ Or you can attach the HDInsight cluster to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#other-compute-targets).
> [!WARNING] > Do not create multiple, simultaneous attachments to the same HDInsight from your workspace. Each new attachment will break the previous existing attachment(s).
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-kubernetes.md
--++ Last updated 04/21/2022
az ml computetarget create aks -n myaks
For more information, see the [az ml computetarget create aks](/cli/azure/ml(v1)/computetarget/create#az-ml-computetarget-create-aks) reference.
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
-For information on creating an AKS cluster in the portal, see [Create compute targets in Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#inference-clusters).
+For information on creating an AKS cluster in the portal, see [Create compute targets in Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#other-compute-targets).
az ml computetarget attach aks -n myaks -i aksresourceid -g myresourcegroup -w m
For more information, see the [az ml computetarget attach aks](/cli/azure/ml(v1)/computetarget/attach#az-ml-computetarget-attach-aks) reference.
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
-For information on attaching an AKS cluster in the portal, see [Create compute targets in Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#inference-clusters).
+For information on attaching an AKS cluster in the studio, see [Create compute targets in Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#other-compute-targets).
To detach the existing cluster to your workspace, use the following command. Rep
az ml computetarget detach -n myaks -g myresourcegroup -w myworkspace ```
-# [Portal](#tab/azure-portal)
+# [Studio](#tab/azure-studio)
In Azure Machine Learning studio, select __Compute__, __Inference clusters__, and the cluster you wish to remove. Use the __Detach__ link to detach the cluster.
machine-learning How To Extend Prebuilt Docker Image Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-extend-prebuilt-docker-image-inference.md
+
+ Title: Extend prebuilt Docker image
+
+description: 'Extend Prebuilt docker images in Azure Machine Learning'
+++++ Last updated : 10/21/2021+++++
+# Extend a prebuilt Docker image
+
+In some cases, the [prebuilt Docker images for model inference](../concept-prebuilt-docker-images-inference.md) and [extensibility](./how-to-prebuilt-docker-images-inference-python-extensibility.md) solutions for Azure Machine Learning may not meet your inference service needs.
+
+In this case, you can use a Dockerfile to create a new image, using one of the prebuilt images as the starting point. By extending from an existing prebuilt Docker image, you can use the Azure Machine Learning network stack and libraries without creating an image from scratch.
+
+**Benefits and tradeoffs**
+
+Using a Dockerfile allows for full customization of the image before deployment. It allows you to have maximum control over what dependencies or environment variables, among other things, are set in the container.
+
+The main tradeoff for this approach is that an extra image build will take place during deployment, which slows down the deployment process. If you can use the [Python package extensibility](./how-to-prebuilt-docker-images-inference-python-extensibility.md) method, deployment will be faster.
+## Prerequisites
+
+* An Azure Machine Learning workspace. For a tutorial on creating a workspace, see [Get started with Azure Machine Learning](../quickstart-create-resources.md).
+* Familiarity with authoring a [Dockerfile](https://docs.docker.com/engine/reference/builder/).
+* Either a local working installation of [Docker](https://www.docker.com/), including the `docker` CLI, **OR** an Azure Container Registry (ACR) associated with your Azure Machine Learning workspace.
+
+ > [!WARNING]
+ > The Azure Container Registry for your workspace is created the first time you train or deploy a model using the workspace. If you've created a new workspace, but not trained or created a model, no Azure Container Registry will exist for the workspace.
+## Create and build Dockerfile
+
+Below is a sample Dockerfile that uses an Azure Machine Learning prebuilt Docker image as a base image:
+
+```Dockerfile
+FROM mcr.microsoft.com/azureml/<image_name>:<tag>
+
+COPY requirements.txt /tmp/requirements.txt
+
+RUN pip install -r /tmp/requirements.txt
+```
+
+Then put the above Dockerfile into the directory with all the necessary files and run the following command to build the image:
+
+```bash
+docker build -f <above dockerfile> -t <image_name>:<tag> .
+```
+
+> [!TIP]
+> More details about `docker build` can be found here in the [Docker documentation](https://docs.docker.com/engine/reference/commandline/build/).
+
+If the `docker build` command isn't available locally, use the Azure Container Registry (ACR) for your Azure Machine Learning workspace to build the Docker image in the cloud. For more information, see [Tutorial: Build and deploy container images with Azure Container Registry](/azure/container-registry/container-registry-tutorial-quick-task).
+
+> [!IMPORTANT]
+> Microsoft recommends that you first validate that your Dockerfile works locally before trying to create a custom base image via Azure Container Registry.
+
+The following sections contain more specific details on the Dockerfile.
+
+## Install extra packages
+
+If there are any other `apt` packages that need to be installed in the Ubuntu container, you can add them in the Dockerfile. The following example demonstrates how to use the `apt-get` command from a Dockerfile:
+
+```Dockerfile
+FROM <prebuilt docker image from MCR>
+
+# Switch to root to install apt packages
+USER root:root
+
+RUN apt-get update && \
+ apt-get install -y \
+ <package-1> \
+ ...
+ <package-n> && \
+ apt-get clean -y && \
+ rm -rf /var/lib/apt/lists/*
+
+# Switch back to non-root user
+USER dockeruser
+```
+
+You can also install additional pip packages from a Dockerfile. The following example demonstrates using `pip install`:
+
+```Dockerfile
+RUN pip install <library>
+```
+
+<a id="buildmodel"></a>
+
+## Build model and code into images
+
+If the model and code need to be built into the image, the following environment variables need to be set in the Dockerfile:
+
+* `AZUREML_ENTRY_SCRIPT`: The entry script of your code. This file contains the `init()` and `run()` methods.
+* `AZUREML_MODEL_DIR`: The directory that contains the model file(s). The entry script should use this directory as the root directory of the model.
+
+The following example demonstrates setting these environment variables in the Dockerfile:
+
+```Dockerfile
+FROM <prebuilt docker image from MCR>
+
+# Code
+COPY <local_code_directory> /var/azureml-app
+ENV AZUREML_ENTRY_SCRIPT=<entryscript_file_name>
+
+# Model
+COPY <model_directory> /var/azureml-app/azureml-models
+ENV AZUREML_MODEL_DIR=/var/azureml-app/azureml-models
+```
+
+## Example Dockerfile
+
+The following example demonstrates installing `apt` packages, setting environment variables, and including code and models as part of the Dockerfile:
+
+```Dockerfile
+FROM mcr.microsoft.com/azureml/pytorch-1.6-ubuntu18.04-py37-cpu-inference:latest
+
+USER root:root
+
+# Install libpng-tools and opencv
+RUN apt-get update && \
+ apt-get install -y \
+ libpng-tools \
+ python3-opencv && \
+ apt-get clean -y && \
+ rm -rf /var/lib/apt/lists/*
+
+# Switch back to non-root user
+USER dockeruser
+
+# Code
+COPY code /var/azureml-app
+ENV AZUREML_ENTRY_SCRIPT=score.py
+
+# Model
+COPY model /var/azureml-app/azureml-models
+ENV AZUREML_MODEL_DIR=/var/azureml-app/azureml-models
+```
+
+## Next steps
+
+To use a Dockerfile with the Azure Machine Learning Python SDK, see the following documents:
+
+* [Use your own local Dockerfile](../how-to-use-environments.md#use-your-own-dockerfile)
+* [Use a pre-built Docker image and create a custom base image](../how-to-use-environments.md#use-a-prebuilt-docker-image)
+
+To learn more about deploying a model, see [How to deploy a model](how-to-deploy-and-where.md).
+
+To learn how to troubleshoot prebuilt docker image deployments, see [how to troubleshoot prebuilt Docker image deployments](how-to-troubleshoot-prebuilt-docker-image-inference.md).
machine-learning How To Prebuilt Docker Images Inference Python Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prebuilt-docker-images-inference-python-extensibility.md
+
+ Title: Prebuilt Docker image Python extensibility
+
+description: 'Extend prebuilt docker images with Python package extensibility solution.'
+++++ Last updated : 08/15/2022+++++
+# Python package extensibility for prebuilt Docker images (preview)
++
+The [prebuilt Docker images for model inference](../concept-prebuilt-docker-images-inference.md) contain packages for popular machine learning frameworks. There are two methods that can be used to add Python packages __without rebuilding the Docker image__:
+
+* [Dynamic installation](#dynamic): This approach uses a [requirements](https://pip.pypa.io/en/stable/cli/pip_install/#requirements-file-format) file to automatically restore Python packages when the Docker container boots.
+
+ Consider this method __for rapid prototyping__. When the image starts, packages are restored using the `requirements.txt` file. This method increases the startup time of the image, and you must wait longer before the deployment can handle requests.
+
+* [Pre-installed Python packages](#preinstalled): You provide a directory containing preinstalled Python packages. During deployment, this directory is mounted into the container for your entry script (`score.py`) to use.
+
+ Use this approach __for production deployments__. Since the directory containing the packages is mounted to the image, it can be used even when your deployments don't have public internet access. For example, when deployed into a secured Azure Virtual Network.
+
+> [!IMPORTANT]
+> Using Python package extensibility for prebuilt Docker images with Azure Machine Learning is currently in preview. Preview functionality is provided "as-is", with no guarantee of support or service level agreement. For more information, see the [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* An Azure Machine Learning workspace. For a tutorial on creating a workspace, see [Get started with Azure Machine Learning](../quickstart-create-resources.md).
+* Familiarity with using Azure Machine Learning [environments](../how-to-use-environments.md).
+* Familiarity with [Where and how to deploy models](how-to-deploy-and-where.md) with Azure Machine Learning.
+
+<a id="dynamic"></a>
+
+## Dynamic installation
+
+This approach uses a [requirements](https://pip.pypa.io/en/stable/cli/pip_install/#requirements-file-format) file to automatically restore Python packages when the image starts up.
+
+To extend your prebuilt docker container image through a requirements.txt, follow these steps:
+
+1. Create a `requirements.txt` file alongside your `score.py` script.
+2. Add **all** of your required packages to the `requirements.txt` file.
+3. Set the `AZUREML_EXTRA_REQUIREMENTS_TXT` environment variable in your Azure Machine Learning [environment](../how-to-use-environments.md) to the location of `requirements.txt` file.
+
+Once deployed, the packages will automatically be restored for your score script.
+
+> [!TIP]
+> Even while prototyping, we recommend that you pin each package version in `requirements.txt`.
+> For example, use `scipy == 1.2.3` instead of just `scipy` or even `scipy > 1.2.3`.
+> If you don't pin an exact version and `scipy` releases a new version, this can break your scoring script and cause failures during deployment and scaling.
+
+The following example demonstrates setting the `AZUREML_EXTRA_REQUIREMENTS_TXT` environment variable:
+
+```python
+from azureml.core import Environment
+from azureml.core.conda_dependencies import CondaDependencies
+
+myenv = Environment(name="my_azureml_env")
+myenv.docker.enabled = True
+myenv.docker.base_image = <MCR-path>
+myenv.python.user_managed_dependencies = True
+
+myenv.environment_variables = {
+ "AZUREML_EXTRA_REQUIREMENTS_TXT": "requirements.txt"
+}
+```
+
+The following diagram is a visual representation of the dynamic installation process:
++
+<a id="preinstalled"></a>
+
+## Pre-installed Python packages
+
+This approach mounts a directory that you provide into the image. The Python packages from this directory can then be used by the entry script (`score.py`).
+
+To extend your prebuilt docker container image through pre-installed Python packages, follow these steps:
+
+> [!IMPORTANT]
+> You must use packages compatible with Python 3.7. All current images are pinned to Python 3.7.
+
+1. Create a virtual environment using [virtualenv](https://virtualenv.pypa.io/).
+
+1. Install your dependencies. If you have a list of dependencies in a `requirements.txt`, for example, you can use it to install them with `pip install -r requirements.txt`, or install individual dependencies with `pip install`.
+
+1. When you specify the `AZUREML_EXTRA_PYTHON_LIB_PATH` environment variable, make sure that you point to the correct site packages directory, which will vary depending on your environment name and Python version. The following code demonstrates setting the path for a virtual environment named `myenv` and Python 3.7:
++
+ ```python
+ from azureml.core import Environment
+ from azureml.core.conda_dependencies import CondaDependencies
+
+ myenv = Environment(name='my_azureml_env')
+ myenv.docker.enabled = True
+ myenv.docker.base_image = <MCR-path>
+ myenv.python.user_managed_dependencies = True
+
+ myenv.environment_variables = {
+ "AZUREML_EXTRA_PYTHON_LIB_PATH": "myenv/lib/python3.7/site-packages"
+ }
+ ```
+
+The following diagram is a visual representation of the pre-installed packages process:
++
+### Common problems
+
+The mounting solution will only work when your `myenv` site packages directory contains all of your dependencies. If your local environment is using dependencies installed in a different location, they won't be available in the image.
+
+Here are some things that may cause this problem:
+
+* `virtualenv` creates an isolated environment by default. Once you activate the virtual environment, __global dependencies cannot be used__.
+* If you have a `PYTHONPATH` environment variable pointing to your global dependencies, __it may interfere with your virtual environment__. Run `pip list` and `pip freeze` after activating your environment to make sure no unwanted dependencies are in your environment.
+* __Conda and `virtualenv` environments can interfere__. Make sure not to use a [Conda environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) and `virtualenv` at the same time.
+
+## Limitations
+
+### Model.package()
+
+* The [Model.package()](/python/api/azureml-core/azureml.core.model(class)) method lets you create a model package in the form of a Docker image or Dockerfile build context. Using Model.package() with prebuilt inference docker images triggers an intermediate image build that changes the non-root user to root user.
+
+* We encourage you to use our Python package extensibility solutions. If other dependencies are required (such as `apt` packages), create your own [Dockerfile extending from the inference image](how-to-extend-prebuilt-docker-image-inference.md#buildmodel).
+
+## Frequently asked questions
+
+* In the `requirements.txt` extensibility approach, is it mandatory for the file name to be `requirements.txt`?
+
+    No. You can point the `AZUREML_EXTRA_REQUIREMENTS_TXT` environment variable at any pip requirements file name:
+ ```python
+ myenv.environment_variables = {
+ "AZUREML_EXTRA_REQUIREMENTS_TXT": "name of your pip requirements file goes here"
+ }
+ ```
+
+* Can you summarize the `requirements.txt` approach versus the *mounting approach*?
+
+ Start prototyping with the *requirements.txt* approach.
+ After some iteration, when you're confident about which packages (and versions) you need for a successful model deployment, switch to the *Mounting Solution*.
+
+ Here's a detailed comparison.
+
+ | Compared item | Requirements.txt (dynamic installation) | Package Mount |
+ | -- | -- | -- |
+ | Solution | Create a `requirements.txt` that installs the specified packages when the container starts. | Create a local Python environment with all of the dependencies. Mount this directory into the container at runtime. |
+ | Package Installation | No extra installation (assuming pip is already installed) | Virtual environment or conda environment installation. |
+ | Virtual environment Setup | No extra setup of a virtual environment is required, because users can capture the current local user environment with pip freeze as needed to create the `requirements.txt`. | A clean virtual environment needs to be set up, which may take extra steps depending on the current local user environment. |
+ | [Debugging](../how-to-inference-server-http.md) | Easy to set up and debug the server, since dependencies are clearly listed. | An unclean virtual environment could cause problems when debugging the server. For example, it may not be clear if errors come from the environment or user code. |
+ | Consistency during scaling out | Not consistent, because it depends on external PyPI packages and on users pinning their dependencies. These external downloads could be flaky. | Relies solely on the user environment, so there are no consistency issues. |
+
+* Why are my `requirements.txt` and mounted dependencies directory not found in the container?
+
+ Locally, verify the environment variables are set properly. Next, verify the paths that are specified are spelled properly and exist.
+ Check if you have set your source directory correctly in the [inference config](/python/api/azureml-core/azureml.core.model.inferenceconfig#constructor) constructor.
+
+* Can I override Python package dependencies in prebuilt inference docker image?
+
+ Yes. If you want to use a different version of a Python package that is already installed in an inference image, our extensibility solution will respect your version. Make sure there are no conflicts between the two versions.
+
+## Best Practices
+
+* Refer to the [Load registered model](how-to-deploy-advanced-entry-script.md#load-registered-models) docs. When you register a model directory, don't include your scoring script, your mounted dependencies directory, or `requirements.txt` within that directory.
++
+* For more information on how to load a registered or local model, see [Where and how to deploy](how-to-deploy-and-where.md?tabs=azcli#define-a-dummy-entry-script).
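To illustrate the first best practice above, here's a minimal sketch that registers a directory containing only model files; the folder and model names are hypothetical:

```python
from azureml.core import Workspace, Model

ws = Workspace.from_config()

# The 'model/' folder holds only model artifacts; score.py, requirements.txt, and the
# mounted dependencies directory live outside of it.
registered_model = Model.register(
    workspace=ws,
    model_path="model/",
    model_name="my-sklearn-model",
)
```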
+
+## Bug Fixes
+
+### 2021-07-26
+
+* `AZUREML_EXTRA_REQUIREMENTS_TXT` and `AZUREML_EXTRA_PYTHON_LIB_PATH` are now always relative to the directory of the score script.
+For example, if both the requirements.txt and the score script are in **my_folder**, then `AZUREML_EXTRA_REQUIREMENTS_TXT` needs to be set to requirements.txt rather than to **my_folder/requirements.txt**.
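+
+For example, with an illustrative layout like the one below (folder and file names are assumptions), both variables now point at names relative to the score script's directory:
+
+```python
+# my_folder/
+#   score.py
+#   requirements.txt
+#   myenv/lib/python3.8/site-packages/
+myenv.environment_variables = {
+    "AZUREML_EXTRA_REQUIREMENTS_TXT": "requirements.txt",                  # not "my_folder/requirements.txt"
+    "AZUREML_EXTRA_PYTHON_LIB_PATH": "myenv/lib/python3.8/site-packages",  # not "my_folder/myenv/..."
+}
+```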
+
+## Next steps
+
+To learn more about deploying a model, see [How to deploy a model](how-to-deploy-and-where.md).
+
+To learn how to troubleshoot prebuilt docker image deployments, see [how to troubleshoot prebuilt Docker image deployments](how-to-troubleshoot-prebuilt-docker-image-inference.md).
machine-learning How To Troubleshoot Prebuilt Docker Image Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-prebuilt-docker-image-inference.md
+
+ Title: Troubleshoot prebuilt docker images
+
+description: 'Troubleshooting steps for using prebuilt Docker images for inference.'
+++++ Last updated : 08/15/2022++++
+# Troubleshooting prebuilt docker images for inference
+
+Learn how to troubleshoot problems you may see when using prebuilt docker images for inference with Azure Machine Learning.
+
+> [!IMPORTANT]
+> Using [Python package extensibility for prebuilt Docker images](how-to-prebuilt-docker-images-inference-python-extensibility.md) with Azure Machine Learning is currently in preview. Preview functionality is provided "as-is", with no guarantee of support or service level agreement. For more information, see the [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Model deployment failed
+
+If model deployment fails, you won't see logs in [Azure Machine Learning studio](https://ml.azure.com/) and `service.get_logs()` returns None.
+If there's a problem in the `init()` function of `score.py`, `service.get_logs()` returns logs for that failure.
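+
+For reference, a minimal way to pull those logs from the SDK (a sketch; the workspace configuration and service name are assumptions):
+
+```python
+from azureml.core import Workspace
+from azureml.core.webservice import Webservice
+
+ws = Workspace.from_config()
+service = Webservice(ws, name="my-service")  # hypothetical deployed service name
+print(service.get_logs())  # None if the deployment never produced logs
+```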
+
+In that case, run the container locally using one of the commands shown below, replacing `<mcr-path>` with an image path. For a list of the images and paths, see [Prebuilt Docker images for inference](../concept-prebuilt-docker-images-inference.md).
+
+### Mounting extensibility solution
+
+Go to the directory containing `score.py` and run:
+
+```bash
+docker run -it -v $(pwd):/var/azureml-app -e AZUREML_EXTRA_PYTHON_LIB_PATH="myenv/lib/python3.7/site-packages" <mcr-path>
+```
+
+### requirements.txt extensibility solution
+
+Go to the directory containing `score.py` and run:
+
+```bash
+docker run -it -v $(pwd):/var/azureml-app -e AZUREML_EXTRA_REQUIREMENTS_TXT="requirements.txt" <mcr-path>
+```
+
+## Enable local debugging
+
+The local inference server lets you quickly debug your entry script (`score.py`). If the underlying score script has a bug, the server fails to initialize or serve the model and instead throws an exception that points to the location where the issue occurred. [Learn more about the Azure Machine Learning inference HTTP server](../how-to-inference-server-http.md).
+
+## Common model deployment issues
+
+For problems when deploying a model from Azure Machine Learning to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS), see [Troubleshoot model deployment](how-to-troubleshoot-deployment.md).
+
+## init() or run() failing to write a file
+
+The HTTP server in our prebuilt Docker images runs as a *non-root user*, so it may not have access rights to all directories.
+Write only to directories you have access rights to, for example the `/tmp` directory in the container.
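+
+As an illustration (a sketch, not taken from this article), a `score.py` that only writes under `/tmp`:
+
+```python
+import os
+import tempfile
+
+def init():
+    global scratch_dir
+    # /tmp is writable for the non-root user; many other paths aren't.
+    scratch_dir = tempfile.mkdtemp(dir="/tmp")
+
+def run(raw_data):
+    out_path = os.path.join(scratch_dir, "last_request.json")
+    with open(out_path, "w") as f:
+        f.write(raw_data)
+    return {"saved_to": out_path}
+```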
+
+## Extra Python packages not installed
+
+* Check if there's a typo in the environment variable or file name.
+* Check the container log to see whether `pip install -r <your_requirements.txt>` ran successfully.
+* Check if source directory is set correctly in the [inference config](/python/api/azureml-core/azureml.core.model.inferenceconfig#constructor) constructor.
+* If the installation isn't found and the log says "file not found", check that the file name shown in the log is correct.
+* If the installation started but failed or timed out, try installing the same `requirements.txt` locally with the same Python and pip versions in a clean environment (that is, no cache directory: `pip install --no-cache-dir -r requirements.txt`). See if the problem can be reproduced locally.
+
+## Mounting solution failed
+
+* Check if there's a typo in the environment variable or directory name.
+* The environment variable must be set to a path that's relative to the directory containing the `score.py` file.
+* Check if source directory is set correctly in the [inference config](/python/api/azureml-core/azureml.core.model.inferenceconfig#constructor) constructor.
+* The directory needs to be the "site-packages" directory of the environment.
+* If `score.py` still raises `ModuleNotFoundError` and the module is supposed to be in the mounted directory, print `sys.path` in `init()` or `run()` to see whether any path is missing (see the sketch below).
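+
+For example, a temporary check you could drop into `score.py` while debugging:
+
+```python
+import sys
+
+def init():
+    # Temporarily log sys.path to verify the mounted site-packages directory was picked up.
+    print("\n".join(sys.path))
+```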
+
+## Building an image based on the prebuilt Docker image failed
+
+* If the build fails during apt package installation, check whether the user was switched to root before running the apt command. (Make sure to switch back to the non-root user afterwards.)
+
+## Run doesn't complete on GPU local deployment
+
+GPU base images can't be used for local deployment, unless the local deployment is on an Azure Machine Learning compute instance. GPU base images are supported only on Microsoft Azure Services such as Azure Machine Learning compute clusters and instances, Azure Container Instance (ACI), Azure VMs, or Azure Kubernetes Service (AKS).
+
+## Image built based on the prebuilt Docker image can't boot up
+
+* The non-root user needs to be `dockeruser`. Otherwise, the owner of the following directories must be set to the user name you want to use when running the image:
+
+ ```bash
+ /var/runit
+ /var/log
+ /var/lib/nginx
+ /run
+ /opt/miniconda
+ /var/azureml-app
+ ```
+
+* If the `ENTRYPOINT` has been changed in the newly built image, then the HTTP server and related components need to be loaded by `runsvdir /var/runit`.
+
+## Next steps
+
+* [Add Python packages to prebuilt images](how-to-prebuilt-docker-images-inference-python-extensibility.md).
+* [Use a prebuilt package as a base for a new Dockerfile](how-to-extend-prebuilt-docker-image-inference.md).
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-managed-identities.md
az ml computetarget create amlcompute --name <cluster name> -w <workspace> -g <r
# [Portal](#tab/azure-portal)
-For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](../how-to-create-attach-compute-studio.md#managed-identity).
+For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](../how-to-create-attach-compute-cluster.md#set-up-managed-identity).
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
marketplace Usage Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/usage-dashboard.md
Previously updated : 05/11/2022 Last updated : 08/11/2022 # Usage dashboard in commercial marketplace analytics
_**Table 1: Dictionary of data terms**_
| Action Taken By | Action Taken By | **Applicable for offers with custom meter dimensions**.<br>Specifies the person who acknowledged the overage usage by the customer for the offerΓÇÖs custom meter dimension as genuine or false.<br>_If the publisher doesnΓÇÖt have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | ActionTakenBy | | Estimated Financial Impact (USD) | Estimated Financial Impact in USD | **Applicable for offers with custom meter dimensions**.<br>When Partner Center flags an overage usage by the customer for the offerΓÇÖs custom meter dimension as anomalous, the field specifies the estimated financial impact (in USD) of the anomalous overage usage.<br>_If the publisher doesnΓÇÖt have offers with custom meter dimensions, and exports this column through programmatic means, then the value will be null._ | EstimatedFinancialImpactUSD | | Asset Id | Asset Id | **Applicable for offers with custom meter dimensions**.<br>The unique identifier of the customer's order subscription for your commercial marketplace service. Virtual machine usage-based offers are not associated with an order. | Asset Id |
-| N/A | Resource Id | The fully qualified ID of the resource, including the resource name and resource type. Note that this is a data field available in download reports only.<br>Use the format:<br> /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name}<br>**Note**: This field will be deprecated on 10/20/2021. | N/A |
| PlanId | PlanID | The display name of the plan entered when the offer was created in Partner Center. Note that PlanId was originally a number. | PlanID |
-|||||
-
+| Not available | Reference Id | A key to link transactions of usage-based offers with corresponding transactions in the orders report. For SaaS offers with custom meters, this key represents the AssetId. For VM software reservations, this key can be used for linking orders and usage reports. | ReferenceId |
## Next steps
migrate Troubleshoot Changed Block Tracking Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-changed-block-tracking-replication.md
The possible causes include:
1. Select Snapshot Manager in namespace. Right-click on Snapshot Manager, select **Receive Messages** > **peek**, and select OK.
- If the connection is successful, you will see "[x] messages received" on the console output. If the connection is not successful, you'll see a message stating that the connection failed.
+ If the connection is successful, you'll see "[x] messages received" on the console output. If the connection isn't successful, you'll see a message stating that the connection failed.
**Resolution:** If this test fails, there's a connectivity issue between the Azure Migrate appliance and Service Bus. Engage your local networking team to check these connectivity issues. Typically, there can be some firewall settings that are causing the failures.
Alternatively, you can reset VMware changed block tracking on a virtual machine
## An internal error occurred
-Sometimes you may hit an error that occurs due to issues in the VMware environment/API. We have identified the following set of errors as VMware environment-related errors. These errors have a fixed format.
+Sometimes you may hit an error that occurs due to issues in the VMware environment/API. We've identified the following set of errors as VMware environment-related errors. These errors have a fixed format.
_Error Message: An internal error occurred. [Error message]_
This error occurs when the size of the snapshot file created is larger than the
- Restore the included disks to the original path using storage vMotion and try replication again.
+## Host Connection Refused
+
+**Error ID:** 1022
+
+**Error Message:** The Azure Migrate appliance is unable to connect to the vSphere host '%HostName;'
+
+**Possible Causes:**
+
+This may happen if:
+1. The Azure Migrate appliance is unable to resolve the hostname of the vSphere host.
+2. The Azure Migrate appliance is unable to connect to the vSphere host on port 902 (default port used by VMware vSphere Virtual Disk Development Kit), because TCP port 902 is being blocked on the vSphere host or by a network firewall.
+
+**Recommendations:**
+
+**Ensure that the hostname of the vSphere host is resolvable from the Azure Migrate appliance.**
+- Sign in to the Azure Migrate appliance and open PowerShell.
+- Perform an `nslookup` on the hostname and verify if the address is being resolved: `nslookup '%HostName;' `.
+- If the host name isn't getting resolved, ensure that DNS resolution of the vSphere hostnames can be performed from the Azure Migrate appliance. Alternatively, add a static host entry for each vSphere host to the hosts file (C:\Windows\System32\drivers\etc\hosts) on the appliance.
+
+**Ensure the vSphere host is accepting connections on port 902 and that the endpoint is reachable from the appliance.**
+- Sign in to the Azure Migrate appliance and open PowerShell.
+- Use the `Test-NetConnection` cmdlet to validate connectivity: `Test-NetConnection '%HostName;' -Port 902`.
+- If the TCP test doesn't succeed, the connection is being blocked by a firewall or isn't being accepted by the vSphere host. Resolve the network issues to allow replication to proceed.
++ ## Next Steps Continue VM replication, and perform [test migration](./tutorial-migrate-vmware.md#run-a-test-migration).
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
Previously updated : 08/12/2022 Last updated : 08/17/2022 # Azure Policy Regulatory Compliance controls for Azure Database for MySQL
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md
Previously updated : 08/12/2022 Last updated : 08/17/2022 # Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
purview Create Azure Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-azure-purview-dotnet.md
Next, create a C# .NET console application in Visual Studio:
Install-Package Microsoft.Azure.Management.ResourceManager -IncludePrerelease Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory ```
+>[!TIP]
+> If you're getting an error that reads **Package \<package name> is not found in the following primary source(s):** and it lists a local folder, you need to update your package sources in Visual Studio to include the NuGet site as an online source.
+> 1. Go to **Tools**
+> 1. Select **NuGet Package Manager**
+> 1. Select **Package Manager Settings**
+> 1. Select **Package Sources**
+> 1. Add https://nuget.org/api/v2/ as a source.
## Create a Microsoft Purview client
purview How To Deploy Profisee Purview Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-deploy-profisee-purview-integration.md
Recommended: Keep it to "Yes, use default Azure DNS". Choosing Yes, the deployer
- As a final validation step to ensure successful installation, and to check whether Profisee has been successfully connected to your Microsoft Purview instance, go to **/Profisee/api/governance/health**. It should look something like "https://[profisee_name].[region].cloudapp.azure.com//Profisee/api/governance/health". The response will indicate **"Status": "Healthy"** on all the Purview subsystems.
-```{
+```json
+{
"OverallStatus": "Healthy", "TotalCheckDuration": "0:XXXXXXX", "DependencyHealthChecks": {
role-based-access-control Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
search Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Search description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
service-bus-messaging Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
service-connector How To Integrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-app-configuration.md
Previously updated : 06/13/2022 Last updated : 08/11/2022 # Integrate Azure App Configuration with Service Connector
This page shows the supported authentication types and client types of Azure App
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|-|::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal | |-|::|::|::|::| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|-|::|::|::|::|
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
++ ## Default environment variable names or application properties
service-connector How To Integrate Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-confluent-kafka.md
Title: Integrate Apache kafka on Confluent Cloud with Service Connector description: Integrate Apache kafka on Confluent Cloud into your application with Service Connector--++ - Previously updated : 06/13/2022 Last updated : 08/11/2022+ # Integrate Apache Kafka on Confluent Cloud with Service Connector
This page shows the supported authentication types and client types of Apache ka
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported Authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|-|--|--|-| | .NET | | | ![yes icon](./media/green-check.png) | |
+| Go | | | ![yes icon](./media/green-check.png) | |
| Java | | | ![yes icon](./media/green-check.png) | | | Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | | Node.js | | | ![yes icon](./media/green-check.png) | | | Python | | | ![yes icon](./media/green-check.png) | |
+| Ruby | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Go | | | ![yes icon](./media/green-check.png) | |
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
+| Ruby | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
++ ## Default environment variable names or application properties
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
Title: Integrate Azure Cosmos DB with Service Connector
-description: Integrate Azure Cosmos DB into your application with Service Connector
--
+ Title: Integrate the Azure Cosmos DB Mongo API with Service Connector
+description: Integrate the Azure Cosmos DB Mongo API into your application with Service Connector
++ + Last updated : 08/11/2022 - Previously updated : 06/13/2022
-# Integrate Azure Cosmos DB with Service Connector
+# Integrate the Azure Cosmos DB Mongo API with Service Connector
-This page shows the supported authentication types and client types of Azure Cosmos DB using Service Connector. You might still be able to connect to Azure Cosmos DB in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types for the Azure Cosmos DB Mongo API using Service Connector. You might still be able to connect to Azure Cosmos DB in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
-## Supported compute service
+## Supported compute services
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
This page shows the supported authentication types and client types of Azure Cos
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+++ ## Default environment variable names or application properties Use the connection details below to connect compute services to Cosmos DB. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<mongo-db-server>`, `<subscription-ID>`, `<resource-group-name>`, `<database-server>`, `<client-secret>`, and `<tenant-id>` with your Mongo DB Admin username, password, Mongo DB server, subscription ID, resource group name, database server, client secret and tenant ID.
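+
+As an illustration only (not the article's own example), application code typically just reads the resulting setting from an environment variable; the variable name below, `AZURE_COSMOS_CONNECTIONSTRING`, and the database name are assumptions, so use the exact name listed for your client type:
+
+```python
+import os
+from pymongo import MongoClient
+
+# Assumed variable name; check the article's table for the name Service Connector created.
+conn_str = os.environ["AZURE_COSMOS_CONNECTIONSTRING"]
+client = MongoClient(conn_str)
+db = client["my-database"]  # hypothetical database name
+print(db.list_collection_names())
+```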
service-connector How To Integrate Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-event-hubs.md
description: Integrate Azure Event Hubs into your application with Service Conne
- Previously updated : 06/13/2022 Last updated : 08/11/2022+ # Integrate Azure Event Hubs with Service Connector
This page shows the supported authentication types and client types of Azure Eve
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-| | :-: | :--:| :--:| :--:|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Kafka - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Kafka - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||::|::|::|::|
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Kafka - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
++ ## Default environment variable names or application properties
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-key-vault.md
Title: Integrate Azure Key Vault with Service Connector description: Integrate Azure Key Vault into your application with Service Connector--++ - Previously updated : 06/13/2022 Last updated : 08/11/2022+ # Integrate Azure Key Vault with Service Connector
This page shows the supported authentication types and client types of Azure Key
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|-|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
This page shows the supported authentication types and client types of Azure Key
| Java - Spring Boot | | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|-|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|-|--|
+| Java | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | | ![yes icon](./media/green-check.png) |
++ ## Default environment variable names or application properties
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md
Title: Integrate Azure Database for MySQL with Service Connector description: Integrate Azure Database for MySQL into your application with Service Connector--++ - Previously updated : 06/13/2022 Last updated : 08/11/2022+ # Integrate Azure Database for MySQL with Service Connector
This page shows the supported authentication types and client types of Azure Dat
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | ||-|--|--|-| | .NET (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
+| Go (go-sql-driver for mysql) | | | ![yes icon](./media/green-check.png) | |
| Java (JDBC) | | | ![yes icon](./media/green-check.png) | | | Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | | | Node.js (mysql) | | | ![yes icon](./media/green-check.png) | | | Python (mysql-connector-python) | | | ![yes icon](./media/green-check.png) | | | Python-Django | | | ![yes icon](./media/green-check.png) | |
+| PHP (mysqli) | | | ![yes icon](./media/green-check.png) | |
+| Ruby (mysql2) | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||-|--|--|-|
+| .NET (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
| Go (go-sql-driver for mysql) | | | ![yes icon](./media/green-check.png) | |
+| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Node.js (mysql) | | | ![yes icon](./media/green-check.png) | |
+| Python (mysql-connector-python) | | | ![yes icon](./media/green-check.png) | |
+| Python-Django | | | ![yes icon](./media/green-check.png) | |
| PHP (mysqli) | | | ![yes icon](./media/green-check.png) | | | Ruby (mysql2) | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||-|--|--|-|
+| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
++ ## Default environment variable names or application properties
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md
Title: Integrate Azure Database for PostgreSQL with Service Connector description: Integrate Azure Database for PostgreSQL into your application with Service Connector--++ - Previously updated : 06/13/2022 Last updated : 08/11/2022+ # Integrate Azure Database for PostgreSQL with Service Connector
This page shows the supported authentication types and client types of Azure Dat
- Azure App Service - Azure App Configuration-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | ||-|--|--|-| | .NET (ADO.NET) | | | ![yes icon](./media/green-check.png) | |
+| Go (pg) | | | ![yes icon](./media/green-check.png) | |
| Java (JDBC) | | | ![yes icon](./media/green-check.png) | | | Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | | | Node.js (pg) | | | ![yes icon](./media/green-check.png) | | | Python (psycopg2) | | | ![yes icon](./media/green-check.png) | | | Python-Django | | | ![yes icon](./media/green-check.png) | |
+| PHP (native) | | | ![yes icon](./media/green-check.png) | |
+| Ruby (ruby-pg) | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||-|--|--|-|
+| .NET (ADO.NET) | | | ![yes icon](./media/green-check.png) | |
| Go (pg) | | | ![yes icon](./media/green-check.png) | |
+| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Node.js (pg) | | | ![yes icon](./media/green-check.png) | |
+| Python (psycopg2) | | | ![yes icon](./media/green-check.png) | |
+| Python-Django | | | ![yes icon](./media/green-check.png) | |
| PHP (native) | | | ![yes icon](./media/green-check.png) | | | Ruby (ruby-pg) | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||-|--|--|-|
+| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
++ ## Default environment variable names or application properties
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-redis-cache.md
Title: Integrate Azure Cache for Redis and Azure Cache Redis Enterprise with Service Connector description: Integrate Azure Cache for Redis and Azure Cache Redis Enterprise into your application with Service Connector--++ - Previously updated : 06/13/2022 Last updated : 08/11/2022+ # Integrate Azure Cache for Redis with Service Connector
This page shows the supported authentication types and client types of Azure Cac
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported Authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|-|--|--|-| | .NET (StackExchange.Redis) | | | ![yes icon](./media/green-check.png) | |
+| Go (go-redis) | | | ![yes icon](./media/green-check.png) | |
| Java (Jedis) | | | ![yes icon](./media/green-check.png) | | | Java - Spring Boot (spring-boot-starter-data-redis) | | | ![yes icon](./media/green-check.png) | | | Node.js (node-redis) | | | ![yes icon](./media/green-check.png) | | | Python (redis-py) | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| .NET (StackExchange.Redis) | | | ![yes icon](./media/green-check.png) | |
| Go (go-redis) | | | ![yes icon](./media/green-check.png) | |
+| Java (Jedis) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (spring-boot-starter-data-redis) | | | ![yes icon](./media/green-check.png) | |
+| Node.js (node-redis) | | | ![yes icon](./media/green-check.png) | |
+| Python (redis-py) | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| Java (Jedis) | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot (spring-boot-starter-data-redis) | | | ![yes icon](./media/green-check.png) | |
++ ## Default environment variable names or application properties
service-connector How To Integrate Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md
Previously updated : 06/13/2022 Last updated : 08/11/2022 # Integrate Service Bus with Service Connector
This page shows the supported authentication types and client types of Azure Ser
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|--|::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal | |--|::|::|::|::| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
This page shows the supported authentication types and client types of Azure Ser
| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|--|::|::|::|::|
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
++ ## Default environment variable names or application properties
service-connector How To Integrate Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-signalr.md
Title: Integrate Azure SignalR Service with Service Connector description: Integrate Azure SignalR Service into your application with Service Connector. Learn about authentication types and client types of Azure SignalR Service.--++ Previously updated : 6/13/2022 Last updated : 08/11/2022 - ignite-fall-2021 - kr2b-contr-experiment
This article shows the supported authentication types and client types of Azure
## Supported authentication types and client types
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+Supported authentication and clients for App Service and Container Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|-|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
|-|--|--|--|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
++ ## Default environment variable names or application properties
service-connector How To Integrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md
Previously updated : 06/13/2022 Last updated : 08/11/2022 # Integrate Azure SQL Database with Service Connector
This page shows all the supported compute services, clients, and authentication
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and clients
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal | |--|:--:|::|::|:--:| | .NET | | | ![yes icon](./media/green-check.png) | |
This page shows all the supported compute services, clients, and authentication
| Python | | | ![yes icon](./media/green-check.png) | | | Python - Django | | | ![yes icon](./media/green-check.png) | | | Ruby | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|--|:--:|::|::|:--:|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Go | | | ![yes icon](./media/green-check.png) | |
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| PHP | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
+| Python - Django | | | ![yes icon](./media/green-check.png) | |
+| Ruby | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|--|:--:|::|::|:--:|
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
++ ## Default environment variable names or application properties
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
This page shows the supported authentication types and client types of Azure Blo
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
This page shows the supported authentication types and client types of Azure Blo
| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
++ ## Default environment variable names or application properties
service-connector How To Integrate Storage File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-file.md
Title: Integrate Azure Files with Service Connector description: Integrate Azure Files into your application with Service Connector--++ - Previously updated : 06/13/2022 Last updated : 08/11/2022+ # Integrate Azure Files with Service Connector
This page shows the supported authentication types and client types of Azure Fil
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
+Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+ | Client Type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|-|--|--|-| | .NET | | | ![yes icon](./media/green-check.png) | |
This page shows the supported authentication types and client types of Azure Fil
| Python | | | ![yes icon](./media/green-check.png) | | | PHP | | | ![yes icon](./media/green-check.png) | | | Ruby | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Container Apps](#tab/container-apps)
+
+| Client Type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
+| PHP | | | ![yes icon](./media/green-check.png) | |
+| Ruby | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client Type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
++ ## Default environment variable names or application properties
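
As a hedged example, the following Python sketch (using the `azure-storage-file-share` package) reads a Service Connector-provided connection string and lists the file shares in the connected account. The variable name `AZURE_STORAGEFILE_CONNECTIONSTRING` is an assumption, so confirm it against the default environment variable names documented in this article for your client type.

```python
import os

from azure.storage.fileshare import ShareServiceClient

# Assumed variable name; confirm the default name for your client type.
connection_string = os.environ["AZURE_STORAGEFILE_CONNECTIONSTRING"]

service = ShareServiceClient.from_connection_string(connection_string)

# List file shares to confirm the connection works.
for share in service.list_shares():
    print(share.name)
```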
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md
Title: Integrate Azure Queue Storage with Service Connector description: Integrate Azure Queue Storage into your application with Service Connector--++ - Previously updated : 06/13/2022 Last updated : 08/11/2022+ # Integrate Azure Queue Storage with Service Connector
This page shows the supported authentication types and client types of Azure Que
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
## Supported authentication types and client types
+Supported authentication and clients for App Service, Container Apps, and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+ | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
This page shows the supported authentication types and client types of Azure Que
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+### [Azure Container Apps](#tab/container-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | |
+++ ## Default environment variable names or application properties Use the connection details below to connect compute services to Queue Storage. For each example below, replace the placeholder texts
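
For example, here's a minimal Python sketch (using the `azure-storage-queue` package) that reads the injected connection string and sends a message. The variable name `AZURE_STORAGEQUEUE_CONNECTIONSTRING` and the queue name `sample-queue` are assumptions for illustration only; use the default variable names documented in this article and your own queue name.

```python
import os

from azure.storage.queue import QueueClient

# Assumed variable name for a secret-based connection; confirm the default name
# for your client type. The queue "sample-queue" is assumed to already exist.
connection_string = os.environ["AZURE_STORAGEQUEUE_CONNECTIONSTRING"]

queue = QueueClient.from_connection_string(connection_string, queue_name="sample-queue")
queue.send_message("Hello from Service Connector")
```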
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md
Title: Integrate Azure Table Storage with Service Connector description: Integrate Azure Table Storage into your application with Service Connector--++ - Previously updated : 06/13/2022 Last updated : 08/11/2022+ # Integrate Azure Table Storage with Service Connector
This page shows the supported authentication types and client types of Azure Tab
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
+
+Supported authentication and clients for App Service, Container Apps, and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|-|-|--|--|-|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
-## Supported authentication types and client types
+### [Azure Container Apps](#tab/container-apps)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |-|-|--|--|-|
This page shows the supported authentication types and client types of Azure Tab
| Node.js | | | ![yes icon](./media/green-check.png) | | | Python | | | ![yes icon](./media/green-check.png) | |
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|-|-|--|--|-|
+| Java | | | ![yes icon](./media/green-check.png) | |
+++ ## Default environment variable names or application properties Use the connection details below to connect compute services to Azure Table Storage. For each example below, replace the placeholder texts `<account-name>` and `<account-key>` with your own account name and account key.
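
As an illustration, here's a minimal Python sketch (using the `azure-data-tables` package) that consumes a connection string built from `<account-name>` and `<account-key>`. The environment variable name `AZURE_STORAGETABLE_CONNECTIONSTRING` is an assumption, so check the default names documented in this article for your client type.

```python
import os

from azure.data.tables import TableServiceClient

# Assumed variable name; Service Connector builds the connection string from
# your <account-name> and <account-key>.
connection_string = os.environ["AZURE_STORAGETABLE_CONNECTIONSTRING"]

service = TableServiceClient.from_connection_string(connection_string)

# List the tables in the account to verify the connection.
for table in service.list_tables():
    print(table.name)
```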
service-connector How To Integrate Web Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-web-pubsub.md
Previously updated : 06/16/2022 Last updated : 08/11/2022 # Integrate Azure Web PubSub with service connector
This page shows all the supported compute services, clients, and authentication
- Azure App Service - Azure Container Apps-- Azure Spring Cloud
+- Azure Spring Apps
-## Supported authentication types and clients
+Supported authentication and clients for App Service, Container Apps, and Azure Spring Apps:
+
+### [Azure App Service](#tab/app-service)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|-|::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+
+### [Azure Container Apps](#tab/container-apps)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal | |-|::|::|::|::|
This page shows all the supported compute services, clients, and authentication
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+### [Azure Spring Apps](#tab/spring-apps)
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|-|::|::|::|::|
+| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+++ ## Default environment variable names or application properties Use the environment variable names and application properties listed below to connect an Azure service to Web PubSub using .NET, Java, Node.js, or Python. For each example below, replace the placeholder texts `<name>`, `<client-id>`, `<client-secret`, `<access-key>`, and `<tenant-id>` with your own resource name, client ID, client secret, access-key, and tenant ID.
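
As an illustration, the Python sketch below (using the `azure-messaging-webpubsubservice` package) reads an injected connection string and broadcasts a message to a hub. The variable name `AZURE_WEBPUBSUB_CONNECTIONSTRING` and the hub name `sample_hub` are assumptions; substitute the values documented in this article for your own connection.

```python
import os

from azure.messaging.webpubsubservice import WebPubSubServiceClient

# Assumed variable and hub names; replace them with your own values.
connection_string = os.environ["AZURE_WEBPUBSUB_CONNECTIONSTRING"]

client = WebPubSubServiceClient.from_connection_string(connection_string, hub="sample_hub")
client.send_to_all({"message": "Hello from Service Connector"})
```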
service-connector Quickstart Cli Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-container-apps.md
Previously updated : 05/24/2022 Last updated : 08/09/2022 ms.devlang: azurecli # Quickstart: Create a service connection in Container Apps with the Azure CLI
-This quickstart shows you how to create a service connection in Container Apps with the Azure CLI. The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
+This quickstart shows you how to connect Azure Container Apps to other cloud resources using the Azure CLI and Service Connector. Service Connector lets you quickly connect compute services to cloud services, while managing your connection's authentication and networking settings.
+> [!IMPORTANT]
+> Service Connector in Container Apps is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+## Prerequisites
-- Version 2.37.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- At least one application deployed to Container Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [create and deploy a container to Container Apps](../container-apps/quickstart-portal.md).
-- An application deployed to Container Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create and deploy a container to Container Apps](../container-apps/quickstart-portal.md).
-> [!IMPORTANT]
-> Service Connector in Container Apps is currently in preview.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+- Version 2.37.0 or higher of the Azure CLI must be installed. To upgrade to the latest version, run `az upgrade`. If using Azure Cloud Shell, the latest version is already installed.
-## View supported target services
+- The Container Apps extension must be installed in the Azure CLI or the Cloud Shell. To install it, run `az extension add --name containerapp`.
-Use the following Azure CLI command to create and manage service connections from Container Apps.
+## Prepare to create a connection
-```azurecli-interactive
-az provider register -n Microsoft.ServiceLinker
-az containerapp connection list-support-types --output table
-```
+1. Run the command [az provider register](/cli/azure/provider#az-provider-register) to start using Service Connector.
+
+ ```azurecli-interactive
+ az provider register -n Microsoft.ServiceLinker
+ ```
+
+1. Run the command `az containerapp connection list-support-types` to get a list of supported target services for Container Apps.
+
+ ```azurecli-interactive
+ az containerapp connection list-support-types --output table
+ ```
## Create a service connection
-### [Using an access key](#tab/using-access-key)
+You can create a connection using an access key or a managed identity.
+
+### [Access key](#tab/using-access-key)
-1. Use the following Azure CLI command to create a service connection from Container Apps to a Blob Storage with an access key.
+1. Run the `az containerapp connection create` command to create a service connection between Container Apps and Azure Blob Storage with an access key.
```azurecli-interactive az containerapp connection create storage-blob --secret
az containerapp connection list-support-types --output table
1. Provide the following information at the Azure CLI's request:
- - **The resource group which contains the container app**: the name of the resource group with the container app.
- - **Name of the container app**: the name of your container app.
- - **The container where the connection information will be saved:** the name of the container, in your container app, that connects to the target service
- - **The resource group which contains the storage account:** the name of the resource group name with the storage account. In this guide, we're using a Blob Storage.
- - **Name of the storage account:** the name of the storage account that contains your blob.
+ | Setting | Description |
+ |-|-|
+ | `The resource group that contains the container app` | The name of the resource group with the container app. |
+ | `Name of the container app` | The name of the container app. |
+ | `The container where the connection information will be saved` | The name of the container app's container. |
+ | `The resource group which contains the storage account` | The name of the resource group with the storage account. |
+ | `Name of the storage account` | The name of the storage account you want to connect to. In this guide, we're using a Blob Storage. |
-> [!NOTE]
-> If you don't have a Blob Storage, you can run `az containerapp connection create storage-blob --new --secret` to provision a new Blob Storage and directly get connected to your app service.
+> [!TIP]
+> If you don't have a Blob Storage, you can run `az containerapp connection create storage-blob --new --secret` to provision a new Blob Storage and directly connect it to your container app using a connection string.
-### [Using a managed identity](#tab/using-managed-identity)
+### [Managed identity](#tab/using-managed-identity)
> [!IMPORTANT]
-> Using a managed identity requires you have the permission to [Azure AD role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Without this permission, your connection creation will fail. Ask your subscription owner to grant you this permission, or use an access key instead to create the connection.
+> To use a managed identity, you must have permission to modify [Azure AD role assignments](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Without this permission, your connection creation will fail. Ask your subscription owner to grant you this permission, or use an access key instead to create the connection.
-1. Use the following Azure CLI command to create a service connection from Container Apps to a Blob Storage with a system-assigned managed identity.
+1. Run the `az containerapp connection create` command to create a service connection from Container Apps to a Blob Storage with a system-assigned managed identity.
```azurecli-interactive az containerapp connection create storage-blob --system-identity
az containerapp connection list-support-types --output table
1. Provide the following information at the Azure CLI's request:
- - **The resource group which contains the container app**: the name of the resource group with the container app.
- - **Name of the container app**: the name of your container app.
- - **The container where the connection information will be saved:** the name of the container, in your container app, that connects to the target service
- - **The resource group which contains the storage account:** the name of the resource group name with the storage account. In this guide, we're using a Blob Storage.
- - **Name of the storage account:** the name of the storage account that contains your blob.
+ | Setting | Description |
+ |-|-|
+ | `The resource group that contains the container app` | The name of the resource group with the container app. |
+ | `Name of the container app` | The name of the container app. |
+ | `The container where the connection information will be saved` | The name of the container app's container. |
+ | `The resource group which contains the storage account` | The name of the resource group with the storage account. |
+ | `Name of the storage account` | The name of the storage account you want to connect to. In this guide, we're using a Blob Storage. |
> [!NOTE]
-> If you don't have a Blob Storage, you can run `az containerapp connection create storage-blob --new --system-identity` to provision a new Blob Storage and directly get connected to your app service.
+> If you don't have a Blob Storage, you can run `az containerapp connection create storage-blob --new --system-identity` to provision a new Blob Storage and directly connect it to your container app using a managed identity.
## View connections
- Use the Azure CLI command `az containerapp connection list` to list all your container app's provisioned connections. Provide the following information:
--- **Source compute service resource group name:** the resource group name of the container app.-- **Container app name:** the name of your container app.
+ Use the Azure CLI command `az containerapp connection list` to list all your container app's provisioned connections. Replace the placeholders `<container-app-resource-group>` and `<container-app-name>` in the command below with the resource group and name of your container app. You can also remove the `--output table` option to view more information about your connections.
```azurecli-interactive
-az containerapp connection list -g "<your-container-app-resource-group>" --name "<your-container-app-name>" --output table
+az containerapp connection list -g "<container-app-resource-group>" --name "<container-app-name>" --output table
``` The output also displays the provisioning state of your connections: failed or succeeded.
service-connector Quickstart Portal Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-container-apps.md
Previously updated : 07/22/2022
-#Customer intent: As an app developer, I want to connect a containerized app to a storage account in the Azure portal using Service Connector.
Last updated : 08/09/2022+
+#Customer intent: As an app developer, I want to connect a Container App to a storage account in the Azure portal using Service Connector.
-# Quickstart: Create a service connection in Container Apps from the Azure portal
+# Quickstart: Create a service connection in Azure Container Apps from the Azure portal
-Get started with Service Connector by using the Azure portal to create a new service connection in Azure Container Apps.
+This quickstart shows you how to connect Azure Container Apps to other cloud resources using the Azure portal and Service Connector. Service Connector lets you quickly connect compute services to cloud services, while managing your connection's authentication and networking settings.
> [!IMPORTANT] > This feature in Container Apps is currently in preview.
Get started with Service Connector by using the Azure portal to create a new ser
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- An application deployed to Container Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one yet, [create and deploy a container to Container Apps](../container-apps/quickstart-portal.md).
+- An [app deployed to Container Apps](../container-apps/quickstart-portal.md) in a [region supported by Service Connector](./concept-region-support.md).
+- A target resource to connect your Container Apps to. For example, a [storage account](/azure/storage/common/storage-account-create).
## Sign in to Azure Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
-## Create a new service connection in Container Apps
+## Create a new service connection
You'll use Service Connector to create a new service connection in Container Apps.
-1. Select the **All resources** button on the left of the Azure portal. Type **Container Apps** in the filter and select the name of the container app you want to use in the list.
-2. Select **Service Connector** from the left table of contents. Then select **Create**.
-3. Select or enter the following settings.
+1. To create a new service connection in Container Apps, select the **Search resources, services, and docs (G+/)** search bar at the top of the Azure portal, type *Container Apps* in the filter, and select **Container Apps**.
+
+ :::image type="content" source="./media/container-apps-quickstart/select-container-apps.png" alt-text="Screenshot of the Azure portal, selecting Container Apps.":::
+
+1. Select the name of the Container Apps resource you want to connect to a target resource.
+
+1. Select **Service Connector (preview)** from the left table of contents. Then select **Create**.
+
+ :::image type="content" source="./media/container-apps-quickstart/select-service-connector.png" alt-text="Screenshot of the Azure portal, selecting Service Connector and creating new connection.":::
+
+1. Select or enter the following settings.
+
+ | Setting | Example | Description |
+ ||-||
+ | **Container** | *my-container* | The container of your container app. |
+ | **Service type** | *Storage - Blob* | The type of service you're going to connect to your container. |
+ | **Subscription** | *my-subscription* | The subscription that contains your target service (the service you want to connect to). The default value is the subscription that this container app is in. |
+ | **Connection name** | *storageblob_700ae* | The connection name that identifies the connection between your container app and target service. Use the connection name provided by Service Connector or choose your own connection name. |
+ | **Storage account** | *my-storage-account* | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
+ | **Client type** | *.NET* | The application stack that works with the target service you selected. The default value is **None**, which will generate a list of configurations. If you know the app stack or the client SDK used in the container you selected, select the matching app stack for the client type. |
+
+ :::image type="content" source="./media/container-apps-quickstart/basics.png" alt-text="Screenshot of the Azure portal, filling out the Basics tab.":::
+
+1. Select **Next: Authentication** to choose an authentication method: system-assigned managed identity (SMI), user-assigned managed identity (UMI), connection string, or service principal.
+
+ ### [SMI](#tab/SMI)
+
+ System-assigned managed identity is the recommended authentication option. Select **System-assigned managed identity** to connect through an identity that's automatically generated in Azure Active Directory and tied to the lifecycle of the service instance.
+
+ ### [UMI](#tab/UMI)
+
+ Select **User-assigned managed identity** to authenticate through a standalone identity assigned to one or more instances of an Azure service. Select a subscription that contains a user-assigned managed identity, and select the identity.
+
+ If you don't have one yet, create a user-assigned managed identity:
+
+ 1. Open the Azure portal in a new tab and search for **Managed identities**.
+ 1. Select **Managed identities** and select **Create**.
+ 1. Enter a subscription, resource group, region, and instance name.
+ 1. Select **Review + create** and then **Create**.
+ 1. Once your managed identity has been deployed, go to your Service Connector tab, select **Previous** and then **Next** to refresh the form's data, and under **User-assigned managed identity**, select the identity you've created.
+
+ For more information, go to [create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp).
+
+ ### [Connection string](#tab/CS)
+
+ Select **Connection string** to generate or configure one or multiple key-value pairs with pure secrets or tokens.
+
+ ### [Service principal](#tab/SP)
+
+ 1. Select **Service principal** to use a service principal that defines the access policy and permissions for the user/application in Azure Active Directory.
+ 1. Select a service principal from the list and enter a **secret**.
+
+
+
+1. Select **Next: Networking** to configure the network settings, and select **Configure firewall rules to enable access to target service** so that your container can reach the Blob Storage.
+
+ :::image type="content" source="./media/container-apps-quickstart/networking.png" alt-text="Screenshot of the Azure portal, connection networking set-up.":::
- | Setting | Suggested value | Description |
- | | - | -- |
- | **Container** | Your container | Select your Container Apps. |
- | **Service type** | Blob Storage | Target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
- | **Subscription** | One of your subscriptions | The subscription where your target service (the service you want to connect to) is located. The default value is the subscription that this container app is in. |
- | **Connection name** | Generated unique name | The connection name that identifies the connection between your container app and target service |
- | **Storage account** | Your storage account | The target storage account you want to connect to. If you choose a different service type, select the corresponding target service instance. |
- | **Client type** | The app stack in your selected container | Your application stack that works with the target service you selected. The default value is **none**, which will generate a list of configurations. If you know about the app stack or the client SDK in the container you selected, select the same app stack for the client type. |
+1. Select **Next: Review + Create** to review the provided information. Running the final validation takes a few seconds.
-4. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use access key to connect your Blob Storage account.
+ :::image type="content" source="./media/container-apps-quickstart/container-app-validation.png" alt-text="Screenshot of the Azure portal, Container App connection validation.":::
-5. Select **Next: Network** to select the network configuration. Then select **Enable firewall settings** to update firewall allowlist in Blob Storage so that your container apps can reach the Blob Storage.
+1. Select **Create** to create the service connection. The operation can take up to a minute to complete.
-6. Then select **Next: Review + Create** to review the provided information. Running the final validation takes a few seconds. Then select **Create** to create the service connection. It might take one minute to complete the operation.
+## View service connections
-## View service connections in Container Apps
+1. Container Apps connections are displayed under **Settings > Service Connector**.
-1. In **Service Connector**, select **Refresh** and you'll see a Container Apps connection displayed.
+1. Select **>** to expand the list and see the environment variables required by your application.
-1. Select **>** to expand the list. You can see the environment variables required by your application code.
+1. Select **Validate** to check your connection status, and select **Learn more** to review the connection validation details.
-1. Select **...** and then **Validate**. You can see the connection validation details in the pop-up panel on the right.
+ :::image type="content" source="./media/container-apps-quickstart/validation-result.png" alt-text="Screenshot of the Azure portal, get connection validation result.":::
## Next steps
-Follow the tutorials listed below to start building your own application with Service Connector:
+Check the guide below for more information about Service Connector:
> [!div class="nextstepaction"]
-> [Service Connector internals](./concept-service-connector-internals.md)
+> [Service Connector internals](./concept-service-connector-internals.md)
service-fabric Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/security-controls-policy.md
Previously updated : 08/12/2022 Last updated : 08/17/2022 # Azure Policy Regulatory Compliance controls for Azure Service Fabric
service-fabric Service Fabric Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-containers-overview.md
Compared to virtual machines, containers have the following advantages:
Service Fabric supports the deployment of Docker containers on Linux, and Windows Server containers on Windows Server 2016 and later, along with support for Hyper-V isolation mode. Container runtimes compatible with ServiceFabric:-- Linux: Mirantis Container Runtime + Ubuntu-- Windows: Mirantis Container Runtime + Windows Server 2019/2022
+- Linux: Docker
+- Windows:
+ - Windows Server 2022: Mirantis Container Runtime
+ - Windows Server 2019/2016: Docker EE
+ #### Docker containers on Linux
service-fabric Service Fabric Reliable Services Reliable Collections Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-collections-guidelines.md
The guidelines are organized as simple recommendations prefixed with the terms *
* Do not modify an object of custom type returned by read operations (for example, `TryPeekAsync` or `TryGetValueAsync`). Reliable Collections, just like Concurrent Collections, return a reference to the objects and not a copy. * Do deep copy the returned object of a custom type before modifying it. Since structs and built-in types are pass-by-value, you do not need to do a deep copy on them unless they contain reference-typed fields or properties that you intend to modify.
-* Do not use `TimeSpan.MaxValue` for time-outs. Time-outs should be used to detect deadlocks.
+* Do not use `TimeSpan.MaxValue` for timeouts. Timeouts should be used to detect deadlocks.
* Do not use a transaction after it has been committed, aborted, or disposed. * Do not use an enumeration outside of the transaction scope it was created in. * Do not create a transaction within another transaction's `using` statement because it can cause deadlocks.
The guidelines are organized as simple recommendations prefixed with the terms *
Here are some things to keep in mind:
-* The default time-out is four seconds for all the Reliable Collection APIs. Most users should use the default time-out.
+* The default timeout is 4 seconds for all the Reliable Collection APIs. Most users should use the default timeout.
* The default cancellation token is `CancellationToken.None` in all Reliable Collections APIs. * The key type parameter (*TKey*) for a Reliable Dictionary must correctly implement `GetHashCode()` and `Equals()`. Keys must be immutable. * To achieve high availability for the Reliable Collections, each service should have at least a target and minimum replica set size of 3.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Agent configuration logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentConfigurat
2. Install as follows (root account is not required, but root permissions are required): ```shell
- sudo ./install -d <Install Location> -r Agent -v VmWare -q
+ sudo ./install -r MS -v VmWare -d <Install Location> -q
``` 3. After the installation is finished, the Mobility service must be registered to the configuration server. Run the following command to register the Mobility service with the configuration server.
Agent configuration logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentConfigurat
Setting | Details |
-Syntax | `./install -d \<Install Location> -r \<MS/MT> -v VmWare -q`
+Syntax | `./install -r MS -v VmWare [-d <Install Location>] [-q]`
`-r` | Mandatory installation parameter. Specifies whether the mobility service (MS) or master target (MT) should be installed. `-d` | Optional parameter. Specifies the Mobility service installation location: `/usr/local/ASR`. `-v` | Mandatory. Specifies the platform on which Mobility service is installed. <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs.
Syntax | `./install -d \<Install Location> -r \<MS/MT> -v VmWare -q`
Setting | Details |
-Syntax | `cd /usr/local/ASR/Vx/bin<br/><br/> UnifiedAgentConfigurator.sh -i \<CSIP> -P \<PassphraseFilePath>`
+Syntax | `cd /usr/local/ASR/Vx/bin`<br/> `UnifiedAgentConfigurator.sh -i \<CSIP> -P \<PassphraseFilePath>`
`-i` | Mandatory parameter. `<CSIP>` specifies the configuration server's IP address. Use any valid IP address.
-`-P` | Mandatory. Full file path of the file in which the passphrase is saved. Use any valid folder.
+`-P` | Mandatory. Full file path of the file in which the passphrase is saved. [Learn more](/azure/site-recovery/vmware-azure-manage-configuration-server#generate-configuration-server-passphrase).
## Azure Virtual Machine agent
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
Setting | Details |
- Syntax | `cd <InstallLocation>/Vx/bin UnifiedAgentConfigurator.sh -c CSPrime -S -q`
- `-s` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder.
+ Syntax | `<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q`
+ `-S` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder.
`-c` | Mandatory. Used to define preview or legacy architecture. (CSPrime or CSLegacy). `-q` | Optional. Specifies whether to run the installer in silent mode.
spring-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Spring Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Set-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccount
First, install the preview extension for the Azure CLI if it's not already installed: ```azurecli
-az extension add -name storage-preview
+az extension add --name storage-preview
``` Then, to enable SFTP support, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and set the `--enable-sftp` parameter to true. Remember to replace the values in angle brackets with your own values:
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
stream-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Stream Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
synapse-analytics Quick Start Create Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/quick-start-create-lake-database.md
Title: QuickStart on Azure Synapse lake database and database templates
-description: Quickstart how to use the synapse lake database and the database templates
+ Title: Quickstart on Azure Synapse lake database and database templates
+description: In this quickstart, you learn how to create a new lake database leveraging database templates.
+ Previously updated : 11/02/2021 Last updated : 08/16/2022
-# Quickstart: Create a new Lake database leveraging database templates
+# Quickstart: Create a new lake database leveraging database templates
-This quick start gives you a run through of an end-2-end scenario on how you can apply the database templates to create a lake database, align data to your new model, and use the integrated experience to analyze the data.
+This quickstart gives you a complete sample scenario showing how you can apply database templates to create a lake database, align data to your new model, and use the integrated experience to analyze the data.
## Prerequisites-- At least Synapse User role permissions are required for exploring a lake database template from Gallery.-- Synapse Administrator, or Synapse Contributor permissions are required on the Synapse workspace for creating a lake database.-- Storage Blob Data Contributor permissions are required on data lake when using create table **From data lake** option.
+- At least **Synapse User** role permissions are required for exploring a lake database template from the gallery.
+- **Synapse Administrator** or **Synapse Contributor** permissions are required on the Azure Synapse workspace for creating a lake database.
+- **Storage Blob Data Contributor** permissions are required on the data lake when using the create table **From data lake** option.
## Create a lake database from database templates Use the new database templates functionality to create a lake database that you can use to configure your data model for the database.
-For our scenario we will use the Retail database template and select the following entities:
+For our scenario, we will use the `Retail` database template and select the following entities:
+ - **RetailProduct** - A product is anything that can be offered to a market that might satisfy a need by potential customers. That product is the sum of all physical, psychological, symbolic, and service attributes associated with it. - **Transaction** - The lowest level of executable work or customer activity. A transaction consists of one or more discrete events.
A transaction consists of one or more discrete events.
- **Party** - A party is an individual, organization, legal entity, social organization, or business unit of interest to the business. - **Customer** - A customer is an individual or legal entity that has or has purchased a product or service. - **Channel** - A channel is a means by which products or services are sold and/or distributed.
-The easiest way to find them is by using the search box above the different business areas that contain the tables.
+
+The easiest way to find entities is by using the search box above the different business areas that contain the tables.
![Database Template example](./media/quick-start-create-lake-database/model-example.png) ## Configure lake database
-After you have created the database, make sure the storage account & filepath is set to a location where you wish to store the data. The path will default to the primary storage account within Synapse analytics but can be changed to your needs.
+After you have created the database, make sure the storage account and the file path are set to a location where you wish to store the data. The path defaults to the primary storage account within Azure Synapse Analytics, but can be changed to suit your needs.
![Lake database example](./media/quick-start-create-lake-database/lake-database-example.png)
-To save your layout and make it available within Synapse Publish all changes. This step completes the setup of the lake database and makes it available to all components within Synapse Analytics and outside.
+To save your layout and make it available within Azure Synapse, select **Publish** to publish all changes. This step completes the setup of the lake database and makes it available to all components within and outside of Azure Synapse Analytics.
## Ingest data to lake database
INSERT INTO `retail_mil`.`customer` VALUES (1,'2021-02-18',1022,557,101,'Tailspi
## Query the data
-After the lake database is created, there are different ways to query the data. Currently we support SQL-Ondemand within Synapse that automatically understands the newly created lake database format and exposes the data through it.
+After the lake database is created, there are different ways to query the data. Currently, serverless SQL pools support lake databases and automatically understand the newly created lake database format.
```sql SELECT TOP (100) [ProductId]
SELECT TOP (100) [ProductId]
FROM [Retail_mil].[dbo].[RetailProduct] ```
-The other way to access the data within Synapse is to open a new Spark notebook and use the integrated experience there:
+The other way to access the data within Azure Synapse is to open a new Spark notebook and use the integrated experience there:
```spark df = spark.sql("SELECT * FROM `Retail_mil`.`RetailProduct`")
synapse-analytics Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/metadata/database.md
Title: Shared database
-description: Azure Synapse Analytics provides a shared metadata model where creating a Lake database in an Apache Spark pool will make it accessible from its serverless SQL pool engine.
+ Title: Lake database in serverless SQL pools
+description: Azure Synapse Analytics provides a shared metadata model where creating a lake database in an Apache Spark pool will make it accessible from its serverless SQL pool engine.
Previously updated : 10/05/2021 Last updated : 08/16/2022 -+
-# Azure Synapse Analytics shared Lake database
+# Access lake databases using serverless SQL pool in Azure Synapse Analytics
-Azure Synapse Analytics allows the different computational workspace engines to share [Lake databases](../database-designer/concepts-lake-database.md) and tables. Currently, the Lake databases and the tables (Parquet or CSV backed) that are created on the Apache Spark pools, [Database templates](../database-designer/concepts-database-templates.md) or Dataverse are automatically shared with the serverless SQL pool engine.
+The Azure Synapse Analytics workspace enables you to create two types of databases on top of a Spark data lake:
-A Lake database will become visible with that same name to all current and future Spark pools in the workspace, including the serverless SQL pool engine. You cannot add custom SQL objects (external tables, views, procedures, functions, schema, users) directly in a Lake database using the serverless SQL pool.
+- **Lake databases** where you can define tables on top of lake data using Apache Spark notebooks, database templates, or Microsoft Dataverse (previously Common Data Service). These tables are available for querying with T-SQL (Transact-SQL) by using the serverless SQL pool.
+- **SQL databases** where you can define your own databases and tables directly using the serverless SQL pools. You can use the T-SQL CREATE DATABASE and CREATE EXTERNAL TABLE statements to define the objects, and add more SQL views, procedures, and inline table-value functions on top of the tables.
-The Spark default database, called `default`, will also be visible in the serverless SQL pool context as a Lake database called `default`.
-You can't create a Lake database and then create another database with the same name in the serverless SQL pool.
+This article focuses on [lake databases](../database-designer/concepts-lake-database.md) in a serverless SQL pool in Azure Synapse Analytics.
-The Lake databases are created in the serverless SQL pool asynchronously. There will be a delay until they appear.
+Azure Synapse Analytics allows you to create lake databases and tables using Spark or the database designer, and then analyze data in the lake databases using the serverless SQL pool. The lake databases and the tables (Parquet or CSV-backed) that are created on the Apache Spark pools, [database templates](../database-designer/concepts-database-templates.md), or Dataverse are automatically available for querying with the serverless SQL pool engine. Lake databases and tables that are modified will become available in the serverless SQL pool after some time. There will be a delay until the changes made in Spark or the database designer appear in the serverless SQL pool.
-## Manage Lake database
+## Manage lake database
-To manage Spark created Lake databases, you can use Apache Spark pools or [Database designer](../database-designer/create-empty-lake-database.md). For example, create or delete a Lake database through a Spark pool job.
+To manage Spark-created lake databases, you can use Apache Spark pools or the [database designer](../database-designer/create-empty-lake-database.md). For example, create or delete a lake database through a Spark pool job. You can't create a lake database or the objects in lake databases using the serverless SQL pool.
-Objects in the Lake databases cannot be modified from a serverless SQL pool. Use [Database designer](../database-designer/modify-lake-database.md) or Apache Spark pools to modify the Lake databases.
+The Spark default database is available in the serverless SQL pool context as a lake database called `default`.
>[!NOTE]
->You cannot create multiple databases with the same name from different pools. If a SQL database in the serverless SQL pool is created, you won't be able to create a Lake database with the same name. Respectively, if you create a Lake database, you won't be able to create a serverless SQL pool database with the same name.
+> You cannot create a lake database and a SQL database with the same name in the serverless SQL pool.
+
+Tables in the lake databases cannot be modified from a serverless SQL pool. Use the [Database designer](../database-designer/modify-lake-database.md) or Apache Spark pools to modify a lake database. The serverless SQL pool enables you to make the following changes in a lake database using Transact-SQL commands:
+
+- Adding, altering, and dropping views, procedures, and inline table-value functions in a lake database.
+- Adding and removing database-scoped Azure AD users.
+- Adding Azure AD database users to, or removing them from, the **db_datareader** role. Azure AD database users in the **db_datareader** role have permission to read all tables in the lake database, but cannot read data from other databases.
## Security model
-The Lake databases and tables will be secured at the underlying storage level.
+The lake databases and tables are secured at two levels:
+
+- The underlying storage layer, by assigning Azure AD users one of the following:
+ - Azure role-based access control (Azure RBAC)
+ - Azure attribute-based access control (Azure ABAC) role
+ - ACL permissions
+- The SQL layer, where you can define an Azure AD user and grant SQL permissions to SELECT data from tables that reference the lake data.
+
+## Lake security model
+
+Access to lake database files is controlled using the lake permissions at the storage layer. Only Azure AD users can use tables in the lake databases, and they can access the data in the lake using their own identities.
-The security principal who creates a database is considered the owner of that database, and has all the rights to the database and its objects. `Synapse Administrator` and `Synapse SQL Administrator` will also have all the permissions on synchronized objects in a serverless SQL pool by default. Creating custom objects (including users) in synchronized SQL databases is not allowed.
+You can grant access to the underlying data used for external tables to a security principal, such as a user, an Azure AD application with an [assigned service principal](../../active-directory/develop/howto-create-service-principal-portal.md), or a security group. For data access, grant both of the following permissions:
-To give a security principal, such as a user, Azure AD app, or a security group, access to the underlying data used for external tables, you need to give them `read (R)` permissions on files (such as the table's underlying data files) and `execute (X)` on the folder where the files are stored + on every parent folder up to the root. You can read more about these permissions on [Access control lists(ACLs)](../../storage/blobs/data-lake-storage-access-control.md) page.
+- Grant `read (R)` permission on files (such as the table's underlying data files).
+- Grant `execute (X)` permission on the folder where the files are stored and on every parent folder up to the root. You can read more about these permissions on [Access control lists(ACLs)](../../storage/blobs/data-lake-storage-access-control.md) page.
-For example, in `https://<storage-name>.dfs.core.windows.net/<fs>/synapse/workspaces/<synapse_ws>/warehouse/mytestdb.db/myparquettable/`, security principals need to have `X` permissions on all the folders starting at the `<fs>` to the `myparquettable` and `R` permissions on `myparquettable` and files inside that folder, to be able to read a table in a database (synchronized or original one).
+For example, in `https://<storage-name>.dfs.core.windows.net/<fs>/synapse/workspaces/<synapse_ws>/warehouse/mytestdb.db/myparquettable/`, security principals need:
-If a security principal requires the ability to create objects or drop objects in a database, additional `W` permissions are required on the folders and files in the `warehouse` folder. Modifying objects in a database is not possible from serverless SQL pool, only from Spark pools and [database designer](../database-designer/modify-lake-database.md).
+- `execute (X)` permissions on all the folders from `<fs>` down to `myparquettable`.
+- `read (R)` permissions on `myparquettable` and the files inside that folder, to be able to read a table in a database (synchronized or original).
+
+If a security principal requires the ability to create objects or drop objects in a database, additional `write (W)` permissions are required on the folders and files in the `warehouse` folder. Modifying objects in a database is not possible from serverless SQL pool, only from Spark pools or the [database designer](../database-designer/modify-lake-database.md).
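
To illustrate the permissions described above, here's a minimal Python sketch (using the `azure-storage-file-datalake` and `azure-identity` packages) that appends the `execute (X)` entries on the parent folders and grants `read + execute (rX)` on the table folder for a security principal. The storage account, file system, folder layout, and object ID are placeholders, and you can set the same ACLs with Azure Storage Explorer or the Azure CLI instead.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholders only: storage account, file system, workspace folder, and the
# Azure AD object ID of the security principal you want to grant access to.
service = DataLakeServiceClient(
    account_url="https://<storage-name>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("<fs>")
principal = "user:<azure-ad-object-id>"

def add_acl_entry(path: str, entry: str) -> None:
    """Append an ACL entry to a folder, preserving its existing ACL."""
    directory = fs.get_directory_client(path)
    current_acl = directory.get_access_control()["acl"]
    directory.set_access_control(acl=f"{current_acl},{entry}")

# execute (X) on every folder from the file system down to the database folder
parents = [
    "synapse",
    "synapse/workspaces",
    "synapse/workspaces/<synapse_ws>",
    "synapse/workspaces/<synapse_ws>/warehouse",
    "synapse/workspaces/<synapse_ws>/warehouse/mytestdb.db",
]
for folder in parents:
    add_acl_entry(folder, f"{principal}:--x")

# read + execute (rX) on the table folder and everything inside it
table_path = "synapse/workspaces/<synapse_ws>/warehouse/mytestdb.db/myparquettable"
fs.get_directory_client(table_path).update_access_control_recursive(acl=f"{principal}:r-x")
```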
### SQL security model
-Synapse workspace provides a T-SQL endpoint that enables you to query the Lake database using the serverless SQL pool. As a prerequisite, you need to enable a user to access the shared Lake databases using the serverless SQL pool. There are two ways to allow a user to access the Lake databases:
-- You can assign a `Synapse SQL Administrator` workspace role or `sysadmin` server-level role in the serverless SQL pool. This role has full control over all databases (note that the Lake databases are still read-only even for the administrator role).-- You can grant `GRANT CONNECT ANY DATABASE` and `GRANT SELECT ALL USER SECURABLES` server-level permissions on serverless SQL pool to a login that will enable the login to access and read any database. This might be a good choice for assigning reader/non-admin access to a user.
+The Azure Synapse workspace provides a T-SQL endpoint that enables you to query the lake database using the serverless SQL pool. In addition to data access, the SQL interface enables you to control who can access the tables. You need to enable a user to access the shared lake databases using the serverless SQL pool. There are three types of users who can access the lake databases:
+
+- Administrators: Assign the **Synapse SQL Administrator** workspace role or **sysadmin** server-level role inside the serverless SQL pool. This role has full control over all databases. The **Synapse Administrator** and **Synapse SQL Administrator** roles also have all permissions on all objects in a serverless SQL pool, by default.
+- Workspace readers: Grant the server-level permissions **GRANT CONNECT ANY DATABASE** and **GRANT SELECT ALL USER SECURABLES** on serverless SQL pool to a login that will enable the login to access and read any database. This might be a good choice for assigning reader/non-admin access to a user.
+- Database readers: Create database users from Azure AD in your lake database and add them to the **db_datareader** role, which will enable them to read data in the lake database.
Learn more about [setting access control on shared databases here](../sql/shared-databases-access-control.md).
-## Custom SQL metadata objects
+## Custom SQL objects in lake databases
+
+Lake databases allow creation of custom T-SQL objects, such as schemas, procedures, views, and inline table-value functions (iTVFs). In order to create custom SQL objects, you **MUST** create a schema where you will place the objects. Custom SQL objects cannot be placed in the `dbo` schema because it is reserved for the lake tables that are defined in Spark, database designer, or Dataverse.
-Lake databases do not allow creation of custom T-SQL objects, such as schemas, users, procedures, views, and the external tables created on custom locations. If you need to create additional T-SQL objects that reference the shared tables in the Lake database, you have two options:
-- Create a custom SQL database (serverless) that will contain the custom schemas, views, and functions that will reference Lake database external tables using the 3-part names.-- Instead of Lake database use SQL database (serverless) that will reference data in the lake. SQL database (serverless) enables you to create external tables that can reference data in the lake same way as the Lake database, but it allows creation of additional SQL objects. A drawback is that these objects are not automatically available in Spark.
+> [!IMPORTANT]
+> You must create custom SQL schema where you will place your SQL objects. The custom SQL objects cannot be placed in the `dbo` schema. The `dbo` schema is reserved for the lake tables that are originally created in Spark or database designer.
## Examples
+### Create SQL database reader in lake database
+
+In this example, we add an Azure AD user to the lake database who can read data via shared tables. Users are added to the lake database via the serverless SQL pool. Then, assign the user to the **db_datareader** role so they can read data.
+
+```sql
+CREATE USER [customuser@contoso.com] FROM EXTERNAL PROVIDER;
+GO
+ALTER ROLE db_datareader
+ADD MEMBER [customuser@contoso.com];
+```
+ ### Create workspace-level data reader
-A login with `GRANT CONNECT ANY DATABASE` and `GRANT SELECT ALL USER SECURABLES` permisisons is able to read all tables using the serverless SQL pool, but not able to create SQL databases or modify the objects in them.
+A login with `GRANT CONNECT ANY DATABASE` and `GRANT SELECT ALL USER SECURABLES` permissions can read all tables using the serverless SQL pool, but it can't create SQL databases or modify the objects in them.
```sql CREATE LOGIN [wsdatareader@contoso.com] FROM EXTERNAL PROVIDER
GRANT CONNECT ANY DATABASE TO [wsdatareader@contoso.com]
GRANT SELECT ALL USER SECURABLES TO [wsdatareader@contoso.com] ```
-This script enables you to create users without admin priviliges who can read any table in Lake databases.
+This script enables you to create users without admin privileges who can read any table in lake databases.
### Create and connect to Spark database with serverless SQL pool
-First create a new Spark database named `mytestdb` using a Spark cluster you have already created in your workspace. You can achieve that, for example, using a Spark C# Notebook with the following .NET for Spark statement:
+First, create a new Spark database named `mytestlakedb` using a Spark cluster you have already created in your workspace. You can achieve that, for example, using a Spark C# Notebook with the following .NET for Spark statement:
```csharp
spark.Sql("CREATE DATABASE mytestlakedb")
```
-After a short delay, you can see the Lake database from serverless SQL pool. For example, run the following statement from serverless SQL pool.
+After a short delay, you can see the lake database from serverless SQL pool. For example, run the following statement from serverless SQL pool.
```sql
SELECT * FROM sys.databases;
```
Verify that `mytestlakedb` is included in the results.
+### Create custom SQL objects in lake database
+
+The following example shows how to create a custom view, procedure, and inline table-valued function (iTVF) in the `reports` schema:
+
+```sql
+CREATE SCHEMA reports
+GO
+
+CREATE OR ALTER VIEW reports.GreenReport
+AS SELECT puYear, puMonth,
+ fareAmount = SUM(fareAmount),
+ tipAmount = SUM(tipAmount),
+ mtaTax = SUM(mtaTax)
+FROM dbo.green
+GROUP BY puYear, puMonth
+GO
+
+CREATE OR ALTER PROCEDURE reports.GreenReportSummary
+AS BEGIN
+SELECT puYear, puMonth,
+ fareAmount = SUM(fareAmount),
+ tipAmount = SUM(tipAmount),
+ mtaTax = SUM(mtaTax)
+FROM dbo.green
+GROUP BY puYear, puMonth
+END
+GO
+
+CREATE OR ALTER FUNCTION reports.GreenDataReportMonthly(@year int)
+RETURNS TABLE
+RETURN ( SELECT puYear = @year, puMonth,
+ fareAmount = SUM(fareAmount),
+ tipAmount = SUM(tipAmount),
+ mtaTax = SUM(mtaTax)
+ FROM dbo.green
+ WHERE puYear = @year
+ GROUP BY puMonth )
+GO
+```
+
+
## Next steps

- [Learn more about Azure Synapse Analytics' shared metadata](overview.md)
- [Learn more about Azure Synapse Analytics' shared metadata Tables](table.md)
+- [Quickstart: Create a new lake database leveraging database templates](../database-designer/quick-start-create-lake-database.md)
+- [Tutorial: Use serverless SQL pool with Power BI Desktop & create a report](../sql/tutorial-connect-power-bi-desktop.md)
+- [Synchronize Apache Spark for Azure Synapse external table definitions in serverless SQL pool](../sql/develop-storage-files-spark-tables.md)
+- [Tutorial: Explore and Analyze data lakes with serverless SQL pool](../sql/tutorial-data-analyst.md)
+
synapse-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
United Kingdom | UK South
The following table lists the supported operating systems for Azure VMs and Azure Arc-enabled servers. Before you enable update management center (preview), ensure that the target machines meet the operating system requirements.
+>[!NOTE]
+> For Azure VMs, support is currently determined by the combination of the Publisher, Offer, and SKU of the VM image. Ensure that all three match a supported image.
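One way to check these values for an existing VM is to read the image reference from its storage profile. The following is a minimal sketch using the Az PowerShell module; the resource group and VM names are placeholders.

```PowerShell
# Show the Publisher, Offer, and SKU of the image an existing VM was created from.
# Replace the resource group and VM names with your own values.
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
$vm.StorageProfile.ImageReference | Format-List Publisher, Offer, Sku, Version
```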
+
# [Azure VMs](#tab/azurevm-os)

[Azure VMs](../virtual-machines/index.yml) are:
virtual-desktop App Attach Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-azure-portal.md
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\ContentDeliveryManager\De
```
-## Configure the MSIX app attach management interface
-
-Next, you'll need to download and configure the the MSIX app attach management interface for the Azure portal.
-
-To set up the management interface:
-
-1. [Open the Azure portal](https://portal.azure.com).
-2. If you get a prompt asking if you consider the extension trustworthy, select **Allow**.
-
- > [!div class="mx-imgBorder"]
- > ![A screenshot of the untrusted extensions window. "Allow" is highlighted in red.](media/untrusted-extensions.png)
-- ## Add an MSIX image to the host pool Next you'll need to add the MSIX image to your host pool.
To publish the apps:
After assigning MSIX apps to an app group, you'll need to grant users access to them. You can assign access by adding users or user groups to an app group with published MSIX applications. Follow the instructions in [Manage app groups with the Azure portal](manage-app-groups.md) to assign your users to an app group.
->[!NOTE]
->MSIX app attach remote apps may disappear from the feed when you test remote apps during public preview. The apps don't appear because the host pool you're using in the evaluation environment is being served by an RD Broker in the production environment. Because the RD Broker in the production environment doesn't register the presence of the MSIX app attach remote apps, the apps won't appear in the feed.
-
## Change MSIX package state

Next, you'll need to change the MSIX package state to either **Active** or **Inactive**, depending on what you want to do with the package. Active packages are packages your users can interact with once they're published. Inactive packages are ignored by Azure Virtual Desktop, so your users can't interact with the apps inside.
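If you prefer scripting to the portal, the package state can also be toggled from PowerShell. The following is a minimal sketch that assumes the Az.DesktopVirtualization module is installed; the resource group, host pool, and package full name are placeholder values.

```PowerShell
# Connect first with Connect-AzAccount.
# Set an MSIX package to Active; use -IsActive:$false to set it to Inactive.
Update-AzWvdMsixPackage `
    -ResourceGroupName "myResourceGroup" `
    -HostPoolName "myHostPool" `
    -FullName "MyApp_1.0.0.0_x64__abcdefghijkl" `
    -IsActive:$true
```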
virtual-machine-scale-sets Tutorial Autoscale Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-powershell.md
# Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell > [!NOTE]
-> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This tutorial uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
[!INCLUDE [requires-azurerm](../../includes/requires-azurerm.md)]
virtual-machines Tutorial Manage Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-manage-disks.md
When you take a disk snapshot, Azure creates a read only, point-in-time copy of
### Create snapshot
-Before you create a snapshot, you need the ID or name of the disk. Use [az vm show](/cli/azure/vm#az-vm-show) to shot the disk ID. In this example, the disk ID is stored in a variable so that it can be used in a later step.
+Before you create a snapshot, you need the ID or name of the disk. Use [az vm show](/cli/azure/vm#az-vm-show) to show the disk ID. In this example, the disk ID is stored in a variable so that it can be used in a later step.
```azurecli-interactive osdiskid=$(az vm show \
virtual-machines Maintenance And Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-and-updates.md
In the rare case where VMs need to be rebooted for planned maintenance, you'll b
During the *self-service phase*, which typically lasts four weeks, you start the maintenance on your VMs. As part of the self-service, you can query each VM to see its status and the result of your last maintenance request.
+> [!NOTE]
+> For VM-series that do not support [Live Migration](#live-migration), data on local (ephemeral) disks can be lost during maintenance events. See each individual VM-series page for information on whether Live Migration is supported.
+ When you start self-service maintenance, your VM is redeployed to an already updated node. Because the VM is redeployed, the temporary disk is lost and dynamic IP addresses associated with the virtual network interface are updated. If an error arises during self-service maintenance, the operation stops, the VM isn't updated, and you get the option to retry the self-service maintenance.
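For reference, a minimal Az PowerShell sketch for checking a VM's maintenance status and starting self-service maintenance is shown below; the resource group and VM names are placeholders, and `MaintenanceRedeployStatus` is only populated while a maintenance wave targets the VM.

```PowerShell
# Check whether customer-initiated (self-service) maintenance is currently allowed for the VM.
$status = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" -Status
$status.MaintenanceRedeployStatus

# Start self-service maintenance, which redeploys the VM to an already updated node.
Restart-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" -PerformMaintenance
```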
virtual-machines Premium Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/premium-storage-performance.md
For more information on VM sizes and on the IOPS, throughput, and latency availa
| **VM size** |Use a VM size that offers IOPS greater than your application requirement. |Use a VM size with throughput limit greater than your application requirement. |Use a VM size that offers scale limits greater than your application requirement. |
| **Disk size** |Use a disk size that offers IOPS greater than your application requirement. |Use a disk size with Throughput limit greater than your application requirement. |Use a disk size that offers scale limits greater than your application requirement. |
| **VM and Disk Scale Limits** |IOPS limit of the VM size chosen should be greater than total IOPS driven by storage disks attached to it. |Throughput limit of the VM size chosen should be greater than total Throughput driven by premium storage disks attached to it. |Scale limits of the VM size chosen must be greater than total scale limits of attached premium storage disks. |
-| **Disk Caching** |Enable ReadOnly Cache on premium storage disks with Read heavy operations to get higher Read IOPS. | &nbsp; |Enable ReadOnly Cache on premium storage disks with Ready heavy operations to get very low Read latencies. |
+| **Disk Caching** |Enable ReadOnly Cache on premium storage disks with Read heavy operations to get higher Read IOPS. | &nbsp; |Enable ReadOnly Cache on premium storage disks with Read heavy operations to get very low Read latencies. |
| **Disk Striping** |Use multiple disks and stripe them together to get a combined higher IOPS and Throughput limit. The combined limit per VM should be higher than the combined limits of attached premium disks. | &nbsp; | &nbsp; |
| **Stripe Size** |Smaller stripe size for random small IO pattern seen in OLTP applications. For example, use stripe size of 64 KB for SQL Server OLTP application. |Larger stripe size for sequential large IO pattern seen in Data Warehouse applications. For example, use 256 KB stripe size for SQL Server Data warehouse application. | &nbsp; |
| **Multithreading** |Use multithreading to push higher number of requests to Premium Storage that will lead to higher IOPS and Throughput. For example, on SQL Server set a high MAXDOP value to allocate more CPUs to SQL Server. | &nbsp; | &nbsp; |
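As an illustration of the striping guidance above, the following is a minimal sketch for a Windows VM using Storage Spaces; the pool name, disk name, and 64 KB interleave (matching the OLTP recommendation) are example values, not prescriptions.

```PowerShell
# Pool all data disks that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Create a simple (striped) virtual disk across all columns with a 64 KB interleave (stripe size).
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" `
    -ResiliencySettingName Simple -NumberOfColumns $disks.Count -Interleave 65536 -UseMaximumSize
```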
virtual-machines Security Controls Policy Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy-image-builder.md
Title: Azure Policy Regulatory Compliance controls for Azure VM Image Builder description: Lists Azure Policy Regulatory Compliance controls available for Azure VM Image Builder. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
virtual-machines Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Machines description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Machines . These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
virtual-machines Share Gallery Direct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md
There are three main ways to share images in an Azure Compute Gallery, depending
## Limitations

During the preview:
-- You can only share to subscriptions that are also in the preview.
- You can only share to 30 subscriptions and 5 tenants.
- Only images can be shared. You can't directly share a [VM application](vm-applications.md) during the preview.
- A direct shared gallery can't contain encrypted image versions. Encrypted images can't be created within a gallery that is directly shared.
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/hb-hc-known-issues.md
This article attempts to list recent common issues and their solutions when using the [H-series](../../sizes-hpc.md) and [N-series](../../sizes-gpu.md) HPC and GPU VMs.
+## InfiniBand Errors on HBv3
+As of the week of August 12, we've identified a bug in the firmware of the ConnectX-6 InfiniBand NIC adapters in HBv3-series VMs that can cause MPI jobs to fail intermittently. This issue applies to all VM sizes within the HBv3-series. It doesn't apply to other H-series VMs (HB-series, HBv2-series, or HC-series). A firmware update will be issued in the coming days to remediate this issue.
+ ## Memory Capacity on Standard_HB120rs_v2
-As of the week of December 6, 2021 we are temporarily reducing the amount of memory (RAM) exposed to the Standard_HB120rs_v2 VM size, otherwise known as [HBv2](../../hbv2-series.md). We are reducing the memory footprint to 432 GB from its current value of 456 GB (a 5.2% reduction). This reduction is temporary and the full memory capacity should be restored in early 2022. We are making this change to ensure to address an issue that can result in long VM deployment times or VM deployments for which not all devices function correctly. Note that the reduction in memory capacity does not affect VM performance.
+As of the week of December 6, 2021, we've temporarily reduced the amount of memory (RAM) exposed to the Standard_HB120rs_v2 VM size, otherwise known as [HBv2](../../hbv2-series.md). We've reduced the memory footprint to 432 GB from its previous value of 456 GB (a 5.2% reduction). This reduction is temporary, and the full memory capacity should be restored in early 2022. We've made this change to address an issue that can result in long VM deployment times or VM deployments for which not all devices function correctly. The reduction in memory capacity doesn't affect VM performance.
## Cache topology on Standard_HB120rs_v3
-`lstopo` displays incorrect cache topology on the Standard_HB120rs_v3 VM size. It may display that thereΓÇÖs only 32 MB L3 per NUMA. However in practice there is indeed 120 MB L3 per NUMA as expected since the same 480 MB of L3 to the entire VM is available as with the other constrained-core HBv3 VM sizes. This is a cosmetic error in displaying the correct value, which should not impact workloads.
+`lstopo` displays incorrect cache topology on the Standard_HB120rs_v3 VM size. It may display that there's only 32 MB of L3 per NUMA domain. However, in practice there is 120 MB of L3 per NUMA domain as expected, since the same 480 MB of L3 is available to the entire VM, as with the other constrained-core HBv3 VM sizes. This is a cosmetic error in displaying the correct value, which should not impact workloads.
## qp0 Access Restriction

To prevent low-level hardware access that can result in security vulnerabilities, Queue Pair 0 is not accessible to guest VMs. This should only affect actions typically associated with administration of the ConnectX InfiniBand NIC, and running some InfiniBand diagnostics like ibdiagnet, but not end-user applications.

## MOFED installation on Ubuntu

On Ubuntu-18.04 based marketplace VM images with kernel versions `5.4.0-1039-azure #42` and newer, some older Mellanox OFED versions are incompatible, causing an increase in VM boot time of up to 30 minutes in some cases. This has been reported for both Mellanox OFED versions 5.2-1.0.4.0 and 5.2-2.2.0.0. The issue is resolved with Mellanox OFED 5.3-1.0.0.1.
-If it is necessary to use the incompatible OFED, a solution is to use the **Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290** marketplace VM image or older and not to update the kernel.
+If it is necessary to use the incompatible OFED, a solution is to use the **Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290** marketplace VM image (or older) and not to update the kernel.
## MPI QP creation errors
-If in the midst of running any MPI workloads, InfiniBand QP creation errors such as shown below, are thrown, we suggest rebooting the VM and re-trying the workload. This issue will be fixed in the future.
+If in the midst of running any MPI workloads, InfiniBand QP creation errors such as shown below, are thrown, we suggest rebooting the VM and retrying the workload. This issue will be fixed in the future.
```bash ib_mlx5_dv.c:150 UCX ERROR mlx5dv_devx_obj_create(QP) failed, syndrome 0: Invalid argument
max_qp: 4096
## Accelerated Networking on HB, HC, HBv2, HBv3 and NDv2
-[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](../../hb-series.md), [HC](../../hc-series.md), [HBv2](../../hbv2-series.md), [HBv3](../../hbv3-series.md) and [NDv2](../../ndv2-series.md). This capability now allows enhanced throughout (up to 30 Gbps) and latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact behavior of certain MPI implementations when running jobs over InfiniBand. Specifically the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to earlier mlx5_0) and this may require tweaking of the MPI command lines especially when using the UCX interface (commonly with OpenMPI and HPC-X). The simplest solution currently may be to use the latest HPC-X on the CentOS-HPC VM images or disable Accelerated Networking if not required.
+[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](../../hb-series.md), [HC](../../hc-series.md), [HBv2](../../hbv2-series.md), [HBv3](../../hbv3-series.md) and [NDv2](../../ndv2-series.md). This capability now allows enhanced throughput (up to 30 Gbps) and improved latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact behavior of certain MPI implementations when running jobs over InfiniBand. Specifically, the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to the earlier mlx5_0). This may require tweaking of the MPI command lines, especially when using the UCX interface (commonly with OpenMPI and HPC-X). The simplest solution currently may be to use the latest HPC-X on the CentOS-HPC VM images or disable Accelerated Networking if not required.
More details on this are available on this [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-and-hbv2/ba-p/2067965) with instructions on how to address any observed issues. ## InfiniBand driver installation on non-SR-IOV VMs
InfiniBand can be configured on the SR-IOV enabled VM sizes with the OFED driver
## Duplicate MAC with cloud-init with Ubuntu on H-series and N-series VMs
-There is a known issue with cloud-init on Ubuntu VM images as it tries to bring up the IB interface. This can happen either on VM reboot or when trying to create a VM image after generalization. The VM boot logs may show an error like so:
+There's a known issue with cloud-init on Ubuntu VM images as it tries to bring up the IB interface. This can happen either on VM reboot or when trying to create a VM image after generalization. The VM boot logs may show an error like so:
```console
"Starting Network Service...RuntimeError: duplicate mac found! both 'eth1' and 'ib0' have mac".
```
-This 'duplicate MAC with cloud-init on Ubuntu" is a known issue. This will be resolved in newer kernels. IF the issue is encountered, the workaround is:
+This "duplicate MAC with cloud-init on Ubuntu" issue is known and will be resolved in newer kernels. If this issue is encountered, the workaround is:
1) Deploy the (Ubuntu 18.04) marketplace VM image
2) Install the necessary software packages to enable IB ([instruction here](https://techcommunity.microsoft.com/t5/azure-compute/configuring-infiniband-for-ubuntu-hpc-and-gpu-vms/ba-p/1221351))
3) Edit waagent.conf to change EnableRDMA=y
virtual-machines High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker.md
vm-windows Previously updated : 05/26/2022 Last updated : 08/16/2022
Use the following content for the input file. You need to adapt the content to y
### **[A]** Assign the custom role to the Service Principal Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the Service Principal. Do not use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
-Make sure to assign the role for both cluster nodes.
+Make sure to assign the custom role to the service principal at all VM (cluster node) scopes.
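As an illustration, a role assignment at each VM scope could look like the following Az PowerShell sketch; the application ID, resource group, and node names are placeholders, and the sketch assumes the custom role has already been created.

```PowerShell
# Assign the custom "Linux Fence Agent Role" to the service principal at the scope of each cluster node VM.
$appId = "11111111-1111-1111-1111-111111111111"   # application (client) ID of the service principal (placeholder)
$resourceGroup = "myResourceGroup"

foreach ($vmName in @("prod-cl1-0", "prod-cl1-1")) {
    $vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
    New-AzRoleAssignment -ApplicationId $appId `
        -RoleDefinitionName "Linux Fence Agent Role" `
        -Scope $vm.Id
}
```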
### **[1]** Create the STONITH devices
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
vm-windows Previously updated : 05/26/2022 Last updated : 08/16/2022
Use the following content for the input file. You need to adapt the content to y
Assign the custom role *Linux fence agent Role* that you already created to the service principal. Do *not* use the *Owner* role anymore. For more information, see [Assign Azure roles by using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
-Be sure to assign the role for both cluster nodes.
+Make sure to assign the custom role to the service principal at all VM (cluster node) scopes.
## Install the cluster
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual Network NAT is a software defined networking service. A NAT gateway won'
* When NAT gateway is configured to a virtual network where standard Load balancer with outbound rules already exists, NAT gateway will take over all outbound traffic moving forward. There will be no drops in traffic flow for existing connections on Load balancer. All new connections will use NAT gateway.
-* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more.
+* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more.
+
+* Outbound connectivity follows this order of precedence:
+Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP addresses on virtual machines >> Load balancer outbound rules >> default system
* NAT gateway supports TCP and UDP protocols only. ICMP isn't supported.
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
This article provides guidance on how to configure your NAT gateway to ensure ou
Check the following configurations to ensure that NAT gateway can be used to direct traffic outbound (see the sketch after this list for one way to inspect them):
1. At least one public IP address or one public IP prefix is attached to NAT gateway. At least one public IP address must be associated with the NAT gateway for it to provide outbound connectivity.
2. At least one subnet is attached to a NAT gateway. You can attach multiple subnets to a NAT gateway for going outbound, but those subnets must exist within the same virtual network. NAT gateway cannot span beyond a single virtual network.
-3. No [NSG rules](../network-security-groups-overview.md#outbound) or [UDRs](#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) are blocking NAT gateway from directing traffic outbound to the internet.
+3. No [NSG rules](../network-security-groups-overview.md#outbound) or [UDRs](#virtual-appliance-udrs-and-expressroute-override-nat-gateway-for-routing-outbound-traffic) are blocking NAT gateway from directing traffic outbound to the internet.
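As mentioned above, a quick way to inspect these associations is to query the NAT gateway resource. The following is a minimal Az PowerShell sketch; the resource group and gateway names are placeholders.

```PowerShell
# List the public IP addresses, public IP prefixes, and subnets associated with a NAT gateway.
$nat = Get-AzNatGateway -ResourceGroupName "myResourceGroup" -Name "myNatGateway"
$nat.PublicIpAddresses | Format-Table Id
$nat.PublicIpPrefixes  | Format-Table Id
$nat.Subnets           | Format-Table Id
```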
### How to validate connectivity
Test and resolve issues with VMs holding on to old SNAT IP addresses by:
If you are still having trouble, open a support case for further troubleshooting.
-### Virtual appliance UDRs and VPN ExpressRoute override NAT gateway for routing outbound traffic
+### Virtual appliance UDRs and ExpressRoute override NAT gateway for routing outbound traffic
When forced tunneling with a custom UDR is enabled to direct traffic to a virtual appliance or VPN through ExpressRoute, the UDR or ExpressRoute takes precedence over NAT gateway for directing internet bound traffic. To learn more, see [custom UDRs](../virtual-networks-udr-overview.md#custom-routes). The order of precedence for internet routing configurations is as follows:
-Virtual appliance UDR / VPN ExpressRoute >> NAT gateway >> default system
+Virtual appliance UDR / ExpressRoute >> NAT gateway >> instance level public IP addresses >> outbound rules on Load balancer >> default system
Test and resolve issues with a virtual appliance UDR or VPN ExpressRoute overriding your NAT gateway by:
1. [Testing that the NAT gateway public IP](./quickstart-create-nat-gateway-portal.md#test-nat-gateway) is used for outbound traffic. If a different IP is being used, it could be because of a custom UDR; follow the remaining steps on how to check for and remove custom UDRs.
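One way to check for such an override is to list the effective routes on the VM's network interface and look for a 0.0.0.0/0 route whose next hop isn't Internet. The following Az PowerShell sketch uses placeholder names.

```PowerShell
# Show effective routes for a NIC; a 0.0.0.0/0 route with a VirtualAppliance or
# VirtualNetworkGateway next hop indicates that a UDR or ExpressRoute/VPN overrides NAT gateway.
Get-AzEffectiveRouteTable -ResourceGroupName "myResourceGroup" -NetworkInterfaceName "myVmNic" |
    Where-Object { $_.AddressPrefix -contains "0.0.0.0/0" } |
    Format-Table AddressPrefix, NextHopType, NextHopIpAddress
```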
virtual-network Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/12/2022 Last updated : 08/17/2022
web-application-firewall Waf Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/waf-azure-policy.md
Azure Web Application Firewall (WAF) combined with Azure Policy can help enforce
There are several built-in Azure Policy definitions to manage WAF resources. A breakdown of the policy definitions and their functionalities is as follows:
-1. **Web Application Firewall (WAF) should be enabled for Azure Front Door Service**: Azure Front Door Services are evaluated on if there is a WAF present on resource creation. The policy definition has three effects: Audit, Deny, and Disable. Audit tracks when an Azure Front Door Service does not have a WAF and lets users see what Azure Front Door Service does not comply. Deny prevents any Azure Front Door Service from being created if a WAF is not attached. Disabled turns off the policy assignment.
+1. **Azure Web Application Firewall should be enabled for Azure Front Door entry-points**: Azure Front Door Services are evaluated on whether a WAF is present at resource creation. The policy definition has three effects: Audit, Deny, and Disable. Audit tracks when an Azure Front Door Service does not have a WAF and lets users see which Azure Front Door Services do not comply. Deny prevents any Azure Front Door Service from being created if a WAF is not attached. Disabled turns off the policy assignment.
2. **Web Application Firewall (WAF) should be enabled for Application Gateway**: Application Gateways are evaluated on whether a WAF is present at resource creation. The policy definition has three effects: Audit, Deny, and Disable. Audit tracks when an Application Gateway does not have a WAF and lets users see which Application Gateways do not comply. Deny prevents any Application Gateway from being created if a WAF is not attached. Disabled turns off the policy assignment.