Updates from: 07/09/2021 03:05:50
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-datawiza.md
+
+ Title: Tutorial to configure Azure Active Directory B2C with Datawiza
+
+description: Learn how to integrate Azure AD B2C authentication with Datawiza for secure hybrid access
+ Last updated : 7/07/2021
+# Tutorial: Configure Azure AD B2C with Datawiza to provide secure hybrid access
+
+In this sample tutorial, learn how to integrate Azure Active Directory (AD) B2C with [Datawiza](https://www.datawiza.com/).
+Datawiza's [Datawiza Access Broker (DAB)](https://www.datawiza.com/access-broker) enables single sign-on (SSO) and granular access control, extending Azure AD B2C to protect on-premises legacy applications. Using this solution, enterprises can quickly transition from legacy to Azure AD B2C without rewriting applications.
+
+## Prerequisites
+
+To get started, you'll need:
+
+- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+
+- An [Azure AD B2C tenant](https://docs.microsoft.com/azure/active-directory-b2c/tutorial-create-tenant) that's linked to your Azure subscription.
+
+- [Docker](https://docs.docker.com/get-docker/) is required to run DAB. Your applications can run on any platform, such as a virtual machine or bare metal.
+
+- An on-premises application that you'll transition from a legacy identity system to Azure AD B2C. In this sample, DAB is deployed on the same server as the application. The application runs on localhost:3001, and DAB proxies traffic to the application via localhost:9772. Traffic to the application reaches DAB first and is then proxied to the application.
+
+## Scenario description
+
+Datawiza integration includes the following components:
+
+- **Azure AD B2C**: The authorization server that's responsible for verifying the user's credentials. Authenticated users may access on-premises applications using a local account stored in the Azure AD B2C directory.
+
+- **Datawiza Access Broker (DAB)**: The service that handles user sign-on and transparently passes identity to applications through HTTP headers.
+
+- **Datawiza Cloud Management Console (DCMC)**: A centralized management console that manages DAB. DCMC provides a UI and RESTful APIs for administrators to manage the configuration of DAB and its access control policies.
+
+The following architecture diagram shows the implementation.
+
+![Image shows the architecture of an Azure AD B2C integration with Datawiza for secure access to hybrid applications](./media/partner-datawiza/datawiza-architecture-diagram.png)
+
+| Steps | Description |
+|:-|:-|
+| 1. | The user makes a request to access the on-premises hosted application. DAB proxies the request made by the user to the application.|
+| 2. | The DAB checks the user's authentication state. If it doesn't receive a session token, or the supplied session token is invalid, then it sends the user to Azure AD B2C for authentication.|
+| 3. | Azure AD B2C sends the user request to the endpoint specified during the DAB application's registration in the Azure AD B2C tenant.|
+| 4. | The DAB evaluates access policies and calculates attribute values to be included in HTTP headers forwarded to the application. During this step, the DAB may call out to the IdP to retrieve the information needed to set the header values correctly. The DAB sets the header values and sends the request to the application. |
+|5. | The user is now authenticated and has access to the application.|
+
+## Onboard with Datawiza
+
+To integrate your legacy on-premises app with Azure AD B2C, contact [Datawiza](https://login.datawiza.com/df3f213b-68db-4966-bee4-c826eea4a310/b2c_1a_linkage/oauth2/v2.0/authorize?response_type=id_token&scope=openid%20profile&client_id=4f011d0f-44d4-4c42-ad4c-88c7bbcd1ac8&redirect_uri=https%3A%2F%2Fconsole.datawiza.com%2Fhome&state=eyJpZCI6Ijk3ZjI5Y2VhLWQ3YzUtNGM5YS1hOWU2LTg1MDNjMmUzYWVlZCIsInRzIjoxNjIxMjg5ODc4LCJtZXRob2QiOiJyZWRpcmVjdEludGVyYWN0aW9uIn0%3D&nonce=08e1b701-6e42-427b-894b-c5d655a9a6b0&client_info=1&x-client-SKU=MSAL.JS&x-client-Ver=1.3.3&client-request-id=3ac285ba-2d4d-4ae5-8dc2-9295ff6047c6&response_mode=fragment).
+
+## Configure your Azure AD B2C tenant
+
+1. [Register](https://docs.datawiza.com/idp/azureb2c.html#microsoft-azure-ad-b2c-configuration) your web application in your Azure AD B2C tenant.
+
+2. [Configure a sign-up and sign-in user flow](https://docs.datawiza.com/idp/azureb2c.html#configure-a-user-flow) in the Azure portal.
+
+ >[!Note]
+ >You'll need the tenant name, user flow name, client ID, and client secret later when you set up DAB in the DCMC.
+
+## Create an application on DCMC
+
+1. [Create an application](https://docs.datawiza.com/step-by-step/step2.html) and generate a key pair of `PROVISIONING_KEY` and `PROVISIONING_SECRET` for this application on the DCMC.
+
+2. [Configure Azure AD B2C](https://docs.datawiza.com/tutorial/web-app-azure-b2c.html#part-i-azure-ad-b2c-configuration) as the Identity Provider (IdP).
+
+![Image shows values to configure the IdP](./media/partner-datawiza/configure-idp.png)
+
+## Run DAB with a header-based application
+
+1. You can use either Docker or Kubernetes to run DAB. The Docker image is needed to create a sample header-based application. For more details, see the instructions on how to [configure DAB and SSO integration](https://docs.datawiza.com/step-by-step/step3.html); for Kubernetes-specific instructions, see how to [deploy DAB with Kubernetes](https://docs.datawiza.com/tutorial/web-app-AKS.html). A sample `docker-compose.yml` file is provided for you to download and use. Log in to the container registry to download the images of DAB and the header-based application by following [these instructions](https://docs.datawiza.com/step-by-step/step3.html#important-step).
+
+ ```YML
+ version: '3'
+
+ services:
+   datawiza-access-broker:
+     image: registry.gitlab.com/datawiza/access-broker
+     container_name: datawiza-access-broker
+     restart: always
+     ports:
+       - "9772:9772"
+     environment:
+       PROVISIONING_KEY: #############################
+       PROVISIONING_SECRET: #############################
+
+   header-based-app:
+     image: registry.gitlab.com/datawiza/header-based-app
+     container_name: ab-demo-header-app
+     restart: always
+     environment:
+       CONNECTOR: B2C
+     ports:
+       - "3001:3001"
+ ```
+
+2. After you run `docker-compose -f docker-compose.yml up`, the header-based application should have SSO enabled with Azure AD B2C. Open a browser and go to `http://localhost:9772/`. A quick command-line check of the sign-in redirect is sketched after these steps.
+
+3. An Azure AD B2C sign-in page appears.
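+
+A minimal way to confirm from the command line that DAB is running and redirecting unauthenticated requests to Azure AD B2C is sketched below. This check isn't part of the Datawiza instructions; it assumes DAB is listening on `localhost:9772` as configured in the `docker-compose.yml` file above, and the exact redirect URL depends on your tenant and user flow.
+
+ ```powershell
+ # Start DAB and the sample header-based application in the background.
+ docker-compose -f docker-compose.yml up -d
+
+ # Request the proxied application without a session, and inspect the redirect instead of following it.
+ $request = [System.Net.WebRequest]::Create("http://localhost:9772/")
+ $request.AllowAutoRedirect = $false
+ $response = $request.GetResponse()
+
+ $response.StatusCode             # Expect a redirect status such as Found (302)
+ $response.Headers["Location"]    # Should point to your Azure AD B2C authorization endpoint
+ $response.Close()
+ ```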
+
+## Pass user attributes to the header-based application
+
+1. DAB gets user attributes from the IdP and can pass them to the application via headers or cookies. See the instructions on how to [pass user attributes](https://docs.datawiza.com/step-by-step/step4.html) such as email address, first name, and last name to the header-based application. A quick way to inspect the headers that DAB forwards is sketched at the end of this section.
+
+2. After successfully configuring the user attributes, you should see a green check mark for each of the user attributes.
+
+ ![Image shows passed user attributes](./media/partner-datawiza/pass-user-attributes.png)
+
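+If you want to see exactly which headers and values DAB forwards, one option (not part of the Datawiza instructions) is to temporarily run a small header-echo listener in place of the sample application on the port DAB proxies to. The sketch below assumes that upstream is reachable at `localhost:3001`, as in the compose file above; run it from an elevated PowerShell session.
+
+ ```powershell
+ # Throwaway listener that prints the headers of the first request DAB forwards.
+ $listener = [System.Net.HttpListener]::new()
+ $listener.Prefixes.Add("http://localhost:3001/")
+ $listener.Start()
+
+ $context = $listener.GetContext()        # Blocks until a request arrives
+ $context.Request.Headers.AllKeys |
+     ForEach-Object { "{0}: {1}" -f $_, $context.Request.Headers[$_] }
+
+ $context.Response.StatusCode = 200
+ $context.Response.Close()
+ $listener.Stop()
+ ```
+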
+## Test the flow
+
+1. Navigate to the on-premises application URL.
+
+2. The DAB should redirect to the page you configured in your user flow.
+
+3. Select the IdP from the list on the page.
+
+4. Once you're redirected to the IdP, supply your credentials as requested, including an Azure AD Multi-Factor Authentication (MFA) token if required by that IdP.
+
+5. After successfully authenticating, you should be redirected to Azure AD B2C, which forwards the application request to the DAB redirect URI.
+
+6. The DAB evaluates policies, calculates headers, and sends the user to the upstream application.
+
+7. You should see the requested application.
+
+## Next steps
+
+For additional information, review the following articles:
+
+- [Custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-overview)
+
+- [Get started with custom policies in Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/custom-policy-get-started?tabs=applications)
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-gallery.md
# Azure Active Directory B2C ISV partners
-Our ISV partner network extends our solution capabilities to help you build seamless end-user experiences. With Azure AD B2C, you can integrate with ISV partners to enable Multi-Factor authentication (MFA) methods, do role-based access control, enable identity verification and proofing, improve security with bot detection and fraud protection, and meet Payment Services Directive 2 (PSD2) Secure Customer Authentication (SCA) requirements. Use our detailed sample walkthroughs to learn how to integrate apps with the ISV partners.
+Our ISV partner network extends our solution capabilities to help you build seamless end-user experiences. With Azure AD B2C, you can integrate with ISV partners to enable multifactor authentication (MFA) methods, do role-based access control, enable identity verification and proofing, improve security with bot detection and fraud protection, and meet Payment Services Directive 2 (PSD2) Secure Customer Authentication (SCA) requirements. Use our detailed sample walkthroughs to learn how to integrate apps with the ISV partners.
>[!NOTE]
>The [Azure Active Directory B2C community site on GitHub](https://azure-ad-b2c.github.io/azureadb2ccommunity.io/) also provides sample custom policies from the community.
Microsoft partners with the following ISVs to provide secure hybrid access to on-premises applications.
| ISV partner | Description and integration walkthroughs |
|:-|:--|
+| ![Screenshot of a Datawiza logo](./medi) enables SSO and granular access control for your applications and extends Azure AD B2C to protect on-premises legacy applications. |
| ![Screenshot of a Ping logo](./medi) enables secure hybrid access to on-premises legacy applications across multiple clouds. |
| ![Screenshot of a strata logo](./medi) provides secure hybrid access to on-premises applications by enforcing consistent access policies, keeping identities in sync, and making it simple to transition applications from legacy identity systems to standards-based authentication and access control provided by Azure AD B2C. |
| ![Screenshot of a zscaler logo](./medi) delivers policy-based, secure access to private applications and assets without the cost, hassle, or security risks of a VPN. |
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-flow-custom-attributes.md
In the [Add claims and customize user input using custom policies](configure-use
Your Azure AD B2C directory comes with a [built-in set of attributes](user-profile-attributes.md). However, you often need to create your own attributes to manage your specific scenario, for example when:
-* A customer-facing application needs to persist a **LoyaltyId** attribute.
+* A customer-facing application needs to persist a **loyaltyId** attribute.
* An identity provider has a unique user identifier, **uniqueUserGUID**, that must be persisted.
* A custom user journey needs to persist the state of the user, **migrationStatus**, for other logic to operate on.
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 05/11/2021 Last updated : 07/07/2021
For SCIM applications, the attribute name must follow the pattern shown in the example below.
These instructions are only applicable to SCIM-enabled applications. Applications such as ServiceNow and Salesforce are not integrated with Azure AD using SCIM, and therefore they don't require this specific namespace when adding a custom attribute.
-Custom attributes can't be referential attributes, multi-value or complex-typed attributes. Custom multi-value and complex-typed extension attributes are currently supported only for applications in the gallery.
+Custom attributes can't be referential attributes, multi-value or complex-typed attributes. Custom multi-value and complex-typed extension attributes are currently supported only for applications in the gallery. The custom extension schema header is omitted in the example below as it is not sent in requests from the Azure AD SCIM client. This issue will be fixed in the future and the header will be sent in the request.
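+
+For example, if the extension schema is named `CustomExtensionName`, a custom attribute called `tag` (an illustrative name) would be addressed in the attribute mappings as `urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User:tag`, that is, the extension schema URN followed by the attribute name.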
**Example representation of a user with an extension attribute:** ```json { "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User",
- "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
- "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User"],
+ "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"],
"userName":"bjensen", "id": "48af03ac28ad4fb88478", "externalId":"bjensen",
active-directory Howto Password Ban Bad On Premises Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-troubleshoot.md
The Test-AzureADPasswordProtectionDCAgentHealth cmdlet supports several health tests.
### Basic DC agent health tests
-The following tests can all be run individually and do not accept. A brief description
+The following tests can all be run individually and do not accept parameters. A brief description of each test is listed in the following table.
|DC agent health test|Description|
| --- | :-: |
active-directory How To Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-expression-builder.md
Title: 'How to use expression builder with Azure AD Connect cloud sync'
+ Title: 'Use the expression builder with Azure AD Connect cloud sync'
description: This article describes how to use the expression builder with cloud sync.
# Expression builder with cloud sync
-The expression builder is a new blade in Azure located under cloud sync. It helps in building complex expressions and allows you to test these expressions before you apply them to your cloud sync environment.
+The expression builder is a new function in Azure located under cloud sync. It helps you build complex expressions. You can use it to test these expressions before you apply them to your cloud sync environment.
## Use the expression builder
-To access the expression builder, use the following steps.
-
- 1. In the Azure portal, select **Azure Active Directory**
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
- 4. Under **Configuration**, select your configuration.
- 5. Under **Manage attributes**, select **Click to edit mappings**.
- 6. On the **Edit attribute mappings** blade, click **Add attribute mapping**.
- 7. Under **Mapping type**, select **Expression**.
- 8. Select **Try the expression builder (Preview)**.
- ![Use expression builder](media/how-to-expression-builder/expression-1.png)
+To access the expression builder:
+
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 1. Select **Azure AD Connect**.
+ 1. Select **Manage cloud sync**.
+ 1. Under **Configuration**, select your configuration.
+ 1. Under **Manage attributes**, select **Click to edit mappings**.
+ 1. On the **Edit attribute mappings** pane, select **Add attribute mapping**.
+ 1. Under **Mapping type**, select **Expression**.
+ 1. Select **Try the expression builder (Preview)**.
+
+ ![Screenshot that shows using expression builder.](media/how-to-expression-builder/expression-1.png)
## Build an expression
-This section allows you to use the drop-down to select from a list of supported functions. Then it provides additional fields for you to fill in, depending on the function selected. Once you select **Apply expression**, the syntax will appear in the **Expression input** box.
+In this section, you use the dropdown list to select from supported functions. Then you fill in more boxes, depending on the function selected. After you select **Apply expression**, the syntax appears in the **Expression input** box.
-For example, by selecting **Replace** from the drop-down, additional boxes are provided. The syntax for the function is displayed in the light blue box. The boxes that are displayed correspond to the syntax of the function you selected. Replace works differently depending on the parameters provided. For our example we will use:
+For example, by selecting **Replace** from the dropdown list, more boxes are provided. The syntax for the function is displayed in the light blue box. The boxes that are displayed correspond to the syntax of the function you selected. Replace works differently depending on the parameters provided.
-- When oldValue and replacementValue are provided:
- - Replaces all occurrences of oldValue in the source with replacementValue
+For this example, when **oldValue** and **replacementValue** are provided, all occurrences of **oldValue** are replaced in the source with **replacementValue**.
-For more information, see [Replace](reference-expressions.md#replace)
+For more information, see [Replace](reference-expressions.md#replace).
-The first thing we need to do is select the attribute that is the source for the replace function. In our example, we selected the **mail** attribute.
+The first thing you need to do is select the attribute that's the source for the replace function. In this example, the **mail** attribute is selected.
-Next, we fill in the value for oldValue. This oldValue will be **@fabrikam.com**. Finally, in the box for replacementValue, we will fill in the value **@contoso.com**.
-
-So our expression, basically says, replace the mail attribute on user objects that have a value of @fabrikam.com with the @contoso.com value. By clicking the **Add expression** button, we can see the syntax in the **Expression input**
+Next, find the box for **oldValue** and enter **@fabrikam.com**. Finally, in the box for **replacementValue**, fill in the value **@contoso.com**.
+The expression basically says, replace the mail attribute on user objects that have a value of @fabrikam.com with the @contoso.com value. When you select **Add expression**, you can see the syntax in the **Expression input** box.
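+
+For reference, the generated expression for this example should look similar to `Replace([mail], "@fabrikam.com", , , "@contoso.com", , )`, with empty placeholders for the optional parameters that aren't used. See [Replace](reference-expressions.md#replace) for the exact parameter order.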
>[!NOTE]
->Be sure to place the values in the boxes that would correspond with oldValue and replacementValue based on the syntax that occurs when you have selected Replace.
+>Be sure to place the values in the boxes that would correspond with **oldValue** and **replacementValue** based on the syntax that occurs when you've selected **Replace**.
-For more information on supported expressions, see [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md)
+For more information on supported expressions, see [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md).
### Information on expression builder input boxes
-Depending on which function you have selected, the boxes provided by expression builder, will accept multiple values. For example, the JOIN function will accept strings or the value that is associated with a given attribute. For example, we can use the value contained in the attribute value of [givenName] and join this with a string value of "@contoso.com" to create an email address.
+Depending on which function you selected, the boxes provided by the expression builder will accept multiple values. For example, the JOIN function will accept strings or the value that's associated with a given attribute. For example, we can use the value contained in the attribute value of **[givenName]** and join it with a string value of **@contoso.com** to create an email address.
- ![Input box values](media/how-to-expression-builder/expression-8.png)
+ ![Screenshot that shows input box values.](media/how-to-expression-builder/expression-8.png)
For more information on acceptable values and how to write expressions, see [Writing expressions for attribute mappings in Azure Active Directory](reference-expressions.md). ## Test an expression
-In this section, you can test your expressions. From the drop-down, select the **mail** attribute. Fill in the value with **@fabrikam.com** and now click **Test expression**.
+In this section, you can test your expressions. From the dropdown list, select the **mail** attribute. Fill in the value with **@fabrikam.com**, and select **Test expression**.
-You will see the value of **@contoso.com** displayed in the **View expression output** box.
+The value **@contoso.com** appears in the **View expression output** box.
- ![Test your expression](media/how-to-expression-builder/expression-4.png)
+ ![Screenshot that shows testing your expression.](media/how-to-expression-builder/expression-4.png)
## Deploy the expression
-Once you are satisfied with the expression, simply click the **Apply expression** button.
-![Add your expression](media/how-to-expression-builder/expression-5.png)
+After you're satisfied with the expression, select **Apply expression**.
+
+![Screenshot that shows adding your expression.](media/how-to-expression-builder/expression-5.png)
-This will add the expression to the agent configuration.
-![Agent configuration](media/how-to-expression-builder/expression-6.png)
+This action adds the expression to the agent configuration.
-## Setting a NULL value on an expression
-To set an attributes value to NULL. You can use an expression with the value of `""`. This will flow the NULL value to the target attribute.
+![Screenshot that shows agent configuration.](media/how-to-expression-builder/expression-6.png)
-![NULL value](media/how-to-expression-builder/expression-7.png)
+## Set a NULL value on an expression
+To set an attribute's value to NULL, use an expression with the value of `""`. This expression will flow the NULL value to the target attribute.
+![Screenshot that shows a NULL value.](media/how-to-expression-builder/expression-7.png)
## Next steps
active-directory How To Install Pshell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-install-pshell.md
Title: 'Install the Azure AD Connect cloud provisioning agent using a command-line interface (CLI) and PowerShell'
-description: Learn how to install the Azure AD Connect cloud provisioning agent using PowerShell cmdlets.
+description: Learn how to install the Azure AD Connect cloud provisioning agent by using PowerShell cmdlets.
-# Install the Azure AD Connect provisioning agent using a command-line interface (CLI) and PowerShell
-The following document will guide show you how to install the Azure AD Connect provisioning agent using PowerShell cmdlets.
+# Install the Azure AD Connect provisioning agent by using a CLI and PowerShell
+This article shows you how to install the Azure Active Directory (Azure AD) Connect provisioning agent by using PowerShell cmdlets.
>[!NOTE]
->This document deals with installing the provisioning agent using the command-line interface. For information on installing the Azure AD Connect provisioing agent using the wizard, see [Install the Azure AD Connect provisioning agent](how-to-install.md).
+>This article deals with installing the provisioning agent by using the command-line interface (CLI). For information on how to install the Azure AD Connect provisioning agent by using the wizard, see [Install the Azure AD Connect provisioning agent](how-to-install.md).
-## Prerequisite:
+## Prerequisite
+The Windows server must have TLS 1.2 enabled before you install the Azure AD Connect provisioning agent by using PowerShell cmdlets. To enable TLS 1.2, follow the steps in [Prerequisites for Azure AD Connect cloud sync](how-to-prerequisites.md#tls-requirements).
>[!IMPORTANT]
->The following installation instructions assume that all of the [Prerequisites](how-to-prerequisites.md) have been met.
->
-> The windows server needs to have TLS 1.2 enabled before you install the Azure AD Connect provisioning agent using PowerShell cmdlets. To enable TLS 1.2 you can use the steps found [here](how-to-prerequisites.md#tls-requirements).
-
-
-
-## Install the Azure AD Connect provisioning agent using PowerShell cmdlets
+>The following installation instructions assume that all the [prerequisites](how-to-prerequisites.md) were met.
+## Install the Azure AD Connect provisioning agent by using PowerShell cmdlets
1. Sign in to the Azure portal, and then go to **Azure Active Directory**.
- 2. In the left menu, select **Azure AD Connect**.
- 3. Select **Manage provisioning (preview)** > **Review all agents**.
- 4. Download the Azure AD Connect provisioning agent from the Azure portal to a locally.
-
- ![Download on-premises agent](media/how-to-install/install-9.png)</br>
- 5. For purposes of these instructions, the agent was downloaded to the following folder: "C:\ProvisioningSetup" folder.
- 6. Install ProvisioningAgent in quiet mode
-
- ```
- $installerProcess = Start-Process c:\temp\AADConnectProvisioningAgent.Installer.exe /quiet -NoNewWindow -PassThru
- $installerProcess.WaitForExit()
- ```
- 7. Import Provisioning Agent PS module
-
- ```
- Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.PowerShell.dll"
- ```
- 8. Connect to AzureAD using global administrator credentials, you can customize this section to fetch password from a secure store.
-
- ```
- $globalAdminPassword = ConvertTo-SecureString -String "Global admin password" -AsPlainText -Force
-
- $globalAdminCreds = New-Object System.Management.Automation.PSCredential -ArgumentList ("GlobalAdmin@contoso.onmicrosoft.com", $globalAdminPassword)
- ```
-
- Connect-AADCloudSyncAzureAD -Credential $globalAdminCreds
-
- 9. Add the gMSA account, provide credentials of the domain admin to create default gMSA account
+ 1. In the menu on the left, select **Azure AD Connect**.
+ 1. Select **Manage provisioning (preview)** > **Review all agents**.
+ 1. Download the Azure AD Connect provisioning agent from the Azure portal.
+
+ ![Screenshot that shows downloading the on-premises agent.](media/how-to-install/install-9.png)</br>
+
+ 1. For the purposes of these instructions, the agent was downloaded to the C:\ProvisioningSetup folder.
+ 1. Install ProvisioningAgent in quiet mode.
+
+ ```
+ $installerProcess = Start-Process c:\temp\AADConnectProvisioningAgent.Installer.exe /quiet -NoNewWindow -PassThru
+ $installerProcess.WaitForExit()
+ ```
+ 1. Import the Provisioning Agent PS module.
+
+ ```
+ Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.PowerShell.dll"
+ ```
+ 1. Connect to Azure AD by using global administrator credentials. You can customize this section to fetch a password from a secure store.
+
+ ```
+ $globalAdminPassword = ConvertTo-SecureString -String "Global admin password" -AsPlainText -Force
+
+ $globalAdminCreds = New-Object System.Management.Automation.PSCredential -ArgumentList ("GlobalAdmin@contoso.onmicrosoft.com", $globalAdminPassword)
+
+ Connect-AADCloudSyncAzureAD -Credential $globalAdminCreds
+ ```
+ 1. Add the gMSA account, and provide credentials of the domain admin to create the default gMSA account.
- ```
- $domainAdminPassword = ConvertTo-SecureString -String "Domain admin password" -AsPlainText -Force
-
- $domainAdminCreds = New-Object System.Management.Automation.PSCredential -ArgumentList ("DomainName\DomainAdminAccountName", $domainAdminPassword)
-
- Add-AADCloudSyncGMSA -Credential $domainAdminCreds
- ```
- 10. Or use the above cmdlet as below to provide a pre-created gMSA account
-
+ ```
+ $domainAdminPassword = ConvertTo-SecureString -String "Domain admin password" -AsPlainText -Force
+
+ $domainAdminCreds = New-Object System.Management.Automation.PSCredential -ArgumentList ("DomainName\DomainAdminAccountName", $domainAdminPassword)
+
+ Add-AADCloudSyncGMSA -Credential $domainAdminCreds
+ ```
+ 1. Or use the preceding cmdlet to provide a pre-created gMSA account.
- ```
- Add-AADCloudSyncGMSA -CustomGMSAName preCreatedGMSAName$
- ```
- 11. Add domain
-
- ```
- $contosoDomainAdminPassword = ConvertTo-SecureString -String "Domain admin password" -AsPlainText -Force
-
- $contosoDomainAdminCreds = New-Object System.Management.Automation.PSCredential -ArgumentList ("DomainName\DomainAdminAccountName", $contosoDomainAdminPassword)
-
- Add-AADCloudSyncADDomain -DomainName contoso.com -Credential $contosoDomainAdminCreds
- ```
- 12. Or use the above cmdlet as below to configure preferred domain controllers
-
- ```
- $preferredDCs = @("PreferredDC1", "PreferredDC2", "PreferredDC3")
-
- Add-AADCloudSyncADDomain -DomainName contoso.com -Credential $contosoDomainAdminCreds -PreferredDomainControllers $preferredDCs
- ```
- 13. Repeat the previous step to add more domains, please provide the account names and domain names of the respective domains
- 14. Restart the service
- ```
- Restart-Service -Name AADConnectProvisioningAgent
- ```
- 15. Go to the Azure portal to create the cloud sync configuration.
+ ```
+ Add-AADCloudSyncGMSA -CustomGMSAName preCreatedGMSAName$
+ ```
+ 1. Add the domain.
+
+ ```
+ $contosoDomainAdminPassword = ConvertTo-SecureString -String "Domain admin password" -AsPlainText -Force
+
+ $contosoDomainAdminCreds = New-Object System.Management.Automation.PSCredential -ArgumentList ("DomainName\DomainAdminAccountName", $contosoDomainAdminPassword)
+
+ Add-AADCloudSyncADDomain -DomainName contoso.com -Credential $contosoDomainAdminCreds
+ ```
+ 1. Or use the preceding cmdlet to configure preferred domain controllers.
+
+ ```
+ $preferredDCs = @("PreferredDC1", "PreferredDC2", "PreferredDC3")
+
+ Add-AADCloudSyncADDomain -DomainName contoso.com -Credential $contosoDomainAdminCreds -PreferredDomainControllers $preferredDCs
+ ```
+ 1. Repeat the previous step to add more domains. Provide the account names and domain names of the respective domains.
+
+ 1. Restart the service.
+
+ ```
+ Restart-Service -Name AADConnectProvisioningAgent
+ ```
+ 1. Go to the Azure portal to create the cloud sync configuration.
## Provisioning agent gMSA PowerShell cmdlets
-Now that you have installed the agent, you can apply more granular permissions to the gMSA. See [Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets](how-to-gmsa-cmdlets.md) for information and step-by-step instructions on configuring the permissions.
+Now that you've installed the agent, you can apply more granular permissions to the gMSA. For information and step-by-step instructions on how to configure the permissions, see [Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets](how-to-gmsa-cmdlets.md).
## Next steps
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-install.md
# Install the Azure AD Connect provisioning agent
-This document walks you through the installation process for the Azure Active Directory (Azure AD) Connect provisioning agent and how to initially configure it in the Azure portal.
+This article walks you through the installation process for the Azure Active Directory (Azure AD) Connect provisioning agent and how to initially configure it in the Azure portal.
>[!IMPORTANT]
->The following installation instructions assume that all of the [Prerequisites](how-to-prerequisites.md) have been met.
-
-Installing and configuring the Azure AD Connect cloud sync is accomplished in the following steps:
-
-- [Group Managed Service Accounts](#group-managed-service-accounts)
-- [Install the agent](#install-the-agent)
-- [Verify agent installation](#verify-agent-installation)
+>The following installation instructions assume that all the [prerequisites](how-to-prerequisites.md) were met.
>[!NOTE]
->This document deals with installing the provisioning agent using the wizard. For information on installing the Azure AD Connect provisioing agent using a command-line interface (CLI), see [Install the Azure AD Connect provisioning agent using a command-line interface (CLI) and powershell](how-to-install-pshell.md).
+>This article deals with installing the provisioning agent by using the wizard. For information on installing the Azure AD Connect provisioning agent by using a command-line interface (CLI), see [Install the Azure AD Connect provisioning agent by using a CLI and PowerShell](how-to-install-pshell.md).
## Group Managed Service Accounts
-A group Managed Service Account is a managed domain account that provides automatic password management, simplified service principal name (SPN) management,the ability to delegate the management to other administrators, and also extends this functionality over multiple servers. Azure AD Connect Cloud Sync supports and recommends the use of a group Managed Service Account for running the agent. For more information on a gMSA, see [Group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview)
--
-### Upgrading an existing agent to use the gMSA account
-To upgrade an existing agent to use the gMSA account created during installation, simply update the agent service to the latest version by running the AADConnectProvisioningAgent.msi. This will upgrade the service to the latest version. Now run through the installation wizard again and provide the credentials to create the account when prompted.
--
+A group Managed Service Account (gMSA) is a managed domain account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators. It also extends this functionality over multiple servers. Azure AD Connect cloud sync supports and recommends the use of a group Managed Service Account for running the agent. For more information on a group Managed Service Account, see [Group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview).
+### Upgrade an existing agent to use the gMSA
+To upgrade an existing agent to use the group Managed Service Account created during installation, update the agent service to the latest version by running AADConnectProvisioningAgent.msi. Now run through the installation wizard again and provide the credentials to create the account when prompted.
## Install the agent
-To install the agent, follow these steps.
+
+To install the agent:
1. Sign in to the server you'll use with enterprise admin permissions.
- 2. Sign in to the Azure portal, and then go to **Azure Active Directory**.
- 3. In the left menu, select **Azure AD Connect**.
- 4. Select **Manage cloud sync** > **Review all agents**.
- 5. Download the Azure AD Connect provisioning agent from the Azure portal.
- ![Download on-premises agent](media/how-to-install/install-9.png)</br>
- 6. Accept the terms and click download.
- 7. Run the Azure AD Connect provisioning installer AADConnectProvisioningAgentSetup.msi.
- 8. On the **Microsoft Azure AD Connect Provisioning Agent Package** screen, accept the licensing terms and select **Install**.
- ![Microsoft Azure AD Connect Provisioning Agent Package screen](media/how-to-install/install-1.png)</br>
- 9. After this operation finishes, the configuration wizard starts. Sign in with your Azure AD global administrator account.
- 10. On the **Configure Service Account screen** select either **create gMSA** or **Use custom gMSA**. If you allow the agent to create the account it will be named provAgentgMSA$. If you specify **Use custom gMSA** you will be prompted to provide this account.
- 11. Enter the domain admin credentials to create the group Managed Service account that will be used to run the agent service. Click **Next**.
- ![Create gMSA](media/how-to-install/install-12.png)</br>
- 12. On the **Connect Active Directory** screen, select **Add Directory**. Then sign in with your Active Directory administrator account. This operation adds your on-premises directory.
- 13. Optionally, you can manage the preference of domain controllers the agent will use by selecting **Select domain controller priority** and ordering the list of domain controllers. Click **OK**.
- ![Order domain controllers](media/how-to-install/install-2a.png)</br>
- 14. Select **Next**.
- ![Connect Active Directory screen](media/how-to-install/install-3a.png)</br>
- 15. On the **Agent Installation** screen confirm settings and the account that will be created and click **Confirm**.
- ![Confirm settings](media/how-to-install/install-11.png)</br>
- 16. After this operation finishes, you should see **Your agent installation is complete.** Select **Exit**.
- ![Configuration complete screen](media/how-to-install/install-4a.png)</br>
- 17. If you still see the initial **Microsoft Azure AD Connect Provisioning Agent Package** screen, select **Close**.
+ 1. Sign in to the Azure portal, and then go to **Azure Active Directory**.
+ 1. On the menu on the left, select **Azure AD Connect**.
+ 1. Select **Manage cloud sync** > **Review all agents**.
+ 1. Download the Azure AD Connect provisioning agent from the Azure portal.
+
+ ![Screenshot that shows Download on-premises agent.](media/how-to-install/install-9.png)</br>
+ 1. Accept the terms and select **Download**.
+ 1. Run the Azure AD Connect provisioning installer AADConnectProvisioningAgentSetup.msi.
+ 1. On the **Microsoft Azure AD Connect Provisioning Agent Package** screen, accept the licensing terms and select **Install**.
+
+ ![Screenshot that shows the Microsoft Azure AD Connect Provisioning Agent Package screen.](media/how-to-install/install-1.png)</br>
+ 1. After this operation finishes, the configuration wizard starts. Sign in with your Azure AD global administrator account.
+ 1. On the **Configure Service Account** screen, select either **Create gMSA** or **Use custom gMSA**. If you allow the agent to create the account, it will be named provAgentgMSA$. If you specify **Use custom gMSA**, you're prompted to provide this account.
+ 1. Enter the domain admin credentials to create the group Managed Service account that will be used to run the agent service. Select **Next**.
+
+ ![Screenshot that shows the Create gMSA option.](media/how-to-install/install-12.png)</br>
+ 1. On the **Connect Active Directory** screen, select **Add Directory**. Then sign in with your Active Directory administrator account. This operation adds your on-premises directory.
+ 1. Optionally, you can manage the preference of domain controllers the agent will use by selecting the **Select domain controller priority** checkbox and ordering the list of domain controllers. Select **OK**.
+
+ ![Screenshot that shows ordering the domain controllers.](media/how-to-install/install-2a.png)</br>
+ 1. Select **Next**.
+
+ ![Screenshot that shows the Connect Active Directory screen.](media/how-to-install/install-3a.png)</br>
+ 1. On the **Agent installation** screen, confirm settings and the account that will be created and select **Confirm**.
+
+ ![Screenshot that shows the Confirm settings.](media/how-to-install/install-11.png)</br>
+ 1. After this operation finishes, you should see **Your agent installation is complete.** Select **Exit**.
+
+ ![Screenshot that shows the Configuration complete screen.](media/how-to-install/install-4a.png)</br>
+ 1. If you still see the initial **Microsoft Azure AD Connect Provisioning Agent Package** screen, select **Close**.
## Verify agent installation

Agent verification occurs in the Azure portal and on the local server that's running the agent.

### Azure portal agent verification
-To verify the agent is being seen by Azure, follow these steps.
+To verify the agent is being seen by Azure:
1. Sign in to the Azure portal.
- 2. On the left, select **Azure Active Directory** > **Azure AD Connect**. In the center, select **Manage cloud sync**.
+ 1. On the left, select **Azure Active Directory** > **Azure AD Connect**. In the center, select **Manage cloud sync**.
- ![Azure portal](media/how-to-install/install-6.png)</br>
+ ![Screenshot that shows the Azure portal.](media/how-to-install/install-6.png)</br>
- 3. On the **Azure AD Connect cloud sync** screen, select **Review all agents**.
+ 1. On the **Azure AD Connect cloud sync** screen, select **Review all agents**.
- ![Review all agents option](media/how-to-install/install-7.png)</br>
+ ![Screenshot that shows the Review all agents option.](media/how-to-install/install-7.png)</br>
- 4. On the **On-premises provisioning agents** screen, you see the agents you installed. Verify that the agent in question is there and is marked *active*.
-
- ![On-premises provisioning agents screen](media/how-to-install/verify-1.png)</br>
-
+ 1. On the **On-premises provisioning agents** screen, you see the agents you installed. Verify that the agent in question is there and is marked *active*.
+ ![Screenshot that shows On-premises provisioning agents screen.](media/how-to-install/verify-1.png)</br>
### On the local server
-To verify that the agent is running, follow these steps.
+To verify that the agent is running:
-1. Sign in to the server with an administrator account.
-1. Open **Services** by either navigating to it or by going to **Start** > **Run** > **Services.msc**.
-1. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater** and **Microsoft Azure AD Connect Provisioning Agent** are there and their status is *Running*.
+1. Sign in to the server with an administrator account.
+1. Open **Services** by going to it or by selecting **Start** > **Run** > **Services.msc**.
+1. Under **Services**, make sure **Microsoft Azure AD Connect Agent Updater** and **Microsoft Azure AD Connect Provisioning Agent** are there and their status is *Running*. A PowerShell check for this is sketched after the screenshot.
- ![Services screen](media/how-to-install/troubleshoot-1.png)
+ ![Screenshot that shows the Services screen.](media/how-to-install/troubleshoot-1.png)
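+
+ A quick equivalent check from PowerShell, matching on the display names above (this isn't part of the article's steps):
+
+ ```powershell
+ # Both agent services should report a Status of Running.
+ Get-Service -DisplayName "Microsoft Azure AD Connect*" | Select-Object DisplayName, Status
+ ```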
>[!IMPORTANT]
->The agent has been installed but it must be configured and enabled before it will start synchronizing users. To configure a new agent, see [Create a new configuration for Azure AD Connect cloud sync](how-to-configure.md).
---
+>The agent has been installed, but it must be configured and enabled before it will start synchronizing users. To configure a new agent, see [Create a new configuration for Azure AD Connect cloud sync](how-to-configure.md).
## Next steps
active-directory How To Map Usertype https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-map-usertype.md
Title: 'How to use map UserType with Azure AD Connect cloud sync'
-description: This article describes how to use map the UserType attribute with cloud sync.
+ Title: 'Use map UserType with Azure AD Connect cloud sync'
+description: This article describes how to map the UserType attribute with cloud sync.
# Map UserType with cloud sync
-Cloud sync supports synchronization of the UserType attribute for User objects.
+Cloud sync supports synchronization of the **UserType** attribute for User objects.
-By default, the UserType attribute is not enabled for synchronization because there is no corresponding UserType attribute in on-premises Active Directory. You must manually add this mapping for synchronization. Before doing this, you must take note of the following behavior enforced by Azure AD:
+By default, the **UserType** attribute isn't enabled for synchronization because there's no corresponding **UserType** attribute in on-premises Active Directory. You must manually add this mapping for synchronization. Before you do this step, you must take note of the following behavior enforced by Azure Active Directory (Azure AD):
-- Azure AD only accepts two values for the UserType attribute: Member and Guest.
-- If the UserType attribute is not mapped in cloud sync, Azure AD users created through directory synchronization would have the UserType attribute set to Member.
+- Azure AD only accepts two values for the **UserType** attribute: Member and Guest.
+- If the **UserType** attribute isn't mapped in cloud sync, Azure AD users created through directory synchronization would have the **UserType** attribute set to Member.
-Before adding a mapping for the UserType attribute, you must first decide how the attribute is derived from on-premises Active Directory. The following are the most common approaches:
+Before you add a mapping for the **UserType** attribute, you must first decide how the attribute is derived from on-premises Active Directory. The following approaches are the most common:
+ - Designate an unused on-premises Active Directory attribute, such as extensionAttribute1, to be used as the source attribute. The designated on-premises Active Directory attribute should be of the type string, be single-valued, and contain the value Member or Guest.
+ - If you choose this approach, you must ensure that the designated attribute is populated with the correct value for all existing user objects in on-premises Active Directory that are synchronized to Azure AD before you enable synchronization of the **UserType** attribute.
-## To add the UserType mapping
-To add the UserType mapping, use the following steps.
+## Add the UserType mapping
+To add the **UserType** mapping:
- 1. In the Azure portal, select **Azure Active Directory**
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
- 4. Under **Configuration**, select your configuration.
- 5. Under **Manage attributes**, select **Click to edit mappings**.
- ![Edit the attribute mappings](media/how-to-map-usertype/usertype-1.png)
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 1. Select **Azure AD Connect**.
+ 1. Select **Manage cloud sync**.
+ 1. Under **Configuration**, select your configuration.
+ 1. Under **Manage attributes**, select **Click to edit mappings**.
+
+ ![Screenshot that shows editing the attribute mappings.](media/how-to-map-usertype/usertype-1.png)
- 6. Click **Add attribute mapping**.
- ![Add a new attribute mapping](media/how-to-map-usertype/usertype-2.png)
-7. Select the mapping type. You can do the mapping in one of three ways:
- ![Add usertype](media/how-to-map-usertype/usertype-3.png)
-8. In the Target attribute dropdown, select UserType.
-9. Click the **Apply** button at the bottom of the page. This will create a mapping for the Azure AD UserType attribute.
+ 1. Select **Add attribute mapping**.
+
+ ![Screenshot that shows adding a new attribute mapping.](media/how-to-map-usertype/usertype-2.png)
+1. Select the mapping type. You can do the mapping in one of three ways:
+ - A direct mapping, for example, from an Active Directory attribute
+ - An expression, such as IIF(InStr([userPrincipalName], "@partners") > 0,"Guest","Member")
+ - A constant, for example, make all user objects as Guest
+
+ ![Screenshot that shows adding a UserType attribute.](media/how-to-map-usertype/usertype-3.png)
+
+1. In the **Target attribute** dropdown box, select **UserType**.
+1. Select **Apply** at the bottom of the page to create a mapping for the Azure AD **UserType** attribute.
## Next steps
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 05/18/2021 Last updated : 07/08/2021
This setting works with all browsers. However, to satisfy a device policy, like a compliant device requirement, the following operating systems and browsers are supported:
| Windows Server 2008 R2 | Internet Explorer |
| macOS | Chrome, Safari |
+These browsers support device authentication, allowing the device to be identified and validated against a policy. The device check fails if the browser is running in private mode or if cookies are disabled.
+
> [!NOTE]
> Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
> Safari is supported for device-based Conditional Access, but it can not satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements.
For Chrome support in **Windows 8.1 and 7**, create the following registry key:
- Type REG_SZ (String)
- Data {"pattern":"https://device.login.microsoftonline.com","filter":{"ISSUER":{"CN":"MS-Organization-Access"}}}
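
The registry key path and value name are truncated in this excerpt, so the sketch below uses placeholder values; substitute the key path and value name from the full article before running it. It shows one way to create the REG_SZ value from an elevated PowerShell session.

```powershell
# Placeholder key path and value name (assumptions) -- replace with the values from the full article.
$keyPath   = "HKLM:\SOFTWARE\Policies\Google\Chrome\AutoSelectCertificateForUrls"
$valueName = "1"
$valueData = '{"pattern":"https://device.login.microsoftonline.com","filter":{"ISSUER":{"CN":"MS-Organization-Access"}}}'

# Create the key if it doesn't exist, then write the string value.
New-Item -Path $keyPath -Force | Out-Null
New-ItemProperty -Path $keyPath -Name $valueName -PropertyType String -Value $valueData -Force | Out-Null
```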
-These browsers support device authentication, allowing the device to be identified and validated against a policy. The device check fails if the browser is running in private mode.
-
### Supported mobile applications and desktop clients

Organizations can select **Mobile apps and desktop clients** as client app.
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-reviews-create.md
This setting determines how often access reviews will occur.
1. Set the **Duration** to define how many days each review of the recurring series will be open for input from reviewers. For example, you might schedule an annual review that starts on January 1st and is open for review for 30 days so that reviewers have until the end of the month to respond.
-1. Next to **Reviewers**, select **Self-review** if you want users to perform their own access review or select **Specific reviewer(s)** if you want to designate a reviewer. You can also select **Manager** if you want to designate the reviewee's manager to be the reviewer. If you select this option, you need to add a **fallback** to forward the review to in case the manager cannot be found in the system.
+1. Next to **Reviewers**, select **Self-review** if you want users to perform their own access review or select **Specific reviewer(s)** if you want to designate a reviewer. You can also select **Manager** if you want to designate the reviewee's manager to be the reviewer. If you select this option, you need to add a **fallback** to forward the review to in case the manager cannot be found in the system.
+1. If you selected **Specific reviewer(s)**, specify which users will do the access review:
![Select Add reviewers](./media/entitlement-management-access-reviews/access-reviews-add-reviewer.png)
-1. If you selected **Specific reviewer(s)**, specify which users will do the access review:
1. Select **Add reviewers**.
1. In the **Select reviewers** pane, search for and select the user(s) you want to be a reviewer.
1. When you've selected your reviewer(s), click the **Select** button.

![Specify the reviewers](./media/entitlement-management-access-reviews/access-reviews-select-reviewer.png)
-1. If you selected **Manager (Preview)**, specify the fallback reviewer:
+1. If you selected **Manager**, specify the fallback reviewer:
1. Select **Add fallback reviewers**.
1. In the Select fallback reviewers pane, search for and select the user(s) you want to be fallback reviewer(s) for the reviewee's manager.
1. When you've selected your fallback reviewer(s), click the **Select** button.
- ![Add the fallback reviewers](./media/entitlement-management-access-reviews/access-reviews-add-fallback-manager.png)
+ ![Add the fallback reviewers](./media/entitlement-management-access-reviews/access-reviews-select-manager.png)
1. Click **Review + Create** if you are creating a new access package or **Update** if you are editing an access package, at the bottom of the page.
+
> [!NOTE]
-> In Azure AD Entitlement Management, the result of an access package review is always auto-applied to the users assigned to the package, according to the setting selected in **If reviewers don't respond**. When the review setting of **If reviewers don't respond** is set to **No change**, this is equivalent to the system approving continued access for the users being reviewed.
+> In Azure AD Entitlement Management, the result of an access package review is always auto-applied to the users assigned to the package, according to the setting selected in **If reviewers don't respond**. When the review setting of **If reviewers don't respond** is set to **No change**, this is equivalent to the system approving continued access for the users being reviewed.
## View the status of the access review
active-directory Access Panel Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/access-panel-collections.md
Title: Create collections for My Apps portals in Azure Active Directory | Microsoft Docs description: Use My Apps collections to Customize My Apps pages for a simpler My Apps experience for your end users. Organize applications into groups with separate tabs. -+ Last updated 02/10/2020-+
active-directory Access Panel Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/access-panel-manage-self-service-access.md
Title: How to use self-service application access in Azure AD description: Enable self-service so users can find apps in Azure AD -+ Last updated 07/11/2017-+
active-directory Add Application Portal Assign Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-assign-users.md
Title: 'Quickstart: Assign users to an app that uses Azure Active Directory as an identity provider' description: This quickstart walks through the process of allowing users to use an app that you have setup to use Azure AD as an identity provider. -+ Last updated 09/01/2020-+ # Quickstart: Assign users to an app that is using Azure AD as an identity provider
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-configure.md
Title: 'Quickstart: Configure properties for an application in your Azure Active Directory (Azure AD) tenant' description: This quickstart uses the Azure portal to configure an application that has been registered with your Azure Active Directory (Azure AD) tenant. -+ Last updated 10/29/2019-+ # Quickstart: Configure properties for an application in your Azure Active Directory (Azure AD) tenant
active-directory Add Application Portal Setup Oidc Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
Title: 'Quickstart: Set up OIDC-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant' description: This quickstart walks through the process of setting up OIDC-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant. -+ Last updated 07/01/2020-+ # Quickstart: Set up OIDC-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
Title: 'Quickstart: Set up SAML-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant' description: This quickstart walks through the process of setting up SAML-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant. -+ Last updated 07/01/2020-+ # Quickstart: Set up SAML-based single sign-on (SSO) for an application in your Azure Active Directory (Azure AD) tenant
active-directory Add Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/add-application-portal.md
Title: 'Quickstart: Add an application to your Azure Active Directory (Azure AD) tenant' description: This quickstart uses the Azure portal to add a gallery application to your Azure Active Directory (Azure AD) tenant. -+ Last updated 06/23/2021-+ # Quickstart: Add an application to your Azure Active Directory (Azure AD) tenant
To add an application to your Azure AD tenant, you need:
To add an application to your Azure AD tenant:
-1. In the [Azure portal](https://portal.azure.com), on the left navigation panel, select **Azure Active Directory**.
-2. In the **Azure Active Directory** pane, select **Enterprise applications**. The **All applications** pane opens and displays a random sample of the applications in your Azure AD tenant.
-3. In the **Enterprise applications** pane, select **New application**.
- ![Select New application to add a gallery app to your tenant](media/add-application-portal/new-application.png)
-4. Switch to the gallery experience: In the banner at the top of the **Add an application page**, select the link that says **Click here to try out the new and improved app gallery**.
-5. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning.
+1. In the [Azure portal](https://portal.azure.com), in the **Azure services** pane select **Enterprise applications**. The **All applications** pane opens and displays a random sample of the applications in your Azure AD tenant.
+2. In the **Enterprise applications** pane, select **New application**.
+3. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning.
+4. Switch back to the legacy app gallery experience: In the banner at the top of the **Add an application page**, select the link that says **You're in the new and improved app gallery experience. Click here to switch back to the legacy app gallery experience**.
![Search for an app by name or category](media/add-application-portal/browse-gallery.png)
-6. You can browse the gallery for the application you want to add, or search for the application by entering its name in the search box. Then select the application from the results.
-7. The next step depends on the way the developer of the application implemented single sign-on (SSO). Single sign-on can be implemented by app developers in four ways. The four ways are SAML, OpenID Connect, Password, and Linked. When you add an app, you can choose to filter and see only apps using a particular SSO implementation as shown in the screenshot. For example, a popular standard to implement SSO is called Security Assertion Markup Language (SAML). Another standard that is popular is called OpenId Connect (OIDC). The way you configure SSO with these standards is different so take note of the type of SSO that is implemented by the app that you are adding.
-
- :::image type="content" source="media/add-application-portal/sso-types.png" alt-text="Screenshot shows the SSO types selector." lightbox="media/add-application-portal/sso-types.png":::
+5. You can browse the gallery for the application you want to add, or search for the application by entering its name in the search box. Then select the application from the results.
+6. The next step depends on how the developer of the application implemented single sign-on (SSO). App developers can implement SSO in four ways: SAML, OpenID Connect, password, and linked. When you add an app, you can filter to see only apps that use a particular SSO implementation, as shown in the screenshot. For example, Security Assertion Markup Language (SAML) and OpenID Connect (OIDC) are two popular standards for implementing SSO. The way you configure SSO differs between these standards, so take note of the type of SSO implemented by the app that you're adding.
- If the developer of the app used the **OIDC standard** for SSO then select **Sign Up**. A setup page appears. Next, go to the quickstart on setting up OIDC-based single sign-on. :::image type="content" source="media/add-application-portal/sign-up-oidc-sso.png" alt-text="Screenshot shows adding an OIDC-based SSO app.":::
To add an application to your Azure AD tenant:
- If the developer of the app used the **SAML standard** for SSO, then select **Create**. A getting started page appears with the options for configuring the application for your organization. In the form, you can edit the name of the application to match the needs of your organization. Next, go to the quickstart on setting up SAML-based single sign-on. :::image type="content" source="media/add-application-portal/create-application.png" alt-text="Screenshot shows adding a SAML-based SSO app."::: - > [!IMPORTANT] > There are some key differences between SAML-based and OIDC-based SSO implementations. With SAML-based apps, you can add multiple instances of the same app (for example, GitHub1, GitHub2, and so on). For OIDC-based apps, you can add only one instance of an app. If you have already added an OIDC-based app and try to add the same app again, providing consent twice, it will not be added again in the tenant.
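If you prefer to script this step rather than use the portal, a minimal sketch, assuming the gallery app is available through the Microsoft Graph `applicationTemplates` endpoint and using placeholder names and IDs, could look like the following:

```azurecli-interactive
# Sketch only: look up the gallery template for an app by display name (placeholder name).
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/applicationTemplates?\$filter=displayName eq 'Contoso Sample App'" \
  --query "value[].{id:id, displayName:displayName}" --output table

# Sketch only: instantiate the template, which adds the app to the tenant (placeholder template ID).
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/applicationTemplates/<template-id>/instantiate" \
  --headers "Content-Type=application/json" \
  --body '{"displayName": "Contoso Sample App"}'
```

Whichever route you take, the SSO type noted above still determines how you configure the app afterward.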
If you're not going to continue with the quickstart series, then consider deleti
Advance to the next article to learn how to configure an app. > [!div class="nextstepaction"]
-> [Configure an app](add-application-portal-configure.md)
+> [Configure an app](add-application-portal-configure.md)
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/app-management-powershell-samples.md
Title: PowerShell samples for Azure Active Directory Application Management description: These PowerShell samples are used for apps you manage in your Azure Active Directory tenant. You can use these sample scripts to find expiration information about secrets and certificates. -+ Last updated 02/18/2021-+
active-directory Application Management Certs Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-management-certs-faq.md
Title: Azure Active Directory Application Management certificates frequently asked questions description: Learn answers to frequently asked questions (FAQ) about managing certificates for apps using Azure Active Directory as an Identity Provider (IdP). -+ Last updated 03/19/2021-+
active-directory Application Management Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-management-fundamentals.md
Title: 'Application management: Best practices and recommendations | Microsoft D
description: Learn best practices and recommendations for managing applications in Azure Active Directory. Learn about using automatic provisioning and publishing on-premises apps with Application Proxy. -+ ms.assetid:
na
Last updated 11/13/2019 -+
active-directory Application Sign In Other Problem Access Panel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
Title: Troubleshoot problems signing in to an application from Azure AD My Apps description: Troubleshoot problems signing in to an application from Azure AD My Apps -+ Last updated 07/11/2017-+
active-directory Application Sign In Problem Application Error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
Title: Error message appears on app page after you sign in | Microsoft Docs description: How to resolve issues with Azure AD sign in when the app returns an error message. -+ Last updated 07/11/2017-+
active-directory Application Sign In Problem First Party Microsoft https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft.md
Title: Problems signing in to a Microsoft application | Microsoft Docs description: Troubleshoot common problems faced when signing in to first-party Microsoft Applications using Azure AD (like Microsoft 365). -+ Last updated 09/10/2018-+
active-directory Application Sign In Unexpected User Consent Error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md
Title: Unexpected error when performing consent to an application | Microsoft Docs description: Discusses errors that can occur during the process of consenting to an application and what you can do about them -+ Last updated 07/11/2017-+
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Title: Unexpected consent prompt when signing in to an application | Microsoft Docs description: How to troubleshoot when a user sees a consent prompt for an application you have integrated with Azure AD that you did not expect -+ Last updated 07/11/2017-+
active-directory Application Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-types.md
Title: Viewing apps using your Azure Active Directory tenant for identity management description: Understand how to view all applications using your Azure Active Directory tenant for identity management. -+ Last updated 01/07/2021-+ # Viewing apps using your Azure AD tenant for identity management
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Title: Manage user assignment for an app in Azure Active Directory description: Learn how to assign and unassign users, and groups, for an app using Azure Active Directory for identity management. -+ Last updated 02/21/2020-+
active-directory Certificate Signing Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/certificate-signing-options.md
Title: Advanced SAML token certificate signing options for Azure AD apps description: Learn how to use advanced certificate signing options in the SAML token for pre-integrated apps in Azure Active Directory -+ Last updated 03/25/2019-+
active-directory Cloud App Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/cloud-app-security.md
Title: App visibility and control with Microsoft Cloud App Security description: Learn ways to identify app risk levels, stop breaches and leaks in real time, and use app connectors to take advantage of provider APIs for visibility and governance. -+ Last updated 02/03/2020-+
active-directory Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/common-scenarios.md
Title: Common application management scenarios for Azure Active Directory | Microsoft Docs description: Centralize application management with Azure AD-+ Last updated 03/02/2019-+
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Title: Configure the admin consent workflow - Azure Active Directory | Microsoft Docs description: Learn how to configure a way for end users to request access to applications that require admin consent. -+ Last updated 10/29/2019-+
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Title: Configure sign-in auto-acceleration using Home Realm Discovery description: Learn how to configure Home Realm Discovery policy for Azure Active Directory authentication for federated users, including auto-acceleration and domain hints. -+ Last updated 02/12/2021-+
active-directory Configure Linked Sign On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-linked-sign-on.md
Title: Understand linked sign-on in Azure Active Directory description: Understand linked sign-on in Azure Active Directory. -+ Last updated 07/30/2020-+
active-directory Configure Oidc Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-oidc-single-sign-on.md
Title: Understand OIDC-based single sign-on (SSO) for apps in Azure Active Directory description: Understand OIDC-based single sign-on (SSO) for apps in Azure Active Directory. -+ Last updated 10/19/2020-+
active-directory Configure Password Single Sign On Non Gallery Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
Title: Understand password-based single sign-on (SSO) for apps in Azure Active Directory description: Understand password-based single sign-on (SSO) for apps in Azure Active Directory -+ Last updated 07/29/2020-+ # Understand password-based single sign-on
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-permission-classifications.md
Title: Configure permission classifications with Azure AD description: Learn how to manage delegated permission classifications. -+ Last updated 06/01/2020-+
active-directory Configure Saml Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-saml-single-sign-on.md
Title: Understand SAML-based single sign-on (SSO) for apps in Azure Active Directory description: Understand SAML-based single sign-on (SSO) for apps in Azure Active Directory -+ Last updated 07/28/2020-+
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-user-consent-groups.md
Title: Configure group owner consent to apps accessing group data using Azure AD description: Learn how to manage whether group and team owners can consent to applications that will have access to the group or team's data. -+ Last updated 05/19/2020-+
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-user-consent.md
Title: Configure how end-users consent to applications using Azure AD description: Learn how to manage how and when users can consent to applications that will have access to your organization's data. -+ Last updated 06/01/2021-+
active-directory Debug Saml Sso Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/debug-saml-sso-issues.md
Title: Debug SAML-based single sign-on - Azure Active Directory description: Debug SAML-based single sign-on to applications in Azure Active Directory. --++
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/delete-application-portal.md
Title: 'Quickstart: Delete an application from your Azure Active Directory (Azure AD) tenant' description: This quickstart uses the Azure portal to delete an application from your Azure Active Directory (Azure AD) tenant. -+ Last updated 1/5/2021-+ # Quickstart: Delete an application from your Azure Active Directory (Azure AD) tenant
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Title: Disable user sign-ins for an enterprise app in Azure AD description: How to disable an enterprise application so that no users may sign in to it in Azure Active Directory -+ Last updated 04/12/2019-+
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/end-user-experiences.md
Title: End-user experiences for applications - Azure Active Directory description: Azure Active Directory (Azure AD) provides several customizable ways to deploy applications to end users in your organization. -+ Last updated 09/27/2019-+
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Title: Azure AD secure hybrid access with F5 | Microsoft Docs description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure Hybrid Access -+ Last updated 11/12/2020-+
active-directory F5 Aad Password Less Vpn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
Title: Azure AD secure hybrid access with F5 VPN| Microsoft Docs description: Tutorial for Azure Active Directory Single Sign-on (SSO) integration with F5 BIG-IP for Password-less VPN -+ Last updated 10/12/2020-+
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
Title: Azure AD secure hybrid access with F5 deployment guide | Microsoft Docs description: Tutorial to deploy F5 BIG-IP Virtual Edition (VE) VM in Azure IaaS for Secure hybrid access -+ Last updated 10/12/2020-+
active-directory Get It Now Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/get-it-now-azure-marketplace.md
Title: 'Add an app from the Azure Marketplace' description: This article acts as a landing page from the Get It Now button on the Azure Marketplace. -+ Last updated 07/16/2020-+
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/grant-admin-consent.md
Title: Grant tenant-wide admin consent to an application - Azure AD description: Learn how to grant tenant-wide consent to an application so that end-users are not prompted for consent when signing in to an application. -+ Last updated 11/04/2019-+
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/hide-application-from-user-portal.md
Title: Hide an Enterprise application from user's experience in Azure AD description: How to hide an Enterprise application from user's experience in Azure Active Directory access panels or Microsoft 365 launchers. -+ Last updated 03/25/2020-+
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/howto-saml-token-encryption.md
Title: SAML token encryption in Azure Active Directory description: Learn how to configure Azure Active Directory SAML token encryption. -+ Last updated 03/13/2020-+
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-app-consent-policies.md
Title: Manage app consent policies in Azure AD description: Learn how to manage built-in and custom app consent policies to control when consent can be granted. -+ Last updated 06/01/2020-+
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-application-permissions.md
Title: Manage user and admin permissions - Azure Active Directory | Microsoft Docs description: Learn how to review and manage permissions for the application on Azure AD. For example, revoke all permissions granted to an application. -+ Last updated 7/10/2020-+
active-directory Manage Certificates For Federated Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-certificates-for-federated-single-sign-on.md
Title: Manage federation certificates in Azure AD | Microsoft Docs description: Learn how to customize the expiration date for your federation certificates, and how to renew certificates that will soon expire. -+ Last updated 04/04/2019-+
active-directory Manage Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-consent-requests.md
Title: Managing consent to applications and evaluating consent requests in Azure Active Directory description: Learn how to manage consent requests when user consent is disabled or restricted, and how to evaluate a request for tenant-wide admin consent to an application in Azure Active Directory. -+ Last updated 12/27/2019-+ # Managing consent to applications and evaluating consent requests
-Microsoft [recommends](../../security/fundamentals/steps-secure-identity.md#restrict-user-consent-operations) disabling end-user consent to applications. This will centralize the decision-making process with your organization's security and identity administrator team.
+Microsoft recommends [restricting user consent](../../active-directory/manage-apps/configure-user-consent.md) so that users can consent only to apps from verified publishers, and only for permissions you select. For apps that don't meet this policy, the decision-making process is centralized with your organization's security and identity administrator team.
After end-user consent is disabled or restricted, there are several important considerations to ensure your organization stays secure while still allowing business critical applications to be used. These steps are crucial to minimize impact on your organization's support team and IT administrators, while preventing the use of unmanaged accounts in third-party applications.
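Before relaxing or tightening these settings, it can help to confirm how user consent is currently configured in the tenant. A minimal sketch, assuming the tenant's consent configuration is exposed through the Microsoft Graph `authorizationPolicy` resource and that the signed-in account can read it, is to query the assigned permission grant policies:

```azurecli-interactive
# Sketch only: read which permission grant policies apply to the default user role.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/policies/authorizationPolicy" \
  --query "defaultUserRolePermissions.permissionGrantPoliciesAssigned"
```

The returned policy IDs generally indicate whether user consent is disabled, limited to verified publishers and selected permissions, or unrestricted.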
Users' access to applications can still be limited even when tenant-wide admin c
For a broader overview, including how to handle more complex scenarios, see [using Azure AD for application access management](what-is-access-management.md).
-## Disable all future user consent operations to any application
-Disabling user consent for your entire directory prevents end users from consenting to any application. Administrators can still consent on a user's behalf. To learn more about application consent, and why you may or may not want to consent, read [Understanding user and admin consent](../develop/howto-convert-app-to-be-multi-tenant.md).
-
-To disable all future user consent operations in your entire directory, follow these steps:
-1. Open the [**Azure portal**](https://portal.azure.com/) and sign in as a **Global Administrator.**
-2. Open the **Azure Active Directory Extension** by clicking **All services** at the top of the main left-hand navigation menu.
-3. Type in **"Azure Active Directory"** in the filter search box and select the **Azure Active Directory** item.
-4. Select **Users and groups** in the navigation menu.
-5. Select **User settings**.
-6. Disable all future user consent operations by setting the **Users can allow apps to access their data** toggle to **No** and click the **Save** button.
- ## Next steps * [Five steps to securing your identity infrastructure](../../security/fundamentals/steps-secure-identity.md#before-you-begin-protect-privileged-accounts-with-mfa) * [Configure the admin consent workflow](configure-admin-consent-workflow.md)
active-directory Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/manage-self-service-access.md
Title: How to configure self-service application assignment | Microsoft Docs description: Enable self-service application access to allow users to find their own applications -+ Last updated 04/20/2020-+
active-directory Methods For Removing User Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/methods-for-removing-user-access.md
Title: How to remove a user's access to an application in Azure Active Directory description: Understand how to remove a user's access to an application in Azure Active Directory -+ Last updated 11/02/2020-+ # How to remove a user's access to an application
active-directory Migrate Adfs Application Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
Title: Use the activity report to move AD FS apps to Azure Active Directory | Microsoft Docs description: The Active Directory Federation Services (AD FS) application activity report lets you quickly migrate applications from AD FS to Azure Active Directory (Azure AD). This migration tool for AD FS identifies compatibility with Azure AD and gives migration guidance. -+ Last updated 01/14/2019-+
active-directory Migrate Adfs Apps To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
Title: Moving application authentication from AD FS to Azure Active Directory description: Learn how to use Azure Active Directory to replace Active Directory Federation Services (AD FS), giving users single sign-on to all their applications. -+ Last updated 03/01/2021-+
active-directory Migrate Application Authentication To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory.md
Title: 'Migrate application authentication to Azure Active Directory' description: This whitepaper details the planning for and benefits of migrating your application authentication to Azure AD. -+ Last updated 02/05/2021-+
active-directory Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migration-resources.md
Title: Resources for migrating apps to Azure Active Directory | Microsoft Docs description: Resources to help you migrate application access and authentication to Azure Active Directory (Azure AD). -+ Last updated 02/29/2020-+
active-directory My Apps Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/my-apps-deployment-plan.md
Title: Plan Azure Active Directory My Apps configuration description: Planning guide to effectively use My Apps in your organization. -+ Last updated 02/29/2020-+ # Plan Azure Active Directory My Apps configuration
active-directory One Click Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/one-click-sso-tutorial.md
Title: One-click, single sign-on (SSO) configuration of your Azure Marketplace application | Microsoft Docs description: Steps for one-click configuration of SSO for your application from the Azure Marketplace. -+ Last updated 06/11/2019-+
active-directory Plan An Application Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-an-application-integration.md
Title: Get started integrating Azure Active Directory with apps description: This article is a getting started guide for integrating Azure Active Directory (AD) with on-premises applications, and cloud applications. -+ Last updated 04/05/2021-+
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/plan-sso-deployment.md
Title: Plan an Azure Active Directory single sign-on deployment description: Guide to help you plan, deploy, and manage SSO in your organization. -+ Last updated 06/10/2020-+
active-directory Prevent Domain Hints With Home Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
Title: Prevent sign-in auto-acceleration in Azure AD using Home Realm Discovery policy description: Learn how to prevent domain_hint auto-acceleration to federated IDPs. -+ Last updated 02/12/2021-+
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/secure-hybrid-access.md
Title: Azure AD secure hybrid access | Microsoft Docs description: This article describes partner solutions for integrating your legacy on-premises, public cloud, or private cloud applications with Azure AD. Secure your legacy apps by connecting app delivery controllers or networks to Azure AD. -+ Last updated 2/16/2021-+
active-directory Sso Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/sso-options.md
Title: Single sign-on options in Azure AD description: Learn about the options available for single sign-on (SSO) in Azure Active Directory. -+ Last updated 12/03/2019-+
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/tenant-restrictions.md
Title: Use tenant restrictions to manage access to SaaS apps - Azure AD description: How to use tenant restrictions to manage which users can access apps based on their Azure AD tenant. -+ Last updated 6/2/2021-+
active-directory Troubleshoot Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
Title: Troubleshoot password-based single sign-on in Azure Active Directory description: Troubleshoot issues with an Azure AD app that's configured for password-based single sign-on.-+ Last updated 07/11/2017-+
active-directory Troubleshoot Saml Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
Title: Troubleshoot SAML-based single sign-on in Azure Active Directory description: Troubleshoot issues with an Azure AD app that's configured for SAML-based single sign-on. -+ Last updated 07/11/2017-+ # Troubleshoot SAML-based single sign-on in Azure Active Directory
active-directory View Applications Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/view-applications-portal.md
Title: 'Quickstart: View the list of applications that are using your Azure Active Directory (Azure AD) tenant for identity management' description: In this Quickstart, use the Azure portal to view the list of applications that are registered to use your Azure Active Directory (Azure AD) tenant for identity management. -+ Last updated 04/09/2019-+
active-directory Ways Users Get Assigned To Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
Title: Understand how users are assigned to apps in Azure Active Directory description: Understand how users get assigned to an app that is using Azure Active Directory for identity management. -+ Last updated 01/07/2021-+ # Understand how users are assigned to apps in Azure Active Directory
active-directory What Is Access Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/what-is-access-management.md
Title: Managing access to apps using Azure AD description: Describes how Azure Active Directory enables organizations to specify the apps to which each user has access. -+ Last updated 05/16/2017-+ # Managing access to apps
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/what-is-application-management.md
Title: What is application management in Azure Active Directory description: An overview of using Azure Active Directory (AD) as an Identity and Access Management (IAM) system for your cloud and on-premises applications. -+ Last updated 01/22/2021-+
active-directory What Is Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/what-is-single-sign-on.md
Title: What is Azure single sign-on (SSO)? description: Learn how single sign-on (SSO) works with Azure Active Directory. Use SSO so users don't need to remember passwords for every application. Also use SSO to simplify the administration of account management. -+ Last updated 12/03/2019-+
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/whats-new-docs.md
--++
active-directory Concept Sign In Diagnostics Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-sign-in-diagnostics-scenarios.md
++
+ Title: Sign in diagnostics for Azure AD scenarios
+description: Lists the scenarios that are supported by the sign-in diagnostics for Azure AD.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid: e2b3d8ce-708a-46e4-b474-123792f35526
+
+ms.devlang: na
+
+ na
++ Last updated : 07/08/2021+++
+# Customer intent: As an Azure AD administrator, I want to know the scenarios that are supported by the sign in diagnostics for Azure AD so that I can determine whether the tool can help me with a sign-in issue.
+++
+# Sign in diagnostics for Azure AD scenarios
+
+You can use the sign-in diagnostic for Azure AD to analyze what happened during a sign-in attempt and get recommendations for resolving problems without needing to involve Microsoft support.
+
+This article gives you an overview of the types of scenarios you can identify and resolve when using this tool.
+
+## Supported scenarios
+
+The sign-in diagnostic for Azure AD provides you with support for the following scenarios:
+
+- **Conditional Access**
+
+ - Blocked by conditional access
+
+ - Failed conditional access
+
+ - Multifactor authentication (MFA) from conditional access
+
+ - B2B Blocked Sign-In Due to Conditional Access
+
+- **Multifactor Authentication (MFA)**
+
+ - MFA from other requirements
+
+ - MFA proof up required
+
+ - MFA proof up required (risky sign-in location)
+
+- **Correct & Incorrect Credentials**
+
+ - Successful sign-in
+
+ - Account locked
+
+ - Invalid username or password
+
+- **Enterprise Apps**
+
+ - Enterprise apps service provider
+
+ - Enterprise apps configuration
+
+- **Other Scenarios**
+
+ - Security defaults
+
+ - Error code insights
+
+ - Legacy authentication
+
+ - Blocked by risk policy
+++++++
+## Conditional access
++
+### Blocked by conditional access
+
+In this scenario, a sign-in attempt has been blocked by a conditional access policy.
++
+![Screenshot showing access configuration with Block access selected.](./media/concept-sign-in-diagnostics-scenarios/block-access.png)
+
+The diagnostic section for this scenario shows details about the user sign-in event and the applied policies.
+
+
+
+### Failed conditional access
+
+This scenario is typically a result of a sign-in attempt that failed because the requirements of a conditional access policy were not satisfied. Common examples are:
+++
+![Screenshot showing access configuration with common policy examples and Grant access selected.](./media/concept-sign-in-diagnostics-scenarios/require-controls.png)
+
+- Require hybrid Azure AD joined device
+
+- Require approved client app
+
+- Require app protection policy
+
+The diagnostic section for this scenario shows details about the user sign-in attempt and the applied policies.
+
+
+
+### MFA from conditional access
+
+In this scenario, a conditional access policy requires the user to sign in by using multifactor authentication.
+++
+![Screenshot showing access configuration with Require multifactor authentication selected.](./media/concept-sign-in-diagnostics-scenarios/require-mfa.png)
+
+The diagnostic section for this scenario shows details about the user sign-in attempt and the applied policies.
+
+
+
+
+
+## Multifactor authentication
+
+### MFA from other requirements
+
+In this scenario, a multifactor authentication requirement wasn't enforced by a conditional access policy. For example, multifactor authentication on a per-user basis.
+++
+![Screenshot showing multifactor authentication per user configuration.](./media/concept-sign-in-diagnostics-scenarios/mfa-per-user.png)
+
+The intent of this diagnostic scenario is to provide more details about:
+
+- The source of the interrupted multifactor authentication
+
+- The result of the client interaction
+
+You can also view all details of the user sign-in attempt.
+
+
+
+### MFA proof up required
+
+In this scenario, sign-in attempts were interrupted by requests to set up multifactor authentication. This setup is also known as proof up.
+
+
+
+Multifactor authentication proof up occurs when a user is required to use multifactor authentication but has not configured it yet, or an administrator has required the user to configure it.
+
+
+
+The intent of this diagnostic scenario is to reveal that the multifactor authentication interruption was due to lack of user configuration. The recommended solution is for the user to complete the proof up.
+
+
+
+### MFA proof up required (risky sign-in location)
+
+In this scenario, sign-in attempts were interrupted by a request to set up multifactor authentication from a risky sign-in location.
+
+
+
+The intent of this diagnostic scenario is to reveal that the multifactor authentication interruption was due to lack of user configuration. The recommended solution is for the user to complete the proof up, specifically from a network location that doesn't appear risky.
+
+
+
+An example of this scenario is when policy requires that the user set up MFA only from trusted network locations, but the user is signing in from an untrusted network location.
+
+
+
+## Correct & incorrect credential
+
+### Successful sign-in
+
+In this scenario, sign-in events are not interrupted by conditional access or multifactor authentication.
+
+
+
+This diagnostic scenario provides details about user sign-in events that are expected to be interrupted due to conditional access policies or multifactor authentication.
+
+
+
+### The account is locked
+
+In this scenario, a user signed in with incorrect credentials too many times and the account was locked. The diagnostic provides information for the admin to determine where the attempts are coming from and whether they are legitimate user sign-in attempts.
+
+
+
+This diagnostic scenario provides details about the apps, the number of attempts, the device used, the operating system, and the IP address.
+
+
+
+More information about this topic can be found in the Azure AD Smart Lockout documentation.
+
+
+
+
+
+### Invalid username or password
+
+In this scenario, a user tried to sign in using an invalid username or password. The diagnostic is intended to help an administrator determine whether the problem is a user entering incorrect credentials, or a client or application that has cached an old password and is resubmitting it.
+
+
+
+This diagnostic scenario provides details about the apps, the number of attempts, the device used, the operating system and the IP address.
+
+
+
+## Enterprise app
+
+In enterprise applications, there are two points where problems may occur:
+
+- The identity provider (Azure AD) application configuration
+- The service provider (application service, also known as SaaS application) side
+
+
+
+Diagnostics for these problems indicate which side the issue is on and what to do to resolve it.
+
+
+
+### Enterprise apps service provider
+
+In this scenario, a user tried to sign in to an application. The sign-in failed due to a problem with the application (also known as service provider) side of the sign-in flow. Problems detected by this diagnosis typically must be resolved by changing the configuration or fixing problems on the application service.
+
+Resolution for this scenario means signing in to the application service and changing some configuration per the diagnostic guidance.
+
+
+
+### Enterprise apps configuration
+
+In this scenario, a sign-in failed due to an application configuration issue for the Azure AD side of the application.
+
+
+
+Resolution for this scenario requires reviewing and updating the configuration of the application in the Enterprise Applications blade entry for the application.
+
+
+
+## Other scenarios
+
+### Security defaults
+
+This scenario covers sign-in events where the user's sign-in was interrupted due to security defaults settings. Security defaults enforce best practice security for your organization and require multifactor authentication (MFA) to be configured and used in many scenarios to prevent password sprays, replay attacks, and phishing attempts from being successful.
+
+For more information, see [What are security defaults?](../fundamentals/concept-fundamentals-security-defaults.md)
+
+### Error code insights
+
+When an event doesn't have a contextual analysis in the sign-in diagnostic, an updated error code explanation and relevant content may be shown. The error code insights contain detailed text about the scenario, how to remediate the problem, and related content to read about the problem.
+
+### Legacy authentication
+
+This diagnostic scenario identifies a sign-in event that was blocked or interrupted because the client was attempting to use Basic (also known as legacy) authentication.
+
+Preventing legacy authentication sign-in is recommended as the best practice for security. Legacy authentication protocols like POP, SMTP, IMAP, and MAPI cannot enforce multifactor authentication (MFA), which makes them preferred entry points for adversaries to attack your organization.
+
+For more information, see [How to block legacy authentication to Azure AD with Conditional Access](../conditional-access/block-legacy-authentication.md).
+
+### B2B blocked sign-in due to conditional access
+
+This diagnostic scenario detects a blocked or interrupted sign-in due to the user being from another organization (a B2B sign-in) where a Conditional Access policy requires that the client's device is joined to the resource tenant.
+
+For more information, see [Conditional Access for B2B collaboration users](../external-identities/conditional-access.md).
+
+### Blocked by risk policy
+
+In this scenario, an Identity Protection policy blocks a sign-in attempt because the sign-in was identified as risky.
+
+For more information, see [How to configure and enable risk policies](../identity-protection/howto-identity-protection-configure-risk-policies.md).
++++
+## Next steps
+
+- [What is the sign-in diagnostic in Azure AD?](overview-sign-in-diagnostics.md)
active-directory Overview Sign In Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/overview-sign-in-diagnostics.md
na Previously updated : 06/18/2021 Last updated : 07/07/2021
# What is the sign-in diagnostic in Azure AD?
-Azure Active Directory (Azure AD) provides you with a flexible security model to control what users can do with managed resources. Access to these resources is controlled not only by *who* they are, but also by *how* they access them. Typically, a flexible model comes with a certain degree of complexity because of the number of configuration options you have. Complexity has the potential to increase the risk for errors.
+Determining the reason for a failed sign-in can quickly become a challenging task. You need to analyze what happened during the sign-in attempt, and research the available recommendations to resolve the issue. Ideally, you want to resolve the issue without involving others such as Microsoft support. If you are in a situation like this, you can use the sign-in diagnostic in Azure AD, a tool that helps you investigate sign-ins in Azure AD.
-As an IT admin, you need a solution that gives you insight into the activities in your system. This visibility can let you diagnose and solve problems when they occur. The sign-in diagnostic for Azure AD is an example of such a solution. You can use the diagnostic to analyze what happened during a sign-in attempt and get recommendations for resolving problems without needing to involve Microsoft support.
+This article gives you an overview of what the diagnostic is and how you can use it to troubleshoot sign-in related errors.
-This article gives you an overview of what the solution does and how you can use it.
-## Requirements
+## How it works
-The sign-in diagnostic is available in all editions of Azure AD.
+In Azure AD, sign-in attempts are controlled by:
-You must be a global administrator in Azure AD to use it.
+- **Who** - The user performing a sign-in attempt.
+- **How** - How a sign-in attempt was performed.
-## How it works
+For example, you can configure conditional access policies that allow administrators to configure all aspects of the tenant only when they sign in from the corporate network. But the same user might be blocked when they sign in to the same account from an untrusted network.
-In Azure AD, the response to a sign-in attempt is tied to *who* signs in and *how* they access the tenant. For example, an administrator can typically configure all aspects of the tenant when they sign in from the corporate network. But the same user might be blocked when they sign in with the same account from an untrusted network.
+Due to the greater flexibility of the system in responding to a sign-in attempt, you might end up in scenarios where you need to troubleshoot sign-ins. The sign-in diagnostic is a tool that is designed to enable self-diagnosis of sign-in issues by:
-Due to the greater flexibility of the system to respond to a sign-in attempt, you might end-up in scenarios where you need to troubleshoot sign-ins. The sign-in diagnostic is a feature that:
+- Analyzing data from sign-in events.
-- Analyzes data from sign-in events.
+- Displaying information about what happened.
-- Displays what happened.
+- Providing recommendations to resolve problems.
-- Provides recommendations for how to resolve problems.
+To start and complete the diagnostic process, you need to:
-The sign-in diagnostic for Azure AD is designed to enable self-diagnosis of sign-in errors. To complete the diagnostic process, you need to:
+1. **Identify event** - Enter information about the sign-in event.
-![Diagram showing the sign-in diagnostic.](./media/overview-sign-in-diagnostics/process.png)
+2. **Select event** - Select an event based on the information shared.
-1. Define the scope of the sign-in events you care about.
+3. **Take action** - Review the diagnostic results and perform the recommended steps.
-2. Select the sign-in you want to review.
-3. Review the diagnostic results.
+### Identify event
-4. Take action.
+To identify the right events for you, you can filter based on the following options:
-### Define scope
+- Name of the user
+- Application
+- Correlation ID or request ID
+- Date and time
-The goal of this step is to define the scope of the sign-in events to investigate. Your scope is either based on a user or on an identifier (correlationId, requestId) and a time range. To narrow down the scope further, you can specify an app name. Azure AD uses the scope information to locate the right events for you.
+![Screenshot showing the filter.](./media/overview-sign-in-diagnostics/sign-in-diagnostics.png)
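If you want to gather these identifiers outside the portal, a minimal sketch, assuming the signed-in account can read the Azure AD sign-in logs through Microsoft Graph and using a placeholder user name, is to list recent sign-in events and note the correlation ID of the failed attempt:

```azurecli-interactive
# Sketch only: list recent sign-in events for a user (placeholder UPN) to find a correlation ID.
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=userPrincipalName eq 'user@contoso.com'&\$top=10" \
  --query "value[].{time:createdDateTime, app:appDisplayName, correlationId:correlationId, error:status.errorCode}" \
  --output table
```

You can then paste the correlation ID, or the user and application names, into the diagnostic's filter.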
-### Select sign-in
-Based on your search criteria, Azure AD retrieves all matching sign-in events and presents them in an authentication summary list view.
-![Partial screenshot showing the authentication summary section.](./media/overview-sign-in-diagnostics/authentication-summary.png)
+### Select event
-You can customize the columns displayed in this view.
+Based on your search criteria, Azure AD retrieves all matching sign-in events and presents them in an authentication summary list view.
-### Review diagnostic
+![Screenshot showing the authentication summary list.](./media/overview-sign-in-diagnostics/review-sign-ins.png)
-For the selected sign-in event, Azure AD provides you with diagnostic results.
+You can change the content displayed in the columns based on your preference. Examples are:
-![Partial screenshot showing the diagnostic results section.](./media/overview-sign-in-diagnostics/diagnostics-results.png)
-
-These results start with an assessment, which explains what happened in a few sentences. The explanation helps you to understand the behavior of the system.
-
-Next, you get a summary of the related conditional access policies that were applied to the selected sign-in event. The diagnostic results also include recommended remediation steps to resolve your issue. Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
+- Risk details
+- Conditional access status
+- Location
+- Resource ID
+- User type
+- Authentication details
### Take action
-At this point, you should have the information you need to fix your issue.
-
-## Scenarios
-
-The following scenarios are covered by the sign-in diagnostic:
--- Blocked by conditional access--- Failed conditional access--- Multifactor authentication (MFA) from conditional access--- MFA from other requirements--- MFA proof up required--- MFA proof up required (risky sign-in location)--- Successful sign-in-
-### Blocked by conditional access
-
-In this scenario, a sign-in attempt has been blocked by a conditional access policy.
-
-![Screenshot showing access configuration with Block access selected.](./media/overview-sign-in-diagnostics/block-access.png)
-
-The diagnostic section for this scenario shows details about the user sign-in event and the applied policies.
-
-### Failed conditional access
-
-This scenario is typically a result of a sign-in attempt that failed because the requirements of a conditional access policy weren't satisfied. Common examples are:
-
-![Screenshot showing access configuration with common policy examples and Grant access selected.](./media/overview-sign-in-diagnostics/require-controls.png)
--- Require hybrid Azure AD joined device--- Require approved client app--- Require app protection policy-
-The diagnostic section for this scenario shows details about the user sign-in attempt and the applied policies.
-
-### MFA from conditional access
-
-In this scenario, a conditional access policy has the requirement to sign in using multifactor authentication set.
-
-![Screenshot showing access configuration with Require multifactor authentication selected.](./media/overview-sign-in-diagnostics/require-mfa.png)
-
-The diagnostic section for this scenario shows details about the user sign-in attempt and the applied policies.
-
-### MFA from other requirements
-
-In this scenario, a multifactor authentication requirement wasn't enforced by a conditional access policy. For example, multifactor authentication on a per-user basis.
-
-![Screenshot showing multifactor authentication per user configuration.](./media/overview-sign-in-diagnostics/mfa-per-user.png)
-
-The intent of this diagnostic scenario is to provide more details about:
--- The source of the multifactor authentication interrupt-- The result of the client interaction-
-You can also view all details of the user sign-in attempt.
-
-### MFA proof up required
-
-In this scenario, sign-in attempts were interrupted by requests to set up multifactor authentication. This setup is also known as proof up.
-
-Multifactor authentication proof up occurs when a user is required to use multifactor authentication but hasn't configured it yet, or an administrator has required the user to configure it.
-
-The intent of this diagnostic scenario is to reveal that the multifactor authentication interruption was due to lack of user configuration. The recommended solution is for the user to complete the proof up.
-
-### MFA proof up required (risky sign-in location)
-
-In this scenario, sign-in attempts were interrupted by a request to set up multifactor authentication from a risky sign-in location.
-
-The intent of this diagnostic scenario is to reveal that the multifactor authentication interruption was due to lack of user configuration. The recommended solution is for the user to complete the proof up, specifically from a network location that doesn't appear risky.
+For the selected sign-in event, you get diagnostic results. Read through the results to identify actions you can take to fix the problem. The results include recommended steps and relevant information such as related policies, sign-in details, and supporting documentation. Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
-For example, if a corporate network is defined as a named location, the user should attempt to do the proof up from the corporate network instead.
-### Successful sign-in
+![Screenshot showing the diagnostic results.](./media/overview-sign-in-diagnostics/diagnostic-results.png)
-In this scenario, sign-in events weren't interrupted by conditional access or multifactor authentication.
-This diagnostic scenario provides details about user sign-in events that were expected to be interrupted due to conditional access policies or multifactor authentication.
+## How to access it
-### The account is locked
+To use the diagnostic, you must be signed in to the tenant as a global administrator or a global reader. If you don't have this level of access, use [Privileged Identity Management (PIM)](../privileged-identity-management/pim-resource-roles-activate-your-roles.md) to elevate your access to global administrator or global reader within the tenant. This gives you temporary access to the diagnostic.
-In this scenario, a user signed-in with incorrect credentials too many times.
+With the correct access level, you can find the diagnostic in various places:
-This diagnostic scenario provides details about the apps, the number of attempts, the device used, the operating system and the IP address.
+**Option A**: Diagnose and Solve Problems
-### Incorrect Credentials Invalid username or password
+![Screenshot showing how to launch sign-in diagnostics from conditional access.](./media/overview-sign-in-diagnostics/troubleshoot-link.png)
-In this scenario, a user tried to sign-in using an invalid username or password.
-This diagnostic scenario provides details about the apps, the number of attempts, the device used, the operating system and the IP address.
+1. Open **Azure Active Directory** or **Azure AD Conditional Access**.
-### Enterprise apps service provider
+2. From the main menu, click **Diagnose & Solve Problems**.
-In this scenario, a user tried to sign-in to an app, which failed due to a problem with the service provider problem.
+3. Under **Troubleshooters**, there is a sign-in diagnostic tile.
-### Enterprise apps configuration
+4. Click the **Troubleshoot** button.
-In this scenario, a sign-in failed due to an application configuration issue.
+
-#### Error code insights
+
-When an event does not have a contextual analysis in the Sign-in Diagnostic an updated error code explanation and relevant content may be shown. The error code insights will contain detailed text about the scenario, how to remediate the problem and any content to read regarding the problem.
+**Option B**: Sign-in Events
-#### Legacy Authentication
+![Screenshot showing how to launch sign-in diagnostics from Azure AD.](./media/overview-sign-in-diagnostics/sign-in-logs-link.png)
-This diagnostics scenario diagnosis a sign-in event which was blocked or interrupted since the client was attempting to use Basic (also known as Legacy) Authentication.
-Preventing legacy authentication sign-in is recommended as a best practice for security. Legacy authentication protocols like POP, SMTP, IMAP, and MAPI cannot enforce Multi-Factor Authentication (MFA) which makes them preferred entry points for adversaries to attack your organization.
-#### B2B Blocked Sign-in
-This diagnostic scenario detects a blocked or interrupted sign-in due to the user being from another organization-a B2B sign-in-where a Conditional Access policy requires that the clients device is joined to the resource tenant.
+1. Open Azure Active Directory.
-#### Blocked by Risk Policy
+2. On the main menu, in the **Monitoring** section, select **Sign-ins**.
-This scenario is where Identity Protection Policy blocks a sign-in attempt due to the sign-in attempt having been identified as risky.
+3. From the list of sign-ins, select a sign-in with a **Failure** status. You can filter the list by **Status** to make it easier to find failed sign-ins.
-### Security Defaults
+4. The **Activity Details: Sign-ins** tab opens for the selected sign-in. Select the dotted icon to view more menu items, and then select the **Troubleshooting and support** tab.
-This scenario covers sign-in events where the user's sign-in was interrupted due to Security Defaults settings. Security Defaults enforce best practice security for your organization and will require Multi-Factor Authentication (MFA) to be configured and used in many scenarios to prevent password sprays, replay attacks and phishing attempts from being successful.
+5. Click the link to **Launch the Sign-in Diagnostic**.
+
+**Option C**: Support Case
+The diagnostic is also available when you create a support case, giving you the opportunity to self-diagnose before submitting a case.
## Next steps -- [What are Azure Active Directory reports?](overview-reports.md)-- [What is Azure Active Directory monitoring?](overview-monitoring.md)
+- [Sign in diagnostics for Azure AD scenarios](concept-sign-in-diagnostics-scenarios.md)
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --lo
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone <custom private dns zone ResourceId> --fqdn-subdomain <subdomain-name> ```
-### Create a private AKS cluster with a Public DNS address
+## Create a private AKS cluster with a Public DNS address
+
+The Public DNS option can be used to simplify routing options for your Private Cluster.
+
+![Public DNS](https://user-images.githubusercontent.com/50749048/124776520-82629600-df0d-11eb-8f6b-71c473b6bd01.png)
-#### Register the `EnablePrivateClusterPublicFQDN` preview feature
+1. By specifying "None" for the Private DNS Zone when a private cluster is provisioned, a private endpoint (1) and a public DNS zone (2) are created in the cluster-managed resource group. The cluster uses an A record in the private zone to resolve the IP of the private endpoint for communication to the API server.
+
+### Register the `EnablePrivateClusterPublicFQDN` preview feature
To use the new Enable Private Cluster Public FQDN API, you must enable the `EnablePrivateClusterPublicFQDN` feature flag on your subscription.
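A minimal sketch of that registration, assuming the standard preview-feature workflow used elsewhere in this article:

```azurecli-interactive
# Register the EnablePrivateClusterPublicFQDN preview feature on your subscription.
az feature register --namespace Microsoft.ContainerService -n EnablePrivateClusterPublicFQDN

# It takes a few minutes for the status to show "Registered"; check with:
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnablePrivateClusterPublicFQDN')].{Name:name,State:properties.state}"
```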
When ready, refresh the registration of the *Microsoft.ContainerService* resourc
az provider register --namespace Microsoft.ContainerService ```
-#### Create a private AKS cluster with a Public DNS address
+### Create a private AKS cluster with a Public DNS address
```azurecli-interactive az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster --enable-managed-identity --assign-identity <ResourceId> --private-dns-zone none --enable-public-fqdn
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| 1.18 | Mar-23-20 | May 2020 | Aug 2020 | *1.21 GA | | 1.19 | Aug-04-20 | Sep 2020 | Nov 2020 | 1.22 GA | | 1.20 | Dec-08-20 | Jan 2021 | Mar 2021 | 1.23 GA |
-| 1.21 | Apr-08-21 | May 2021 | Jun 2021 | 1.24 GA |
+| 1.21 | Apr-08-21 | May 2021 | Jul 2021 | 1.24 GA |
| 1.22 | Aug-04-21 | Sept 2021 | Oct 2021 | 1.25 GA | | 1.23 | Dec 2021 | Jan 2022 | Feb 2022 | 1.26 GA |
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
A successful cluster creation using your own managed identities contains this us
}, ```
-## Bring your own kubelet MI (Preview)
-
+## Bring your own kubelet MI
A Kubelet identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as connection to ACR with a pre-created managed identity. ### Prerequisites -- You must have the Azure CLI, version 2.21.1 or later installed.-- You must have the aks-preview, version 0.5.10 or later installed.
+- You must have the Azure CLI, version 2.26.0 or later installed.
### Limitations - Only works with a User-Assigned Managed cluster. - Azure China 21Vianet isn't currently supported.
-First, register the feature flag for Kubelet identity:
-
-```azurecli-interactive
-az feature register --namespace Microsoft.ContainerService -n CustomKubeletIdentityPreview
-```
-
-It takes a few minutes for the status to show *Registered*. You can check on the registration status using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/CustomKubeletIdentityPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ### Create or obtain managed identities If you don't have a control plane managed identity yet, you should go ahead and create one. The following example uses the [az identity create][az-identity-create] command:
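A minimal sketch of that command, with placeholder identity and resource group names:

```azurecli-interactive
# Create a user-assigned managed identity to use as the control plane identity (placeholder names).
az identity create --name myControlPlaneIdentity --resource-group myResourceGroup
```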
api-management Api Management Authentication Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-authentication-policies.md
Both system-assigned identity and any of the multiple user-assigned identity can
``` ```xml
-<authentication-managed-identity resource="api://Client_id_of_Backend"/> <!--Your own Azure AD Application-->
+<authentication-managed-identity resource="Client_id_of_Backend"/> <!--Your own Azure AD Application-->
``` #### Use managed identity and set header manually ```xml
-<authentication-managed-identity resource="api://Client_id_of_Backend"
+<authentication-managed-identity resource="Client_id_of_Backend"
output-token-variable-name="msi-access-token" ignore-error="false" /> <!--Your own Azure AD Application--> <set-header name="Authorization" exists-action="override"> <value>@("Bearer " + (string)context.Variables["msi-access-token"])</value>
For more information working with policies, see:
+ [Policies in API Management](api-management-howto-policies.md) + [Transform APIs](transform-api.md) + [Policy Reference](./api-management-policies.md) for a full list of policy statements and their settings
-+ [Policy samples](./policy-reference.md)
++ [Policy samples](./policy-reference.md)
app-service App Service Web Tutorial Connect Msi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-connect-msi.md
Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now co
In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication): ```powershell
-Install-Package Microsoft.Azure.Services.AppAuthentication -Version 1.4.0
+Install-Package Microsoft.Data.SqlClient -Version 2.1.2
+Install-Package Azure.Identity -Version 1.4.0
``` In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string isn't used at all because the local development environment uses a Sqlite database file, and the Azure production environment uses a connection string from App Service. With Active Directory authentication, you want both environments to use the same connection string. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with: ```json
-"Server=tcp:<server-name>.database.windows.net,1433;Database=<database-name>;"
+"Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Device Code Flow; Database=<database-name>;"
```
-Next, you supply the Entity Framework database context with the access token for the SQL Database. In *Data\MyDatabaseContext.cs*, add the following code inside the curly braces of the empty `MyDatabaseContext (DbContextOptions<MyDatabaseContext> options)` constructor:
+> [!NOTE]
+> We use the `Active Directory Device Code Flow` authentication type because this is the closest we can get to a custom option. Ideally, a `Custom Authentication` type would be available. Without a better term to use at this time, we're using `Device Code Flow`.
+>
+
+Next, you need to create a custom authentication provider class to acquire and supply the Entity Framework database context with the access token for the SQL Database. In the *Data\\* directory, add a new class `CustomAzureSQLAuthProvider.cs` with the following code inside:
```csharp
-var connection = (SqlConnection)Database.GetDbConnection();
-connection.AccessToken = (new Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProvider()).GetAccessTokenAsync("https://database.windows.net/").Result;
+public class CustomAzureSQLAuthProvider : SqlAuthenticationProvider
+{
+ private static readonly string[] _azureSqlScopes = new[]
+ {
+ "https://database.windows.net//.default"
+ };
+
+ private static readonly TokenCredential _credential = new DefaultAzureCredential();
+
+ public override async Task<SqlAuthenticationToken> AcquireTokenAsync(SqlAuthenticationParameters parameters)
+ {
+ var tokenRequestContext = new TokenRequestContext(_azureSqlScopes);
+ var tokenResult = await _credential.GetTokenAsync(tokenRequestContext, default);
+ return new SqlAuthenticationToken(tokenResult.Token, tokenResult.ExpiresOn);
+ }
+
+ public override bool IsSupported(SqlAuthenticationMethod authenticationMethod) => authenticationMethod.Equals(SqlAuthenticationMethod.ActiveDirectoryDeviceCodeFlow);
+}
+```
+
+In *Startup.cs*, update the `ConfigureServices()` method with the following code:
+
+```csharp
+services.AddControllersWithViews();
+services.AddDbContext<MyDatabaseContext>(options =>
+{
+ SqlAuthenticationProvider.SetProvider(
+ SqlAuthenticationMethod.ActiveDirectoryDeviceCodeFlow,
+ new CustomAzureSQLAuthProvider());
+ var sqlConnection = new SqlConnection(Configuration.GetConnectionString("MyDatabaseContext"));
+ options.UseSqlServer(sqlConnection);
+});
``` > [!NOTE] > This demonstration code is synchronous for clarity and simplicity.
-That's every thing you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [Set up Visual Studio](#set-up-visual-studio). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `AzureServiceTokenProvider` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
+The preceding code uses the `Azure.Identity` library so that it can authenticate and retrieve an access token for the database, no matter where the code is running. If you're running on your local machine, `DefaultAzureCredential()` loops through a number of options to find a valid account that is logged in. You can read more about the [DefaultAzureCredential class](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet).
-> [!TIP]
-> If the Azure AD user you configured has access to multiple tenants, call `GetAccessTokenAsync("https://database.windows.net/", tenantid)` with the desired tenant ID to retrieve the proper access token.
+That's everything you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [Set up Visual Studio](#set-up-visual-studio). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
What you learned:
Advance to the next tutorial to learn how to map a custom DNS name to your web app. > [!div class="nextstepaction"]
-> [Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md)
+> [Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md)
app-service Deploy Complex Application Predictably https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-complex-application-predictably.md
Title: Deploy apps predictably with ARM
-description: Learn how to deploy multiple Azure App Service apps as a single unit and in a predictable manner using Azure Resource Management templates and PowerShell scripting.
+description: Learn how to deploy multiple Azure App Service apps as a single unit and in a predictable manner using Azure Resource Manager templates and PowerShell scripting.
ms.assetid: bb51e565-e462-4c60-929a-2ff90121f41d
In the tutorial, you will deploy an application that includes:
In this tutorial, you will use the following tools. Since it's not comprehensive discussion on tools, I'm going to stick to the end-to-end scenario and just give you a brief intro to each, and where you can find more information on it. ### Azure Resource Manager templates (JSON)
-Every time you create an app in Azure App Service, for example, Azure Resource Manager uses a JSON template to create the entire resource group with the component resources. A complex template from the [Azure Marketplace](../marketplace/index.yml) can include the database, storage accounts, the App Service plan, the app itself, alert rules, app settings, autoscale settings, and more, and all these templates are available to you through PowerShell. For more information on the Azure Resource Manager templates, see [Authoring Azure Resource Manager Templates](../azure-resource-manager/templates/syntax.md)
+Every time you create an app in Azure App Service, for example, Azure Resource Manager uses a JSON template to create the entire resource group with the component resources. A complex template from the [Azure Marketplace](../marketplace/index.yml) can include the database, storage accounts, the App Service plan, the app itself, alert rules, app settings, autoscale settings, and more, and all these templates are available to you through PowerShell. For more information on the Azure Resource Manager templates, see [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md)
### Azure SDK 2.6 for Visual Studio The newest SDK contains improvements to the Resource Manager template support in the JSON editor. You can use this to quickly create a resource group template from scratch or open an existing JSON template (such as a downloaded gallery template) for modification, populate the parameters file, and even deploy the resource group directly from an Azure Resource Group solution.
In DevOps, repeatability and predictability are keys to any successful deploymen
## More resources * [Azure Resource Manager Template Language](../azure-resource-manager/templates/syntax.md)
-* [Authoring Azure Resource Manager Templates](../azure-resource-manager/templates/syntax.md)
+* [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md)
* [Azure Resource Manager Template Functions](../azure-resource-manager/templates/template-functions.md) * [Deploy an application with Azure Resource Manager template](../azure-resource-manager/templates/deploy-powershell.md) * [Using Azure PowerShell with Azure Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md)
To learn about the JSON syntax and properties for resource types deployed in thi
* [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms) * [Microsoft.Web/sites](/azure/templates/microsoft.web/sites) * [Microsoft.Web/sites/slots](/azure/templates/microsoft.web/sites/slots)
-* [Microsoft.Insights/autoscalesettings](/azure/templates/microsoft.insights/autoscalesettings)
+* [Microsoft.Insights/autoscalesettings](/azure/templates/microsoft.insights/autoscalesettings)
app-service Monitor App Service Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-app-service-reference.md
Last updated 04/16/2021
# Monitoring App Service data reference
-See [Monitoring App Service](monitor-app-service.md) for details on collecting and analyzing monitoring data for App Service.
+This reference applies to the use of Azure Monitor for monitoring App Service. See [Monitoring App Service](monitor-app-service.md) for details on collecting and analyzing monitoring data for App Service.
## Metrics
app-service Monitor App Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-app-service.md
Title: Monitoring App Service
+ Title: Monitor App Service with Azure Monitor
description: Start here to learn how to monitor App Service
Last updated 04/16/2021
# Monitoring App Service
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by App Service. App Service uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
-
-To monitor resources with Azure Monitor, you can also use built-in diagnostics to assist with debugging an App Service app. You'll find more on this capability in [enable diagnostic logging for apps in Azure App Service](troubleshoot-diagnostic-logs.md).
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by App Service and shipped to [Azure Monitor](/azure/azure-monitor/overview). You can also use [built-in diagnostics to monitor resources](troubleshoot-diagnostic-logs.md) to assist with debugging an App Service app. If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
> [!NOTE] > Azure Monitor integration with App Service is in [preview](https://aka.ms/appsvcblog-azmon).
The [Activity log](/azure/azure-monitor/platform/activity-log) is a type of plat
For a list of types of resource logs collected for App Service, see [Monitoring App Service data reference](monitor-app-service-reference.md#resource-logs)
-For a list of queryable tables used by Azure Monitor Logs and Log Analytics, see [Monitoring App Service data reference](monitor-app-service-reference.md#azure-monitor-logs-tables)
+For a list of queryable tables used by Azure Monitor Logs and Log Analytics, see [Monitoring App Service data reference](monitor-app-service-reference.md#azure-monitor-logs-tables).
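As an illustration, resource logs are typically routed to a Log Analytics workspace with a diagnostic setting. A minimal sketch with placeholder names; the log category shown is only an example:

```azurecli
# Route App Service resource logs to a Log Analytics workspace (placeholder names and example category).
az monitor diagnostic-settings create \
  --name my-app-diagnostics \
  --resource <app-service-resource-id> \
  --workspace <log-analytics-workspace-id> \
  --logs '[{"category": "AppServiceHTTPLogs", "enabled": true}]'
```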
### Sample Kusto queries
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-arc-integration.md
The following table describes the role of each pod that is created by default:
The App Service Kubernetes environment resource is required before apps may be created. It enables configuration common to apps in the custom location, such as the default DNS suffix.
-Only one Kubernetes environment resource may created in a custom location. In most cases, a developer who creates and deploys apps doesn't need to be directly aware of the resource. It can be directly inferred from the provided custom location ID. However, when defining Azure Resource Manager templates, any plan resource needs to reference the resource ID of the environment directly. The custom location values of the plan and the specified environment must match.
+Only one Kubernetes environment resource may be created in a custom location. In most cases, a developer who creates and deploys apps doesn't need to be directly aware of the resource. It can be directly inferred from the provided custom location ID. However, when defining Azure Resource Manager templates, any plan resource needs to reference the resource ID of the environment directly. The custom location values of the plan and the specified environment must match.
## FAQ for App Service, Functions, and Logic Apps on Azure Arc (Preview)
By default, logs from system components are sent to the Azure team. Application
When creating a Kubernetes environment resource, some subscriptions may see a "No registered resource provider found" error. The error details may include a set of locations and api versions that are considered valid. If this happens, it may be that the subscription needs to be re-registered with the Microsoft.Web provider, an operation which has no impact on existing applications or APIs. To re-register, use the Azure CLI to run `az provider register --namespace Microsoft.Web --wait`. Then re-attempt the Kubernetes environment command.
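For convenience, that command as a runnable block:

```azurecli
# Re-register the Microsoft.Web resource provider; this has no impact on existing apps or APIs.
az provider register --namespace Microsoft.Web --wait
```

After the command completes, retry creating the Kubernetes environment resource.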
+### Can I deploy the Application services extension on an ARM64 based cluster?
+
+ARM64-based clusters aren't supported at this time.
+ ## Next steps [Create an App Service Kubernetes environment (Preview)](manage-create-arc-environment.md)
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/multiple-site-overview.md
Last updated 07/20/2020-+
You can also define wildcard host names in a multi-site listener and up to 5 hos
:::image type="content" source="./media/multiple-site-overview/multisite.png" alt-text="Multi-site Application Gateway"::: > [!IMPORTANT]
-> Rules are processed in the order they are listed in the portal for the v1 SKU. For the v2 SKU, exact matches have higher precedence. It is highly recommended to configure multi-site listeners first prior to configuring a basic listener. This will ensure that traffic gets routed to the right back end. If a basic listener is listed first and matches an incoming request, it gets processed by that listener.
+> Rules are processed in the order they are listed in the portal for the v1 SKU. For v2 SKU use [rule priority](#request-routing-rules-evaluation-order) to specify the processing order. It is highly recommended to configure multi-site listeners first prior to configuring a basic listener. This will ensure that traffic gets routed to the right back end. If a basic listener is listed first and matches an incoming request, it gets processed by that listener.
Requests for `http://contoso.com` are routed to ContosoServerPool, and `http://fabrikam.com` are routed to FabrikamServerPool. Similarly, you can host multiple subdomains of the same parent domain on the same application gateway deployment. For example, you can host `http://blog.contoso.com` and `http://app.contoso.com` on a single application gateway deployment.
+## Request Routing rules evaluation order
+
+While using multi-site listeners, to ensure that the client traffic is routed to the correct backend, it is important that the request routing rules are in the correct order.
+For example, if you have two listeners with host names `*.contoso.com` and `shop.contoso.com` respectively, the listener with the `shop.contoso.com` host name must be processed before the listener with `*.contoso.com`. If the listener with `*.contoso.com` is processed first, no client traffic is received by the more specific `shop.contoso.com` listener.
+
+This ordering can be established by providing a 'Priority' field value to the request routing rules associated with the listeners. You can specify an integer value from 1 to 20000, with 1 being the highest priority and 20000 being the lowest priority. If the incoming client traffic matches multiple listeners, the request routing rule with the highest priority is used to serve the request.
+
+The priority field only affects the order of evaluation of request routing rules; it does not change the order of evaluation of path-based rules within a `PathBasedRouting` request routing rule.
+
+>[!NOTE]
+>This feature is currently available only through [Azure PowerShell](tutorial-multiple-sites-powershell.md#add-priority-to-routing-rules) and [Azure CLI](tutorial-multiple-sites-cli.md#add-priority-to-routing-rules). Portal support is coming soon.
+
+>[!NOTE]
+>If you wish to use rule priority, you must specify rule priority field values for all the existing request routing rules. Once the rule priority field is in use, any new routing rule that is created must also have a rule priority field value as part of its configuration.
+ ## Wildcard host names in listener (Preview)
-Application Gateway allows host-based routing using multi-site HTTP(S) listener. Now, you have the ability to use wildcard characters like asterisk (*) and question mark (?) in the host name, and up to 5 host names per multi-site HTTP(S) listener. For example, `*.contoso.com`.
+Application Gateway allows host-based routing using a multi-site HTTP(S) listener. Now, you can use wildcard characters like asterisk (*) and question mark (?) in the host name, and up to 5 host names per multi-site HTTP(S) listener. For example, `*.contoso.com`.
-Using a wildcard character in the host name, you can match multiple host names in a single listener. For example, `*.contoso.com` can match with `ecom.contoso.com`, `b2b.contoso.com` as well as `customer1.b2b.contoso.com` and so on. Using an array of host names, you can configure more than one host name for a listener, to route requests to a backend pool. For example, a listener can contain `contoso.com, fabrikam.com` which will accept requests for both the host names.
+Using a wildcard character in the host name, you can match multiple host names in a single listener. For example, `*.contoso.com` can match with `ecom.contoso.com`, `b2b.contoso.com` and `customer1.b2b.contoso.com` and so on. Using an array of host names, you can configure more than one host name for a listener, to route requests to a backend pool. For example, a listener can contain `contoso.com, fabrikam.com` which will accept requests for both the host names.
:::image type="content" source="./media/multiple-site-overview/wildcard-listener-diag.png" alt-text="Wildcard Listener":::
In [Azure PowerShell](tutorial-multiple-sites-powershell.md), you must use `-Hos
In [Azure CLI](tutorial-multiple-sites-cli.md), you must use `--host-names` instead of `--host-name`. With host-names, you can mention up to 5 host names as comma-separated values and use wildcard characters. For example, `--host-names "*.contoso.com,*.fabrikam.com"`
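As a sketch, a multi-site listener using wildcard host names might be created like this; the gateway, port, and listener names are placeholders:

```azurecli
# Create a multi-site listener that matches several wildcard host names (placeholder resource names).
az network application-gateway http-listener create \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --name wildcardListener \
  --frontend-port appGatewayFrontendPort \
  --host-names "*.contoso.com,*.fabrikam.com"
```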
-### Allowed characters in the host names field:
+### Allowed characters in the host names field
* `(A-Z,a-z,0-9)` - alphanumeric characters * `-` - hyphen or minus
In [Azure CLI](tutorial-multiple-sites-cli.md), you must use `--host-names` inst
* `*` - can match with multiple characters in the allowed range * `?` - can match with a single character in the allowed range
-### Conditions for using wildcard characters and multiple host names in a listener:
+### Conditions for using wildcard characters and multiple host names in a listener
* You can only mention up to 5 host names in a single listener * Asterisk `*` can be mentioned only once in a component of a domain style name or host name. For example, component1*.component2*.component3. `(*.contoso-*.com)` is valid.
In [Azure CLI](tutorial-multiple-sites-cli.md), you must use `--host-names` inst
* There can only be a maximum of 4 wildcard characters in a host name. For example, `????.contoso.com`, `w??.contoso*.edu.*` are valid, but `????.contoso.*` is invalid. * Using asterisk `*` and question mark `?` together in a component of a host name (`*?` or `?*` or `**`) is invalid. For example, `*?.contoso.com` and `**.contoso.com` are invalid.
-### Considerations and limitations of using wildcard or multiple host names in a listener:
+### Considerations and limitations of using wildcard or multiple host names in a listener
* [SSL termination and End-to-End SSL](ssl-overview.md) requires you to configure the protocol as HTTPS and upload a certificate to be used in the listener configuration. If it is a multi-site listener, you can input the host name as well, usually this is the CN of the SSL certificate. When you are specifying multiple host names in the listener or use wildcard characters, you must consider the following: * If it is a wildcard hostname like *.contoso.com, you must upload a wildcard certificate with CN like *.contoso.com
In [Azure CLI](tutorial-multiple-sites-cli.md), you must use `--host-names` inst
* You cannot use a regular expression to mention the host name. You can only use wildcard characters like asterisk (*) and question mark (?) to form the host name pattern. * For backend health check, you cannot associate multiple [custom probes](application-gateway-probe-overview.md) per HTTP settings. Instead, you can probe one of the websites at the backend or use "127.0.0.1" to probe the localhost of the backend server. However, when you are using wildcard or multiple host names in a listener, the requests for all the specified domain patterns will be routed to the backend pool depending on the rule type (basic or path-based). * The properties "hostname" takes one string as input, where you can mention only one non-wildcard domain name and "hostnames" takes an array of strings as input, where you can mention up to 5 wildcard domain names. But both the properties cannot be used at once.
-* You cannot create a [redirection](redirect-overview.md) rule with a target listener which uses wildcard or multiple host names.
+* You cannot create a [redirection](redirect-overview.md) rule with a target listener that uses wildcard or multiple host names.
See [create multi-site using Azure PowerShell](tutorial-multiple-sites-powershell.md) or [using Azure CLI](tutorial-multiple-sites-cli.md) for the step-by-step guide on how to configure wildcard host names in a multi-site listener. ++ ## Host headers and Server Name Indication (SNI) There are three common mechanisms for enabling multiple site hosting on the same infrastructure.
There are three common mechanisms for enabling multiple site hosting on the same
2. Use host name to host multiple web applications on the same IP address. 3. Use different ports to host multiple web applications on the same IP address.
-Currently Application Gateway supports a single public IP address where it listens for traffic. So multiple applications, each with its own IP address is currently not supported.
+Currently, Application Gateway supports a single public IP address where it listens for traffic. So multiple applications, each with its own IP address, are currently not supported.
-Application Gateway supports multiple applications each listening on different ports, but this scenario requires the applications to accept traffic on non-standard ports. This is often not a configuration that you want.
+Application Gateway supports multiple applications each listening on different ports, but this scenario requires the applications to accept traffic on non-standard ports.
Application Gateway relies on HTTP 1.1 host headers to host more than one website on the same public IP address and port. The sites hosted on application gateway can also support TLS offload with Server Name Indication (SNI) TLS extension. This scenario means that the client browser and backend web farm must support HTTP/1.1 and TLS extension as defined in RFC 6066.
Application Gateway relies on HTTP 1.1 host headers to host more than one websit
Learn how to configure multiple site hosting in Application Gateway * [Using Azure portal](create-multiple-sites-portal.md)
-* [Using Azure PowerShell](tutorial-multiple-sites-powershell.md)
+* [Using Azure PowerShell](tutorial-multiple-sites-powershell.md)
* [Using Azure CLI](tutorial-multiple-sites-cli.md) You can visit [Resource Manager template using multiple site hosting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/application-gateway-multihosting) for an end to end template-based deployment.
application-gateway Tutorial Multiple Sites Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/tutorial-multiple-sites-cli.md
az network application-gateway http-listener create \
### Add routing rules
-Rules are processed in the order they're listed. Traffic is directed using the first rule that matches regardless of specificity. For example, if you have a rule using a basic listener and a rule using a multi-site listener both on the same port, the rule with the multi-site listener must be listed before the rule with the basic listener in order for the multi-site rule to function as expected.
+Rules are processed in the order they're listed if the rule priority field is not used. Traffic is directed using the first rule that matches regardless of specificity. For example, if you have a rule using a basic listener and a rule using a multi-site listener both on the same port, the rule with the multi-site listener must be listed before the rule with the basic listener in order for the multi-site rule to function as expected.
In this example, you create two new rules and delete the default rule created when you deployed the application gateway. You can add the rule using [az network application-gateway rule create](/cli/azure/network/application-gateway/rule#az_network_application_gateway_rule_create).
az network application-gateway rule delete \
--name rule1 \ --resource-group myResourceGroupAG ```
+### Add priority to routing rules
+
+To ensure that more specific rules are processed first, use the rule priority field to give them higher priority. The rule priority field must be set for all existing request routing rules, and any new rule created later must also have a rule priority value.
+```azurecli-interactive
+az network application-gateway rule create \
+ --gateway-name myAppGateway \
+ --name wccontosoRule \
+ --resource-group myResourceGroupAG \
+ --http-listener wccontosoListener \
+ --rule-type Basic \
+ --priority 200 \
+ --address-pool wccontosoPool
+
+az network application-gateway rule create \
+ --gateway-name myAppGateway \
+ --name shopcontosoRule \
+ --resource-group myResourceGroupAG \
+ --http-listener shopcontosoListener \
+ --rule-type Basic \
+ --priority 100 \
+ --address-pool shopcontosoPool
+
+```
## Create virtual machine scale sets
application-gateway Tutorial Multiple Sites Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/tutorial-multiple-sites-powershell.md
$fabrikamRule = New-AzApplicationGatewayRequestRoutingRule `
-BackendHttpSettings $poolSettings ```
+### Add priority to routing rules
+
+```azurepowershell-interactive
+$contosoRule = New-AzApplicationGatewayRequestRoutingRule `
+ -Name wccontosoRule `
+ -RuleType Basic `
+ -Priority 200 `
+ -HttpListener $wccontosoListener `
+ -BackendAddressPool $wccontosoPool `
+ -BackendHttpSettings $poolSettings
+
+$fabrikamRule = New-AzApplicationGatewayRequestRoutingRule `
+ -Name shopcontosoRule `
+ -RuleType Basic `
+ -Priority 100 `
+ -HttpListener $shopcontosoListener `
+ -BackendAddressPool $shopcontosoPool `
+ -BackendHttpSettings $poolSettings
+```
+ ### Create the application gateway Now that you created the necessary supporting resources, specify parameters for the application gateway using [New-AzApplicationGatewaySku](/powershell/module/az.network/new-azapplicationgatewaysku), and then create it using [New-AzApplicationGateway](/powershell/module/az.network/new-azapplicationgateway).
application-gateway Url Route Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/url-route-overview.md
In the following example, Application Gateway is serving traffic for contoso.com
Requests for http\://contoso.com/video/* are routed to VideoServerPool, and http\://contoso.com/images/* are routed to ImageServerPool. DefaultServerPool is selected if none of the path patterns match. > [!IMPORTANT]
-> For the v1 SKU, rules are processed in the order they are listed in the portal. If a basic listener is listed first and matches an incoming request, it gets processed by that listener. For the v2 SKU, exact matches have higher precedence. However, it is highly recommended to configure multi-site listeners first prior to configuring a basic listener. This ensures that traffic gets routed to the right back end.
+> For both the v1 and v2 SKUs, rules are processed in the order they are listed in the portal. If a basic listener is listed first and matches an incoming request, it gets processed by that listener. However, it is highly recommended to configure multi-site listeners first prior to configuring a basic listener. This ensures that traffic gets routed to the right back end.
## UrlPathMap configuration element
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/operating-system-requirements.md
Title: Azure Automation Update Management Supported Clients
description: This article describes the supported Windows and Linux operating systems with Azure Automation Update Management. Previously updated : 06/29/2021 Last updated : 07/08/2021
The following table lists the supported operating systems for update assessments
|CentOS 6, 7, and 8 (x64) | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). | |Oracle Linux 6.x and 7.x (x64) | Linux agents require access to an update repository. | |Red Hat Enterprise 6, 7, and 8 (x64) | Linux agents require access to an update repository. |
-|SUSE Linux Enterprise Server 12, 15, and 15.1 (x64) | Linux agents require access to an update repository. |
+|SUSE Linux Enterprise Server 12, 15, 15.1, and 15.2 (x64) | Linux agents require access to an update repository. |
|Ubuntu 14.04 LTS, 16.04 LTS, 18.04 LTS, and 20.04 LTS (x64) |Linux agents require access to an update repository. | > [!NOTE]
azure-app-configuration Howto Convert To The New Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-convert-to-the-new-spring-boot.md
+
+ Title: Convert to the Spring Boot Library
+
+description: Learn how to convert to the new App Configuration Spring Boot Library from the previous version.
++++ Last updated : 07/08/2021++
+# Convert to new App Configuration Spring Boot library
+
+A new version of the App Configuration library for Spring Boot is now available. The version introduces new features such as Push refresh, but also a number of breaking changes. These changes aren't backwards compatible with configuration setups that were using the previous library version. For the following topics:
+
+* Group and Artifact Ids
+* Spring Profiles
+* Configuration loading and reloading
+* Feature flag loading
+
+this article provides a reference on the changes and the actions needed to migrate to the new library version.
+
+## Group and Artifact ID changed
+
+All of the Azure Spring Boot libraries have had their Group and Artifact IDs updated to match a new format. The new package names are:
+
+```xml
+<dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-cloud-appconfiguration-config</artifactId>
+ <version>2.0.0-beta.2</version>
+</dependency>
+<dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
+ <version>2.0.0-beta.2</version>
+</dependency>
+```
+
+## Use of Spring Profiles
+
+In the previous release, Spring Profiles were used as part of the configuration so they could match the format of the configuration files. For example,
+
+```properties
+/<application name>_dev/config.message
+```
+
+This has been changed so that the default label(s) in a query are now the Spring Profiles, and the key uses the following format with a label that matches the Spring Profile:
+
+```properties
+/application/config.message
+```
+
+To convert to the new format, you can run the commands below with your store name:
+
+```azurecli
+az appconfig kv export -n your-stores-name -d file --format properties --key /application_dev* --prefix /application_dev/ --path convert.properties --skip-features --yes
+az appconfig kv import -n your-stores-name -s file --format properties --label dev --prefix /application/ --path convert.properties --skip-features --yes
+```
+
+or use the Import/Export feature in the portal.
+
+When you have completely moved to the new version, you can remove the old keys by running:
+
+```azurecli
+az appconfig kv delete -n your-stores-name --key /application_dev/*
+```
+
+This command will list all of the keys you are about to delete so you can verify no unexpected keys will be removed. Keys can also be deleted in the portal.
+
+## Which configurations are loaded
+
+The default case of loading configuration matching `/application/*` hasn't changed. The change is that `/${spring.application.name}/*` is no longer loaded automatically in addition unless you configure it. Instead, to use `/${spring.application.name}/*`, use the new Selects configuration.
+
+```properties
+spring.cloud.azure.appconfiguration.stores[0].selects[0].key-filter=/${spring.application.name}/*
+```
+
+## Configuration reloading
+
+Monitoring of all configuration stores is now disabled by default. A new configuration has been added to the library that allows monitoring to be enabled per config store. In addition, cache-expiration has been renamed to refresh-interval and is now set per config store. Also, if monitoring of a config store is enabled, at least one watched key is required to be configured, with an optional label.
+
+```properties
+spring.cloud.azure.appconfiguration.stores[0].monitoring.enabled
+spring.cloud.azure.appconfiguration.stores[0].monitoring.refresh-interval
+spring.cloud.azure.appconfiguration.stores[0].monitoring.trigger[0].key
+spring.cloud.azure.appconfiguration.stores[0].monitoring.trigger[0].label
+```
+
+There has been no change to how the refresh-interval works; the configuration has been renamed to clarify its functionality. The requirement of a watched key makes sure that when configurations are being changed, the library doesn't attempt to load them until all changes are done.
+
+## Feature flag loading
+
+By default, loading of feature flags is now disabled. In addition, Feature Flags now have a label filter as well as a refresh-interval.
+
+```properties
+spring.cloud.azure.appconfiguration.stores[0].feature-flags.enable
+spring.cloud.azure.appconfiguration.stores[0].feature-flags.label-filter
+spring.cloud.azure.appconfiguration.stores[0].monitoring.feature-flag-refresh-interval
+```
azure-app-configuration Use Key Vault References Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
To add a secret to the vault, you need to take just a few additional steps. In t
## Grant your app access to Key Vault
-Azure App Configuration won't access your Key Vault. Your app will read from Key Vault directly, so you need to grant your app read access to the secrets in your Key Vault. This way, the secret always stays with your app. The access can be granted using either the [Vault access policy ](/azure/key-vault/general/assign-access-policy-portal) or [Azure role-based access control](/azure/key-vault/general/rbac-guide).
+Azure App Configuration won't access your key vault. Your app will read from Key Vault directly, so you need to grant your app read access to the secrets in your key vault. This way, the secret always stays with your app. The access can be granted using either a [Key Vault access policy](../key-vault/general/assign-access-policy-portal.md) or [Azure role-based access control](../key-vault/general/rbac-guide.md).
-You use `DefaultAzureCredential` in your code above. It is an aggregated token credential that tries a number of credential types such as `ManagedIdentityCredential`, `SharedTokenCacheCredential`, `VisualStudioCredential`, etc. automatically. See [DefaultAzureCredential Class](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet) for more information. You can replace it with any credential type explicitly. However, using `DefaultAzureCredential` enables you to have the same code that runs in both local and Azure environments. For example, you grant your own credential access to your Key Vault. The `DefaultAzureCredential` will fall back to `SharedTokenCacheCredential` or `VisualStudioCredential` automatically when you use Visual Studio for local development. After your app is deployed to one of Azure services with managed identity enabled, such as App Service, Azure Kubernetes Service, or Azure Container Instance, you grant the managed identity of the Azure service permission to access to your Key Vault. The `DefaultAzureCredential` will use `ManagedIdentityCredential` automatically when your app is running in Azure. You can leverage the same managed identity to authenticate with both App Configuration and Key Vault. For more information, see [How to use managed identities to access App Configuration](/azure/azure-app-configuration/howto-integrate-azure-managed-service-identity).
+You use `DefaultAzureCredential` in your code above. It's an aggregated token credential that automatically tries a number of credential types, like `EnvironmentCredential`, `ManagedIdentityCredential`, `SharedTokenCacheCredential`, and `VisualStudioCredential`. For more information, see [DefaultAzureCredential Class](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet). You can replace `DefaultAzureCredential` with any credential type explicitly. However, using `DefaultAzureCredential` enables you to have the same code that runs in both local and Azure environments. For example, you grant your own credential access to your key vault. `DefaultAzureCredential` automatically falls back to `SharedTokenCacheCredential` or `VisualStudioCredential` when you use Visual Studio for local development.
+
+Alternatively, you can set the AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_CLIENT_SECRET environment variables, and `DefaultAzureCredential` will use the client secret you have via the `EnvironmentCredential` to authenticate with your key vault. After your app is deployed to an Azure service with managed identity enabled, such as Azure App Service, Azure Kubernetes Service, or Azure Container Instance, you grant the managed identity of the Azure service permission to access your key vault. `DefaultAzureCredential` automatically uses `ManagedIdentityCredential` when your app is running in Azure. You can use the same managed identity to authenticate with both App Configuration and Key Vault. For more information, see [How to use managed identities to access App Configuration](howto-integrate-azure-managed-service-identity.md).
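For example, a minimal sketch of setting those variables in a local shell; the values are placeholders for a service principal that has been granted access to your key vault:

```bash
# Placeholder values for a service principal with read access to the key vault secrets.
export AZURE_TENANT_ID="<your-tenant-id>"
export AZURE_CLIENT_ID="<service-principal-client-id>"
export AZURE_CLIENT_SECRET="<service-principal-client-secret>"
```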
## Build and run the app locally
azure-arc Manage Vm Extensions Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions-template.md
Title: Enable VM extension using Azure Resource Manager template description: This article describes how to deploy virtual machine extensions to Azure Arc enabled servers running in hybrid cloud environments using an Azure Resource Manager template. Previously updated : 04/13/2021 Last updated : 07/08/2021
To use the Azure Defender integrated scanner extension, the following sample is
}, "resources": [ {
- "type": "resourceType/providers/WindowsAgent.AzureSecurityCenter",
+ "type": "Microsoft.HybridCompute/machines/providers/serverVulnerabilityAssessments",
"name": "[concat(parameters('vmName'), '/Microsoft.Security/default')]", "apiVersion": "[parameters('apiVersionByEnv')]" }
To use the Azure Defender integrated scanner extension, the following sample is
}, "resources": [ {
- "type": "resourceType/providers/LinuxAgent.AzureSecurityCenter",
+ "type": "Microsoft.HybridCompute/machines/providers/serverVulnerabilityAssessments",
"name": "[concat(parameters('vmName'), '/Microsoft.Security/default')]", "apiVersion": "[parameters('apiVersionByEnv')]" }
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
No, your cache name and keys are unchanged during a scaling operation.
### How does scaling work?
-* When a **Basic** cache is scaled to a different size, it's shut down and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
-* When a **Basic** cache is scaled to a **Standard** cache, a replica cache is provisioned and the data is copied from the primary cache to the replica cache. The cache remains available during the scaling process.
-* When a **Standard** cache is scaled to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes.
+* When you scale a **Basic** cache to a different size, it's shut down and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
+* When you scale a **Basic** cache to a **Standard** cache, a replica cache is provisioned and the data is copied from the primary cache to the replica cache. The cache remains available during the scaling process.
+* When you scale a **Standard** cache to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes.
* When you scale out a clustered cache, new shards are provisioned and added to the Redis server cluster. Data is then resharded across all shards. * When you scale in a clustered cache, data is first resharded and then cluster size is reduced to required shards.
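As an illustration, a scale operation can also be started from the command line. A hedged sketch, assuming the `az redis update` command exposes `--sku` and `--vm-size`, with placeholder names:

```azurecli
# Scale an existing cache to Standard C1 (placeholder names; assumes --sku and --vm-size are available).
az redis update --name myCache --resource-group myResourceGroup --sku Standard --vm-size c1
```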
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-monitoring.md
In addition to automatic dependency data collection, you can also use one of the
+ [Log custom telemetry in C# functions](functions-dotnet-class-library.md#log-custom-telemetry-in-c-functions) + [Log custom telemetry in JavaScript functions](functions-reference-node.md#log-custom-telemetry) ++ [Log custom telemetry in Python functions](functions-reference-python.md#log-custom-telemetry) ## Writing to logs
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-indoor-module.md
To use the globally hosted Azure Content Delivery Network version of the *Azure
2. Reference the *Azure Maps Indoor* module JavaScript and Style Sheet in the `<head>` element of the HTML file: ```html
- <link rel="stylesheet" href="node_modules/azure-maps-drawing-tools/dist/atlas-indoor.min.css" type="text/css" />
- <script src="node_modules/azure-maps-drawing-tools/dist/atlas-indoor.min.js"></script>
+ <link rel="stylesheet" href="node_modules/azure-maps-indoor/dist/atlas-indoor.min.css" type="text/css" />
+ <script src="node_modules/azure-maps-indoor/dist/atlas-indoor.min.js"></script>
``` ## Instantiate the Map object
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 04/08/2021 Last updated : 07/08/2021 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |
-> | workspaces | resource group | 3-30 | Alphanumerics, underscores, and hyphens |
+> | workspaces | resource group | 3-64 | Alphanumerics, underscores, and hyphens |
## Microsoft.DataFactory
azure-sql Connect Application Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connect-application-instance.md
Previously updated : 02/25/2021 Last updated : 07/08/2021 # Connect your application to Azure SQL Managed Instance
There are two options for connecting virtual networks:
Peering is preferable because it uses the Microsoft backbone network, so from the connectivity perspective, there is no noticeable difference in latency between virtual machines in a peered virtual network and in the same virtual network. Virtual network peering is to supported between the networks in the same region. Global virtual network peering is also supported with the limitation described in the note below. > [!IMPORTANT]
-> [On 9/22/2020 we announced global virtual network peering for newly created virtual clusters](https://azure.microsoft.com/en-us/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). That means that global virtual network peering is supported for SQL Managed Instances created in empty subnets after the announcement date, as well for all the subsequent managed instances created in those subnets. For all the other SQL Managed Instances peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details.
+> [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/en-us/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). This means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well as for all the subsequent managed instances created in those subnets. For all other SQL managed instances, peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring a [maintenance window](../database/maintenance-window.md) on the instances, as it will move the instances into new virtual clusters that support global virtual network peering.
## Connect from on-premises
azure-video-analyzer Detect Motion Record Video Clips Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/detect-motion-record-video-clips-cloud.md
When you use run this quickstart, events will be sent to the IoT Hub. To see the
1. Expand the **Devices** node. 1. Right-click on `avasample-iot-edge-device`, and select **Start Monitoring Built-in Event Endpoint**.
- > [!NOTE]
- > You might be asked to provide Built-in endpoint information for the IoT Hub. To get that information, in Azure portal, navigate to your IoT Hub and look for **Built-in endpoints** option in the left navigation pane. Click there and look for the **Event Hub-compatible endpoint** under **Event Hub compatible endpoint** section. Copy and use the text in the box. The endpoint will look something like this:
- ```
- Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
- ```
+ [!INCLUDE [provide-builtin-endpoint](./includes/common-includes/provide-builtin-endpoint.md)]
## Use direct method calls to analyze live video
azure-video-analyzer Faq Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/faq-edge.md
Solutions vary depending on the communication protocol that's used by the infere
In your Video Analyzer topology, you instantiate two live pipelines with different inference URLs, as shown here:
- 1st live pipeline: inference server URL = `http://avaextension1:44001/score`
+ 1st live pipeline: inference server URL = `http://avaextension1:44000/score`
2nd live pipeline: inference server URL = `http://avaextension2:44001/score` *Use the gRPC protocol*:
azure-video-analyzer Get Started Detect Motion Emit Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/get-started-detect-motion-emit-events.md
When you run this quickstart, events will be sent to the IoT Hub. To see the
1. Expand the **Devices** node. 1. Right-click on `avasample-iot-edge-device`, and select **Start Monitoring Built-in Event Endpoint**.
- > [!NOTE]
- > You might be asked to provide Built-in endpoint information for the IoT Hub. To get that information, in Azure portal, navigate to your IoT Hub and look for **Built-in endpoints** option in the left navigation pane. Click there and look for the **Event Hub-compatible endpoint** under **Event Hub compatible endpoint** section. Copy and use the text in the box. The endpoint will look something like this:
- ```
- Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
- ```
+ [!INCLUDE [provide-builtin-endpoint](./includes/common-includes/provide-builtin-endpoint.md)]
## Use direct method calls
azure-video-analyzer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/overview.md
To learn about compliance, privacy and security in Video Analyzer visit the Micr
## Next steps
-* Follow the [Quickstart: Get started - Azure Video Analyzer](get-started-detect-motion-emit-events.md) article to see how you can run motion detection on a live video feed.
+* Follow the [Quickstart: Get started with Azure Video Analyzer](get-started-detect-motion-emit-events.md) article to see how you can run motion detection on a live video feed.
* Review [terminology](terminology.md)
azure-video-analyzer Record Event Based Live Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-event-based-live-video.md
This step creates the IoT Edge deployment manifest at src/edge/config/deployment
If this is your first tutorial with Video Analyzer, Visual Studio Code prompts you to input the IoT Hub connection string. You can copy it from the appsettings.json file.
-> [!NOTE]
-> You might be asked to provide Built-in endpoint information for the IoT Hub. To get that information, in Azure portal, navigate to your IoT Hub and look for **Built-in endpoints** option in the left navigation pane. Click there and look for the **Event Hub-compatible endpoint** under **Event Hub compatible endpoint** section. Copy and use the text in the box. The endpoint will look something like this:
- ```
- Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
- ```
Next, Visual Studio Code asks you to select an IoT Hub device. Select your IoT Edge device, which should be avasample-iot-edge-device.
backup Backup Mabs Protection Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-protection-matrix.md
Azure Backup Server can protect cluster workloads that are located in the same d
* File Server * Hyper-V
- These workloads can be running on a single server or in a cluster configuration. To protect a workload that isn't in a trusted domain, see [Prepare computers in workgroups and untrusted domains](/system-center/dpm/prepare-environment-for-dpm) for exact details of what's supported and what authentication is required.
+ These workloads can be running on a single server or in a cluster configuration. To protect a workload that isn't in a trusted domain, see [Prepare computers in workgroups and untrusted domains](/system-center/dpm/back-up-machines-in-workgroups-and-untrusted-domains?view=sc-dpm-2019&preserve-view=true#supported-scenarios) for exact details of what's supported and what authentication is required.
## Unsupported data types
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-overview.md
# Migrate Azure Cloud Services (classic) to Azure Cloud Services (extended support)
+This document provides an overview of migrating Cloud Services (classic) to Cloud Services (extended support). Cloud Services (extended support) provides two paths for customers to migrate from Azure Service Manager to Azure Resource Manager:
+
+- **Redeploy**: Customers can deploy a new cloud service directly in Azure Resource Manager and then delete the old cloud service in Azure Service Manager after thorough validation. Redeploy provides more control and a self-paced migration. Choose this path if you want control over how the new service components are named and organized, and over the pace of building new services and deleting the old deployments.
+
+- **In-place migration**: The in-place migration tool enables a seamless, platform-orchestrated migration of existing Cloud Services (classic) deployments to Cloud Services (extended support). In-place migration provides less control but is faster paced. Choose this option if you'd like the platform to define the basic settings for you and orchestrate a quick migration. The Cloud Services (classic) resources are deleted as soon as the migration completes successfully.
+
+## Redeploy Overview
+
+A new Cloud Service (extended support) can be deployed directly in Azure Resource Manager using the following client tools:
+
+- [Deploy a cloud service – Portal](deploy-portal.md)
+- [Deploy a cloud service – PowerShell](deploy-powershell.md)
+- [Deploy a cloud service – Template](deploy-template.md)
+- [Deploy a cloud service – SDK](deploy-sdk.md)
+- [Deploy a cloud service – Visual Studio](/visualstudio/azure/cloud-services-extended-support?context=%2fazure%2fcloud-services-extended-support%2fcontext%2fcontex)
++
+## Migration tool Overview
+ This article provides an overview of the platform-supported migration tool and how to use it to migrate [Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) to [Azure Cloud Services (extended support)](overview.md). The migration tool utilizes the same APIs and has the same experience as the [Virtual Machine (classic) migration](../virtual-machines/migration-classic-resource-manager-overview.md).
To perform this migration, you must be added as a coadministrator for the subscr
## How is migration for Cloud Services (classic) different from Virtual Machines (classic)? Azure Service Manager supports two different compute products, [Azure Virtual Machines (classic)](/previous-versions/azure/virtual-machines/windows/classic/tutorial-classic) and [Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) or Web/ Worker roles. The two products differ based on the deployment type that lies within the Cloud Service. Azure Cloud Services (classic) uses Cloud Service containing deployments with Web/Worker roles. Azure Virtual Machines (classic) uses a cloud service containing deployments with IaaS VMs.
-The list of supported scenarios differ between Cloud Services (classic) and Virtual Machines (classic) because of differences in the deployment types.
+The list of supported scenarios differs between Cloud Services (classic) and Virtual Machines (classic) because of differences in the deployment types.
## Migration steps
cognitive-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/label-tool.md
You will need an Azure subscription ([create one for free](https://azure.microso
You'll use the Docker engine to run the sample labeling tool. Follow these steps to set up the Docker container. For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/). > [!TIP]
-> The OCR Form Labeling Tool is also available as an open source project on GitHub. The tool is a TypeScript web application built using React + Redux. To learn more or contribute, see the [OCR Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md#run-as-web-application) repo. To try out the tool online, go to the [FOTT website](https://fott.azurewebsites.net/).
+> The OCR Form Labeling Tool is also available as an open source project on GitHub. The tool is a TypeScript web application built using React + Redux. To learn more or contribute, see the [OCR Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md#run-as-web-application) repo. To try out the tool online, go to the [FOTT website](https://fott-2-1.azurewebsites.net/).
1. First, install Docker on a host computer. This guide will show you how to use your local computer as a host. If you want to use a Docker hosting service in Azure, see the [Deploy the sample labeling tool](deploy-label-tool.md) how-to guide.
cognitive-services Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/cost-management.md
+
+ Title: Cost management with Azure Metrics Advisor
+
+description: Learn about cost management and pricing for Azure Metrics Advisor
+++++ Last updated : 07/06/2021+++
+# Azure Metrics Advisor cost management
+
+Azure Metrics Advisor monitors the performance of your organization's growth engines, including sales revenue and manufacturing operations. Quickly identify and fix problems through a powerful combination of monitoring in near-real time, adapting models to your scenario, and offering granular analysis with diagnostics and alerting. You will only be charged for the time series that are analyzed by the service. There's no up-front commitment or minimum fee.
+
+> [!NOTE]
+> This article discusses how pricing is calculated to assist you with planning and cost management when using Azure Metrics Advisor. The prices in this article do not reflect actual prices and are for example purposes only. For the latest pricing information, refer to the [official pricing page for Metrics Advisor](https://azure.microsoft.com/pricing/details/metrics-advisor/).
+
+## Key points about cost management and pricing
+
+- You will be charged for the number of **distinct time series** analyzed during a month. A time series counts toward the total even if only one of its data points is analyzed.
+- The number of distinct time series is counted **irrespective** of granularity. An hourly time series and a daily time series are charged at the same price.
+- You will be charged based on the tiered pricing structure listed below. A new statistics window starts on the first day of each month.
+- The more time series you onboard to the service for analysis, the lower the price you pay per time series.
+
+**Again, keep in mind that the prices below are for example purposes only.** For the latest pricing information, consult the [official pricing page for Metrics Advisor](https://azure.microsoft.com/pricing/details/metrics-advisor/).
+
+| Analyzed time series /month| $ per time series |
+|--|--|
+| Free: first 25 time series | $- |
+| 26 time series - 1k time series | $0.75 |
+| 1k time series - 5k time series | $0.50 |
+| 5k time series - 20k time series | $0.25|
+| 20k time series - 50k time series| $0.10|
+| >50k time series | $0.05 |
++
+To help you get a basic understanding of Metrics Advisor and start exploring the service, an included free amount lets you analyze up to 25 time series at no charge.
+
+## Pricing examples
+
+### Example 1
+<!-- introduce statistic window-->
+
+Suppose that in month 1 a customer onboards a data feed with 25 time series during the first week, and then onboards another data feed with 30 time series in the second week. In the third week, they delete the 30 time series that were onboarded during the second week. In total, **55** distinct time series were analyzed in month 1. The customer is charged for **30** of them (excluding the 25 time series in the free tier), which falls under tier 1. The monthly cost is: 30 * $0.75 = **$22.50**.
+
+| Volume tier | $ per time series | $ per month |
+| | -- | -- |
+| First 30 (55-25) time series | $0.75 | $22.50 |
+| **Total = 30 time series** | | **$22.50 per month** |
+
+In month 2, the customer doesn't onboard or delete any time series, so 25 time series are analyzed in month 2, all within the free tier. No charge is incurred.
+
+### Example 2
+<!-- introduce how time series is calculated-->
+
+A business planner needs to track the company's revenue as an indicator of business health. Because there's usually a week-by-week pattern, the customer onboards the metric into Metrics Advisor to analyze anomalies. Metrics Advisor learns the pattern from historical data and performs detection on follow-up data points. A sudden drop might be detected as an anomaly, which may indicate an underlying issue, like a service outage or a promotional offer not working as expected. An unexpected spike might also be detected as an anomaly, which may indicate a highly successful marketing campaign or a significant customer win.
+
+The metric is analyzed across **100 product categories** and **10 regions**, so the number of distinct time series being analyzed is calculated as:
+
+```
+1(Revenue) * 100 product categories * 10 regions = 1,000 analyzed time series
+```
+
+Based on the tiered pricing model described above, 1,000 analyzed time series per month is charged at (1,000 - 25) * $0.75 = **$731.25**.
+
+| Volume tier | $ per time series | $ per month |
+| | -- | -- |
+| First 975 (1,000-25) time series | $0.75 | $731.25 |
+| **Total = 975 time series** | | **$731.25 per month** |
+
+### Example 3
+<!-- introduce cost for multiple metrics and -->
+
+After validating detection results on the revenue metric, the customer would like to onboard two more metrics for analysis: cost, and DAU (daily active users) of their website. They would also like to add a new dimension with **20 channels**. Within the month, 10 of the 100 product categories are discontinued after the first week and are not analyzed further. In addition, 10 new product categories are introduced in the third week of the month, and the corresponding time series are analyzed for half of the month. The number of distinct time series being analyzed is then calculated as:
+
+```
+3(Revenue, cost and DAU) * 110 product categories * 10 regions * 20 channels = 66,000 analyzed time series
+```
+
+Based on the tiered pricing model described above, 66,000 analyzed time series per month reach tier 5 (>50k) and will be charged **$10,281.25** in total.
+
+| Volume tier | $ per time series | $ per month |
+| | -- | -- |
+| First 975 (1,000-25) time series | $0.75 | $731.25 |
+| Next 4,000 time series | $0.50 | $2,000 |
+| Next 15,000 time series | $0.25 | $3,750 |
+| Next 30,000 time series | $0.10 | $3,000 |
+| Next 16,000 time series | $0.05 | $800 |
+| **Total = 65,975 time series** | | **$10,281.25 per month** |
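+
+To make the tiered calculation above easier to follow, here's a minimal Python sketch that reproduces the totals from the three examples. The tier boundaries and prices mirror the illustrative table in this article, not actual pricing.
+
+```python
+# Illustrative only: tier boundaries and prices mirror the example table above,
+# not actual Metrics Advisor pricing.
+TIERS = [
+    (25, 0.00),             # free: first 25 time series
+    (1_000, 0.75),          # 26 - 1k
+    (5_000, 0.50),          # 1k - 5k
+    (20_000, 0.25),         # 5k - 20k
+    (50_000, 0.10),         # 20k - 50k
+    (float("inf"), 0.05),   # > 50k
+]
+
+def monthly_cost(analyzed_series: int) -> float:
+    """Return the example monthly charge for a count of distinct analyzed time series."""
+    cost, lower = 0.0, 0
+    for upper, price in TIERS:
+        cost += max(0, min(analyzed_series, upper) - lower) * price
+        lower = upper
+    return cost
+
+print(monthly_cost(55))      # Example 1: 30 billable series -> 22.50
+print(monthly_cost(1_000))   # Example 2: 975 billable series -> 731.25
+print(monthly_cost(66_000))  # Example 3: 65,975 billable series -> 10281.25
+```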
+
+## Next steps
+
+- [Manage your data feeds](how-tos/manage-data-feeds.md)
+- [Configurations for different data sources](data-feeds-from-different-sources.md)
+- [Configure metrics and fine tune detection configuration](how-tos/configure-metrics.md)
++
cognitive-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/data-feeds-from-different-sources.md
Use this article to find the settings and requirements for connecting different
|[**Azure Cosmos DB (SQL)**](#cosmosdb) | Basic | |[**Azure Data Explorer (Kusto)**](#kusto) | Basic<br>Managed Identity<br>Service principal<br>Service principal from key vault | |[**Azure Data Lake Storage Gen2**](#adl) | Basic<br>Data Lake Gen2 Shared Key<br>Service principal<br>Service principal from key vault |
+|[**Azure Event Hubs**](#eventhubs) | Basic |
|[**Azure Log Analytics**](#log) | Basic<br>Service principal<br>Service principal from key vault | |[**Azure SQL Database / SQL Server**](#sql) | Basic<br>Managed Identity<br>Service principal<br>Service principal from key vault<br>Azure SQL Connection String | |[**Azure Table Storage**](#table) | Basic |
The following sections specify the parameters required for all authentication ty
* **Blob Template**: Metrics Advisor uses the path to find the JSON file in your Blob storage. This is an example of a Blob file template, which is used to find the JSON file in your Blob storage: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`. `%Y/%m` is the path; if you have `%d` in your path, you can add it after `%m`. If your JSON file is named by date, you could also use `%Y-%m-%d-%h-%M.json`. The following parameters are supported:
- * `%Y` is the year formatted as `yyyy`
- * `%m` is the month formatted as `MM`
- * `%d` is the day formatted as `dd`
- * `%h` is the hour formatted as `HH`
- * `%M` is the minute formatted as `mm`
+
+ * `%Y` is the year formatted as `yyyy`
+ * `%m` is the month formatted as `MM`
+ * `%d` is the day formatted as `dd`
+ * `%h` is the hour formatted as `HH`
+ * `%M` is the minute formatted as `mm`
- For example, in the following dataset, the blob template should be "%Y/%m/%d/00/JsonFormatV2.json".
+ For example, in the following dataset, the blob template should be "%Y/%m/%d/00/JsonFormatV2.json".
- ![blob template](media/blob-template.png)
+ ![blob template](media/blob-template.png)
* **JSON format version**: Defines the data schema in the JSON files. Currently, Metrics Advisor supports two versions; you can choose one to fill in the field:
- * **v1** (Default value)
+ * **v1** (Default value)
Only the metrics *Name* and *Value* are accepted. For example:
The following sections specify the parameters required for all authentication ty
{"count":11, "revenue":1.23} ```
- * **v2**
+ * **v2**
The metrics *Dimensions* and *timestamp* are also accepted. For example:
The following sections specify the parameters required for all authentication ty
] ```
- Only one timestamp is allowed per JSON file.
+ Only one timestamp is allowed per JSON file.
## <span id="cosmosdb">Azure Cosmos DB (SQL)</span>
The following sections specify the parameters required for all authentication ty
The account name is the same as **Basic** authentication type.
- **Step1:** Create and register an Azure AD application and then authorize it to access database, see detail in [Create an AAD app registration](/azure/data-explorer/provision-azure-ad-app) documentation.
+ **Step 1:** Create and register an Azure AD application, and then authorize it to access the database. For details, see the [Create an AAD app registration](/azure/data-explorer/provision-azure-ad-app) documentation.
- **Step2:** Assign roles.
+ **Step 2:** Assign roles.
+
1. In the Azure portal, go to the **Storage accounts** service. 2. Select the ADLS Gen2 account to use with this application registration.
The following sections specify the parameters required for all authentication ty
4. Click **+ Add** and select **Add role assignment** from the dropdown menu. 5. Set the **Select** field to the Azure AD application name and set role to **Storage Blob Data Contributor**. Click **Save**.
+
![lake-service-principals](media/datafeeds/adls-gen-2-app-reg-assign-roles.png) **Step 3:** [Create a credential entity](how-tos/credential-entity.md) in Metrics Advisor, so that you can choose that entity when adding a data feed with the Service Principal authentication type.
-
- * **Service Principal From Key Vault** authentication type: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) to follow detailed procedure to set service principal from key vault.
- The account name is the same as *Basic* authentication type.
-
+ * **Service Principal From Key Vault** authentication type: Key Vault helps to safeguard cryptographic keys and secret values that cloud apps and services use. By using Key Vault, you can encrypt keys and secret values. You should create a service principal first, and then store the service principal inside Key Vault. You can go through [Create a credential entity for Service Principal from Key Vault](how-tos/credential-entity.md#sp-from-kv) for the detailed procedure to set up a service principal from Key Vault. The account name is the same as for the *Basic* authentication type.
-* **Account Key**(only *Basic* needs): Specify the account key to access your Azure Data Lake Storage Gen2. This could be found in Azure Storage Account (Azure Data Lake Storage Gen2) resource in **Access keys** setting.
+* **Account Key** (required only for *Basic* authentication): Specify the account key to access your Azure Data Lake Storage Gen2. You can find it in your Azure Storage account (Azure Data Lake Storage Gen2) resource, under the **Access keys** setting.
* **File System Name (Container)**: Metrics Advisor will expect your time series data stored as Blob files (one Blob per timestamp) under a single container. This is the container name field. You can find it in your Azure Storage account (Azure Data Lake Storage Gen2) instance: select **Containers** in the **Data Lake Storage** section to see the container name.
-* **Directory Template**:
- This is the directory template of the Blob file.
- The following parameters are supported:
- * `%Y` is the year formatted as `yyyy`
- * `%m` is the month formatted as `MM`
- * `%d` is the day formatted as `dd`
- * `%h` is the hour formatted as `HH`
- * `%M` is the minute formatted as `mm`
+* **Directory Template**: This is the directory template of the Blob file. The following parameters are supported:
+
+ * `%Y` is the year formatted as `yyyy`
+ * `%m` is the month formatted as `MM`
+ * `%d` is the day formatted as `dd`
+ * `%h` is the hour formatted as `HH`
+ * `%M` is the minute formatted as `mm`
Query sample for a daily metric: `%Y/%m/%d`. Query sample for an hourly metric: `%Y/%m/%d/%h`.
-
- * **File Template**: Metrics Advisor uses path to find the json file in your Blob storage. This is an example of a Blob file template, which is used to find the json file in your Blob storage: `%Y/%m/FileName_%Y-%m-%d-%h-%M.json`. `%Y/%m` is the path, if you have `%d` in your path, you can add after `%m`.
+
The following parameters are supported:
+
* `%Y` is the year formatted as `yyyy` * `%m` is the month formatted as `MM` * `%d` is the day formatted as `dd`
The following sections specify the parameters required for all authentication ty
{"date": "2018-01-01T00:00:00Z", "market":"zh-cn", "count":22, "revenue":4.56} ] ```
-<!--
+ ## <span id="eventhubs">Azure Event Hubs</span>
-* **Connection String**: This can be found in 'Shared access policies' in your Event Hubs instance. Also for the 'EntityPath', it could be found by clicking into your Event Hubs instance and clicking at 'Event Hubs' in 'Entities' blade. Items that listed can be input as EntityPath.
+
+* **Limitations**: There are some limitations with the Metrics Advisor Event Hubs integration.
+
+ * Metrics Advisor Event Hubs integration doesn't currently support more than 3 active data feeds in one Metrics Advisor instance in public preview.
+ * Metrics Advisor will always start consuming messages from the latest offset, including when re-activating a paused data feed.
+
+ * Messages during the data feed pause period will be lost.
+   * The data feed ‘ingestion start time’ is set to the current UTC timestamp automatically when the data feed is created, and is for reference purposes only.
+
+   * Only one data feed can be used per consumer group. To reuse a consumer group from another deleted data feed, you need to wait at least 10 minutes after deletion.
+ * The connection string and consumer group cannot be modified after the data feed is created.
+ * About messages in Event Hubs: Only JSON is supported, and the JSON values cannot be a nested JSON object. The top-level element can be a JSON object or a JSON array.
+
+   Valid messages are as follows, either a single JSON object or a JSON array of objects:
+
+ ``` JSON
+ {
+ "metric_1": 234,
+ "metric_2": 344,
+ "dimension_1": "name_1",
+ "dimension_2": "name_2"
+ }
+ ```
+
+ ``` JSON
+ [
+ {
+ "timestamp": "2020-12-12T12:00:00", "temperature": 12.4,
+ "location": "outdoor"
+ },
+ {
+ "timestamp": "2020-12-12T12:00:00", "temperature": 24.8,
+ "location": "indoor"
+ }
+ ]
+ ```
++
+* **Connection String**: Navigate to the **Event Hubs Instance** first. Then add a new policy or choose an existing Shared access policy. Copy the connection string in the pop-up panel.
+ ![eventhubs](media/datafeeds/entities-eventhubs.jpg)
+
+ ![shared access policies](media/datafeeds/shared-access-policies.jpg)
+
+ Here's an example of a connection string:
+ ```
+ Endpoint=<Server>;SharedAccessKeyName=<SharedAccessKeyName>;SharedAccessKey=<SharedAccess Key>;EntityPath=<EntityPath>
+ ```
+ * **Consumer Group**: A [consumer group](https://docs.microsoft.com/azure/event-hubs/event-hubs-features#consumer-groups) is a view (state, position, or offset) of an entire event hub.
-Event Hubs use the latest offset of a consumer group to consume (subscribe from) the data from data source. Therefore a dedicated consumer group should be created for one data feed in your Metrics Advisor instance.
-* **Timestamp**: Metrics Advisor uses the Event Hubs timestamp as the event timestamp if the user data source does not contain a timestamp field.
-The timestamp field must match one of these two formats:
-* "YYYY-MM-DDTHH:MM:SSZ" format;
-* * Number of seconds or milliseconds from the epoch of 1970-01-01T00:00:00Z.
- No matter which timestamp field it left aligns to granularity.For example, if timestamp is "2019-01-01T00:03:00Z", granularity is 5 minutes, then Metrics Advisor aligns the timestamp to "2019-01-01T00:00:00Z". If the event timestamp is "2019-01-01T00:10:00Z", Metrics Advisor uses the timestamp directly without any alignment.
>
+This can be found on the "Consumer Groups" menu of an Azure Event Hubs instance. A consumer group can only serve one data feed; otherwise, onboarding and ingestion will fail. It is recommended that you create a new consumer group for each data feed.
+* **Timestamp** (optional): Metrics Advisor uses the Event Hubs timestamp as the event timestamp if the user data source does not contain a timestamp field. The timestamp field is optional. If no timestamp column is chosen, the enqueued time is used as the timestamp.
+
+ The timestamp field must match one of these two formats:
+
+ * "YYYY-MM-DDTHH:MM:SSZ" format;
+ * Number of seconds or milliseconds from the epoch of 1970-01-01T00:00:00Z.
+   No matter which format the timestamp field uses, it will be left-aligned to the granularity. For example, if the timestamp is "2019-01-01T00:03:00Z" and the granularity is 5 minutes, then Metrics Advisor aligns the timestamp to "2019-01-01T00:00:00Z". If the event timestamp is "2019-01-01T00:10:00Z", Metrics Advisor uses the timestamp directly without any alignment.
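+
+To make the alignment behavior concrete, here's a minimal sketch (illustration only, not Metrics Advisor code) that left-aligns timestamps to a 5-minute granularity:
+
+```python
+from datetime import datetime, timedelta
+
+def align_to_granularity(ts: datetime, granularity: timedelta) -> datetime:
+    # Left-align a timestamp to the data feed granularity (illustration only).
+    epoch = datetime(1970, 1, 1)
+    return epoch + (ts - epoch) // granularity * granularity
+
+five_minutes = timedelta(minutes=5)
+print(align_to_granularity(datetime(2019, 1, 1, 0, 3), five_minutes))   # 2019-01-01 00:00:00
+print(align_to_granularity(datetime(2019, 1, 1, 0, 10), five_minutes))  # 2019-01-01 00:10:00 (already aligned)
+```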
+ ## <span id="log">Azure Log Analytics</span> There are three authentication types for Azure Log Analytics: **Basic**, **Service Principal**, and **Service Principal From KeyVault**. * **Basic**: You need to fill in **Tenant ID**, **Client ID**, **Client Secret**, **Workspace ID**. To get **Tenant ID**, **Client ID**, **Client Secret**, see [Register app or web API](../../active-directory/develop/quickstart-register-app.md).
+
* **Tenant ID**: Specify the tenant ID to access your Log Analytics. * **Client ID**: Specify the client ID to access your Log Analytics. * **Client Secret**: Specify the client secret to access your Log Analytics.
cognitive-services Text Offsets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/text-offsets.md
In .NET, consider using the [StringInfo](/dotnet/api/system.globalization.string
The Text Analytics API returns these textual elements as well, for convenience.
-## Offsets in API version 3.1-preview
+## Offsets in API version 3.1
In version 3.1 of the API, all Text Analytics API endpoints that return an offset will support the `stringIndexType` parameter. This parameter adjusts the `offset` and `length` attributes in the API output to match the requested string iteration scheme. Currently, we support three types:
cognitive-services Text Analytics For Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-for-health.md
Previously updated : 06/16/2021 Last updated : 06/18/2021
-# How to: Use Text Analytics for health (preview)
+# How to: Use Text Analytics for health
> [!IMPORTANT]
-> Text Analytics for health is a preview capability provided ΓÇ£AS ISΓÇ¥ and ΓÇ£WITH ALL FAULTS.ΓÇ¥ As such, Text Analytics for health (preview) should not be implemented or deployed in any production use. Text Analytics for health is not intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability is not designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Text Analytics for health. The Customer must separately license any and all source vocabularies it intends to use under the terms set for that [UMLS Metathesaurus License Agreement Appendix](https://www.nlm.nih.gov/research/umls/knowledge_sources/metathesaurus/release/license_agreement_appendix.html) or any future equivalent link. The Customer is responsible for ensuring compliance with those license terms, including any geographic or other applicable restrictions.
+> Text Analytics for health is a capability provided “AS IS” and “WITH ALL FAULTS.” Text Analytics for health is not intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability is not designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Text Analytics for health. The customer must separately license any and all source vocabularies it intends to use under the terms set for that [UMLS Metathesaurus License Agreement Appendix](https://www.nlm.nih.gov/research/umls/knowledge_sources/metathesaurus/release/license_agreement_appendix.html) or any future equivalent link. The customer is responsible for ensuring compliance with those license terms, including any geographic or other applicable restrictions.
Text Analytics for health is a feature of the Text Analytics API service that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records. There are two ways to utilize this service:
The latest prerelease of the Text Analytics client library enables you to call T
You must have JSON documents in this format: ID, text, and language.
-Document size must be under 5,120 characters per document. For the maximum number of documents permitted in a collection, see the [data limits](../concepts/data-limits.md?tabs=version-3) article under Concepts. The collection is submitted in the body of the request. If your text exceeds this limit, consider splitting the text into separate requests. For best results, split text between sentences.
+Document size must be under 5,120 characters per document. For the maximum number of documents permitted in a collection, see the [data limits](../concepts/data-limits.md?tabs=version-3) article under Concepts. The collection is submitted in the body of the request. If your text exceeds this limit, consider splitting the text into separate requests. For best results, split text between sentences.
### Structure the API request for the hosted asynchronous web API
-For both the container and hosted web API, you must create a POST request. You can [use Postman](text-analytics-how-to-call-api.md), a cURL command or the **API testing console** in the [Text Analytics for health hosted API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-5/operations/Health) to quickly construct and send a POST request to the hosted web API in your desired region. In the API v3.1-preview.5 endpoint, the `loggingOptOut` boolean query parameter can be used to enable logging for troubleshooting purposes. It's default is TRUE if not specified in the request query.
+For both the container and hosted web API, you must create a POST request. You can [use Postman](text-analytics-how-to-call-api.md), a cURL command, or the **API testing console** in the [Text Analytics for health hosted API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Health) to quickly construct and send a POST request to the hosted web API in your desired region. In the API v3.1 endpoint, the `loggingOptOut` boolean query parameter can be used to enable logging for troubleshooting purposes. Its default is `true` if not specified in the request query.
-Send the POST request to `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/entities/health/jobs`
+Send the POST request to `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/health/jobs`
Below is an example of a JSON file attached to the Text Analytics for health API request's POST body: ```json
example.json
Since this POST request is used to submit a job for the asynchronous operation, there is no text in the response object. However, you need the value of the operation-location KEY in the response headers to make a GET request to check the status of the job and the output. Below is an example of the value of the operation-location KEY in the response header of the POST request:
-`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/entities/health/jobs/<jobID>`
+`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/health/jobs/<jobID>`
To check the job status, make a GET request to the URL in the value of the operation-location KEY header of the POST response. The following states are used to reflect the status of a job: `NotStarted`, `running`, `succeeded`, `failed`, `rejected`, `cancelling`, and `cancelled`.
-You can cancel a job with a `NotStarted` or `running` status with a DELETE HTTP call to the same URL as the GET request. More information on the DELETE call is available in the [Text Analytics for health hosted API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-5/operations/CancelHealthJob).
+You can cancel a job with a `NotStarted` or `running` status with a DELETE HTTP call to the same URL as the GET request. More information on the DELETE call is available in the [Text Analytics for health hosted API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/CancelHealthJob).
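+A minimal sketch of this submit-and-poll flow, using Python's `requests` library, is shown below. The resource name, key, and sample document text are placeholders for illustration; the endpoint path, `operation-location` header, and job states are those described above.
+
+```python
+import time
+import requests
+
+# Placeholders for illustration: replace with your own resource name and key.
+endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com"
+headers = {
+    "Ocp-Apim-Subscription-Key": "<your-text-analytics-key>",
+    "Content-Type": "application/json",
+}
+body = {"documents": [
+    {"id": "1", "language": "en",
+     "text": "The patient was given 100 mg of ibuprofen, administered intravenously."}
+]}
+
+# Submit the job; the job URL is returned in the operation-location response header.
+submit = requests.post(
+    f"{endpoint}/text/analytics/v3.1/entities/health/jobs",
+    headers=headers, json=body)
+job_url = submit.headers["operation-location"]
+
+# Poll the job until it reaches a terminal state, then print the output.
+while True:
+    job = requests.get(job_url, headers=headers).json()
+    if job["status"] in ("succeeded", "failed", "rejected", "cancelled"):
+        break
+    time.sleep(2)
+print(job)
+```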
The following is an example of the response of a GET request. The output is available for retrieval until the `expirationDateTime` (24 hours from the time the job was created) has passed after which the output is purged. ```json {
- "jobId": "be437134-a76b-4e45-829e-9b37dcd209bf",
- "lastUpdateDateTime": "2021-03-11T05:43:37Z",
- "createdDateTime": "2021-03-11T05:42:32Z",
- "expirationDateTime": "2021-03-12T05:42:32Z",
+ "jobId": "69081148-055b-4f92-977d-115df343de69",
+ "lastUpdateDateTime": "2021-07-06T19:06:03Z",
+ "createdDateTime": "2021-07-06T19:05:41Z",
+ "expirationDateTime": "2021-07-07T19:05:41Z",
"status": "succeeded", "errors": [], "results": {
The following is an example of the response of a GET request. The output is ava
"length": 13, "text": "intravenously", "category": "MedicationRoute",
- "confidenceScore": 1.0
+ "confidenceScore": 0.99
}, { "offset": 73, "length": 7, "text": "120 min", "category": "Time",
- "confidenceScore": 0.94
+ "confidenceScore": 0.98
} ], "relations": [
The following is an example of the response of a GET request. The output is ava
} ], "errors": [],
- "modelVersion": "2021-03-01"
+ "modelVersion": "2021-05-15"
} } ```
The following is an example of the response of a GET request. The output is ava
You can [use Postman](text-analytics-how-to-call-api.md) or the example cURL request below to submit a query to the container you deployed, replacing the `serverURL` variable with the appropriate value. Note the version of the API in the URL for the container is different than the hosted API. ```bash
-curl -X POST 'http://<serverURL>:5000/text/analytics/v3.1-preview.5/entities/health' --header 'Content-Type: application/json' --header 'accept: application/json' --data-binary @example.json
+curl -X POST 'http://<serverURL>:5000/text/analytics/v3.1/entities/health' --header 'Content-Type: application/json' --header 'accept: application/json' --data-binary @example.json
```
cognitive-services Text Analytics How To Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md
You can call Text Analytics synchronously (for low latency scenarios). You have
## Using the API asynchronously
-The Text Analytics v3.1-preview.5 API provides two asynchronous endpoints:
+The Text Analytics v3.1 API provides two asynchronous endpoints:
* The `/analyze` endpoint for Text Analytics allows you to analyze the same set of text documents with multiple text analytics features in one API call. Previously, to use multiple features you would need to make separate API calls for each operation. Consider this capability when you need to analyze large sets of documents with more than one Text Analytics feature.
The `/analyze` endpoint lets you choose which of the supported Text Analytics fe
|`tasks` | Includes the following Text Analytics features: `entityRecognitionTasks`,`entityLinkingTasks`,`keyPhraseExtractionTasks`,`entityRecognitionPiiTasks` or `sentimentAnalysisTasks`. | Required | One or more of the Text Analytics features you want to use. Note that `entityRecognitionPiiTasks` has an optional `domain` parameter that can be set to `pii` or `phi` and the `pii-categories` for detection of selected entity types. If the `domain` parameter is unspecified, the system defaults to `pii`. Similarly `sentimentAnalysisTasks` has the `opinionMining` boolean parameter to include Opinion Mining results in the output for Sentiment Analysis. | |`parameters` | Includes the `model-version` and `stringIndexType` fields below | Required | This field is included within the above feature tasks that you choose. They contain information about the model version that you want to use and the index type. | |`model-version` | String | Required | Specify which version of the model being called that you want to use. |
-|`stringIndexType` | String | Required | Specify the text decoder that matches your programming environment. Types supported are `textElement_v8` (default), `unicodeCodePoint`, `utf16CodeUnit`. Please see the [Text offsets article](../concepts/text-offsets.md#offsets-in-api-version-31-preview) for more information. |
+|`stringIndexType` | String | Required | Specify the text decoder that matches your programming environment. Types supported are `textElement_v8` (default), `unicodeCodePoint`, `utf16CodeUnit`. Please see the [Text offsets article](../concepts/text-offsets.md#offsets-in-api-version-31) for more information. |
|`domain` | String | Optional | Only applies as a parameter to the `entityRecognitionPiiTasks` task and can be set to `pii` or `phi`. It defaults to `pii` if unspecified. | ```json
The `/analyze` endpoint lets you choose which of the supported Text Analytics fe
{ "parameters": { "model-version": "latest",
- "stringIndexType": "TextElement_v8",
"loggingOptOut": "false" } }
The `/analyze` endpoint lets you choose which of the supported Text Analytics fe
{ "parameters": { "model-version": "latest",
- "stringIndexType": "TextElement_v8",
"loggingOptOut": "true", "domain": "phi", "piiCategories":["default"]
The `/analyze` endpoint lets you choose which of the supported Text Analytics fe
{ "parameters": { "model-version": "latest",
- "stringIndexType": "TextElement_v8",
"loggingOptOut": "false" } }
The `/analyze` endpoint lets you choose which of the supported Text Analytics fe
{ "parameters": { "model-version": "latest",
- "stringIndexType": "TextElement_v8",
"loggingOptOut": "false", "opinionMining": "true" }
The `/analyze` endpoint lets you choose which of the supported Text Analytics fe
### Asynchronous requests to the `/health` endpoint
-The format for API requests to the Text Analytics for health hosted API is the same as for its container. Documents are submitted in a JSON object as raw unstructured text. XML is not supported. The JSON schema consists of the elements described below. Please fill out and submit the [Cognitive Services request form](https://aka.ms/csgate) to request access to the Text Analytics for health public preview. You will not be billed for Text Analytics for health usage.
+The format for API requests to the Text Analytics for health hosted API is the same as for its container. Documents are submitted in a JSON object as raw unstructured text. XML is not supported. The JSON schema consists of the elements described below. Please fill out and submit the [Cognitive Services request form](https://aka.ms/csgate) to request access to Text Analytics for health.
| Element | Valid values | Required? | Usage | ||--|--|-|
example.json
In Postman (or another web API test tool), add the endpoint for the feature you want to use. Use the table below to find the appropriate endpoint format, and replace `<your-text-analytics-resource>` with your resource endpoint. For example:
-`https://my-resource.cognitiveservices.azure.com/text/analytics/v3.0/languages`
+> [!TIP]
+> You can call v3.0 of the synchronous endpoints below by replacing `/v3.1` with `/v3.0`.
+
+`https://my-resource.cognitiveservices.azure.com/text/analytics/v3.1/languages`
#### [Synchronous](#tab/synchronous)
In Postman (or another web API test tool), add the endpoint for the feature you
| Feature | Request type | Resource endpoints | |--|--|--|
-| Language Detection | POST | `<your-text-analytics-resource>/text/analytics/v3.0/languages` |
-| Sentiment Analysis | POST | `<your-text-analytics-resource>/text/analytics/v3.0/sentiment` |
-| Opinion Mining | POST | `<your-text-analytics-resource>/text/analytics/v3.1-preview.5/sentiment?opinionMining=true` |
-| Key Phrase Extraction | POST | `<your-text-analytics-resource>/text/analytics/v3.0/keyPhrases` |
-| Named Entity Recognition - General | POST | `<your-text-analytics-resource>/text/analytics/v3.0/entities/recognition/general` |
-| Named Entity Recognition - PII | POST | `<your-text-analytics-resource>/text/analytics/v3.1-preview.5/entities/recognition/pii` |
-| Named Entity Recognition - PHI | POST | `<your-text-analytics-resource>/text/analytics/v3.1-preview.5/entities/recognition/pii?domain=phi` |
-| Entity Linking | POST | `<your-text-analytics-resource>/text/analytics/v3.0/entities/linking` |
+| Language Detection | POST | `<your-text-analytics-resource>/text/analytics/v3.1/languages` |
+| Sentiment Analysis | POST | `<your-text-analytics-resource>/text/analytics/v3.1/sentiment` |
+| Opinion Mining | POST | `<your-text-analytics-resource>/text/analytics/v3.1/sentiment?opinionMining=true` |
+| Key Phrase Extraction | POST | `<your-text-analytics-resource>/text/analytics/v3.1/keyPhrases` |
+| Named Entity Recognition - General | POST | `<your-text-analytics-resource>/text/analytics/v3.1/entities/recognition/general` |
+| Named Entity Recognition - PII | POST | `<your-text-analytics-resource>/text/analytics/v3.1/entities/recognition/pii` |
+| Named Entity Recognition - PHI | POST | `<your-text-analytics-resource>/text/analytics/v3.1/entities/recognition/pii?domain=phi` |
+| Entity Linking | POST | `<your-text-analytics-resource>/text/analytics/v3.1/entities/linking` |
#### [Asynchronous](#tab/asynchronous)
In Postman (or another web API test tool), add the endpoint for the feature you
| Feature | Request type | Resource endpoints | |--|--|--|
-| Submit analysis job | POST | `https://<your-text-analytics-resource>/text/analytics/v3.1-preview.5/analyze` |
-| Get analysis status and results | GET | `https://<your-text-analytics-resource>/text/analytics/v3.1-preview.5/analyze/jobs/<Operation-Location>` |
+| Submit analysis job | POST | `https://<your-text-analytics-resource>/text/analytics/v3.1/analyze` |
+| Get analysis status and results | GET | `https://<your-text-analytics-resource>/text/analytics/v3.1/analyze/jobs/<Operation-Location>` |
### Endpoints for sending asynchronous requests to the `/health` endpoint | Feature | Request type | Resource endpoints | |--|--|--|
-| Submit Text Analytics for health job | POST | `https://<your-text-analytics-resource>/text/analytics/v3.1-preview.5/entities/health/jobs` |
-| Get job status and results | GET | `https://<your-text-analytics-resource>/text/analytics/v3.1-preview.5/entities/health/jobs/<Operation-Location>` |
-| Cancel job | DELETE | `https://<your-text-analytics-resource>/text/analytics/v3.1-preview.5/entities/health/jobs/<Operation-Location>` |
+| Submit Text Analytics for health job | POST | `https://<your-text-analytics-resource>/text/analytics/v3.1/entities/health/jobs` |
+| Get job status and results | GET | `https://<your-text-analytics-resource>/text/analytics/v3.1/entities/health/jobs/<Operation-Location>` |
+| Cancel job | DELETE | `https://<your-text-analytics-resource>/text/analytics/v3.1/entities/health/jobs/<Operation-Location>` |
Submit the API request. If you made the call to a synchronous endpoint, the resp
If you made the call to the asynchronous `/analyze` or `/health` endpoints, check that you received a 202 response code. You will need to get the response to view the results: 1. In the API response, find the `Operation-Location` in the header, which identifies the job you sent to the API.
-2. Create a GET request for the endpoint you used. refer to the [table above](#set-up-a-request) for the endpoint format, and review the [API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-5/operations/AnalyzeStatus). For example:
+2. Create a GET request for the endpoint you used. Refer to the [table above](#set-up-a-request) for the endpoint format, and review the [API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/AnalyzeStatus). For example:
- `https://my-resource.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/analyze/jobs/<Operation-Location>`
+ `https://my-resource.cognitiveservices.azure.com/text/analytics/v3.1/analyze/jobs/<Operation-Location>`
3. Add the `Operation-Location` to the request.
cognitive-services Text Analytics How To Entity Linking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking.md
The PII feature is part of NER and it can identify and redact sensitive entities
## Named Entity Recognition features and versions
-| Feature | NER v3.0 | NER v3.1-preview.5 |
+| Feature | NER v3.0 | NER v3.1 |
|--|--|-| | Methods for single, and batch requests | X | X | | Expanded entity recognition across several categories | X | X |
See [language support](../language-support.md) for information.
Named Entity Recognition v3 provides expanded detection across multiple types. Currently, NER v3.0 can recognize entities in the [general entity category](../named-entity-types.md).
-Named Entity Recognition v3.1-preview.5 includes the detection capabilities of v3.0, and:
-* The ability to detect personal information (`PII`) using the `v3.1-preview.5/entities/recognition/pii` endpoint.
+Named Entity Recognition v3.1 includes the detection capabilities of v3.0, and:
+* The ability to detect personal information (`PII`) using the `v3.1/entities/recognition/pii` endpoint.
* An optional `domain=phi` parameter to detect confidential health information (`PHI`). * [Asynchronous operation](text-analytics-how-to-call-api.md) using the `/analyze` endpoint.
Create a POST request. You can [use Postman](text-analytics-how-to-call-api.md)
### Request endpoints
-#### [Version 3.1-preview](#tab/version-3-preview)
+#### [Version 3.1](#tab/version-3-1)
-Named Entity Recognition `v3.1-preview.5` uses separate endpoints for NER, PII, and entity linking requests. Use a URL format below based on your request.
+Named Entity Recognition `v3.1` uses separate endpoints for NER, PII, and entity linking requests. Use a URL format below based on your request.
**Entity linking**
-* `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/entities/linking`
+* `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/linking`
-[Named Entity Recognition version 3.1-preview reference for `Linking`](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-Preview-5/operations/EntitiesLinking)
+[Named Entity Recognition version 3.1 reference for `Linking`](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/EntitiesLinking)
**Named Entity Recognition**
-* General entities - `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/entities/recognition/general`
+* General entities - `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/general`
-[Named Entity Recognition version 3.1-preview reference for `General`](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-Preview-5/operations/EntitiesRecognitionGeneral)
+[Named Entity Recognition version 3.1 reference for `General`](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/EntitiesRecognitionGeneral)
**Personally Identifiable Information (PII)**
-* Personal (`PII`) information - `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/entities/recognition/pii`
+* Personal (`PII`) information - `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/pii`
You can also use the optional `domain=phi` parameter to detect health (`PHI`) information in text.
-`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/entities/recognition/pii?domain=phi`
+`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/pii?domain=phi`
-Starting in `v3.1-preview.5`, The JSON response includes a `redactedText` property, which contains the modified input text where the detected PII entities are replaced by an `*` for each character in the entities.
+Starting in `v3.1`, the JSON response includes a `redactedText` property, which contains the modified input text where the detected PII entities are replaced by an `*` for each character in the entities.
-[Named Entity Recognition version 3.1-preview reference for `PII`](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-Preview-5/operations/EntitiesRecognitionPii)
+[Named Entity Recognition version 3.1 reference for `PII`](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/EntitiesRecognitionPii)
The API will attempt to detect the [listed entity categories](../named-entity-types.md?tabs=personal) for a given document language. If you want to specify which entities will be detected and returned, use the optional `piiCategories` parameter with the appropriate entity categories. This parameter can also let you detect entities that aren't enabled by default for your document language. The following example would detect a French driver's license number that might occur in English text, along with the default English entities. > [!TIP] > If you don't include `default` when specifying entity categories, the API will only return the entity categories you specify.
-`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/entities/recognition/pii?piiCategories=default,FRDriversLicenseNumber`
+`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/pii?piiCategories=default,FRDriversLicenseNumber`
**Asynchronous operation**
-Starting in `v3.1-preview.5`, You can send NER and entity linking requests asynchronously using the `/analyze` endpoint.
+Starting in `v3.1`, you can send NER and entity linking requests asynchronously using the `/analyze` endpoint.
-* Asynchronous operation - `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/analyze`
+* Asynchronous operation - `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/analyze`
See [How to call the Text Analytics API](text-analytics-how-to-call-api.md) for information on sending asynchronous requests.
Named Entity Recognition v3 uses separate endpoints for NER and entity linking r
**Entity linking** * `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.0/entities/linking`
-[Named Entity Recognition version 3.0 reference for `Linking`](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/EntitiesRecognitionGeneral)
+[Named Entity Recognition version 3.1 reference for `Linking`](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/EntitiesRecognitionGeneral)
**Named Entity Recognition** * `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.0/entities/recognition/general`
Set a request header to include your Text Analytics API key. In the request body
## Example requests
-#### [Version 3.1-preview](#tab/version-3-preview)
+#### [Version 3.1](#tab/version-3-1)
### Example synchronous NER request
The following JSON is an example of content you might send to the API. The reque
} ```
+### Example synchronous PII request
+
+The following JSON is an example of content you might send to the API to detect PII in text.
+
+```json
+{
+ "documents": [
+ {
+ "id": "1",
+ "language": "en",
+ "text": "You can even pre-order from their online menu at www.contososteakhouse.com, call 312-555-0176 or send email to order@contososteakhouse.com!"
+ }
+ ]
+}
+```
+ ### Example asynchronous NER request If you use the `/analyze` endpoint for [asynchronous operation](text-analytics-how-to-call-api.md), you will get a response containing the tasks you sent to the API.
If you use the `/analyze` endpoint for [asynchronous operation](text-analytics-h
"entityRecognitionTasks": [ { "parameters": {
- "model-version": "latest",
- "stringIndexType": "TextElements_v8"
+ "model-version": "latest"
} } ],
Output is returned immediately. You can stream the results to an application tha
Version 3 provides separate endpoints for general NER, PII, and entity linking. Version 3.1 includes an asynchronous Analyze mode. The responses for these operations are below.
-#### [Version 3.1-preview](#tab/version-3-preview)
+#### [Version 3.1](#tab/version-3-1)
### Synchronous example results
Example of a general NER response:
```json {
- "documents": [
- {
- "id": "1",
- "entities": [
- {
- "text": "tour guide",
- "category": "PersonType",
- "offset": 4,
- "length": 10,
- "confidenceScore": 0.45
- },
+ "documents": [
{
- "text": "Space Needle",
- "category": "Location",
- "offset": 30,
- "length": 12,
- "confidenceScore": 0.38
- },
- {
- "text": "trip",
- "category": "Event",
- "offset": 54,
- "length": 4,
- "confidenceScore": 0.78
- },
- {
- "text": "Seattle",
- "category": "Location",
- "subcategory": "GPE",
- "offset": 62,
- "length": 7,
- "confidenceScore": 0.78
- },
- {
- "text": "last week",
- "category": "DateTime",
- "subcategory": "DateRange",
- "offset": 70,
- "length": 9,
- "confidenceScore": 0.8
+ "id": "1",
+ "entities": [
+ {
+ "text": "tour guide",
+ "category": "PersonType",
+ "offset": 4,
+ "length": 10,
+ "confidenceScore": 0.94
+ },
+ {
+ "text": "Space Needle",
+ "category": "Location",
+ "offset": 30,
+ "length": 12,
+ "confidenceScore": 0.96
+ },
+ {
+ "text": "Seattle",
+ "category": "Location",
+ "subcategory": "GPE",
+ "offset": 62,
+ "length": 7,
+ "confidenceScore": 1.0
+ },
+ {
+ "text": "last week",
+ "category": "DateTime",
+ "subcategory": "DateRange",
+ "offset": 70,
+ "length": 9,
+ "confidenceScore": 0.8
+ }
+ ],
+ "warnings": []
}
- ],
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "2020-04-01"
+ ],
+ "errors": [],
+ "modelVersion": "2021-06-01"
} ```
Example of a PII response:
```json {
- "documents": [
- {
- "redactedText": "You can even pre-order from their online menu at *************************, call ************ or send email to ***************************!",
- "id": "0",
- "entities": [
- {
- "text": "www.contososteakhouse.com",
- "category": "URL",
- "offset": 49,
- "length": 25,
- "confidenceScore": 0.8
- },
- {
- "text": "312-555-0176",
- "category": "Phone Number",
- "offset": 81,
- "length": 12,
- "confidenceScore": 0.8
- },
+ "documents": [
{
- "text": "order@contososteakhouse.com",
- "category": "Email",
- "offset": 111,
- "length": 27,
- "confidenceScore": 0.8
+ "redactedText": "You can even pre-order from their online menu at www.contososteakhouse.com, call ************ or send email to ***************************!",
+ "id": "1",
+ "entities": [
+ {
+ "text": "312-555-0176",
+ "category": "PhoneNumber",
+ "offset": 81,
+ "length": 12,
+ "confidenceScore": 0.8
+ },
+ {
+ "text": "order@contososteakhouse.com",
+ "category": "Email",
+ "offset": 111,
+ "length": 27,
+ "confidenceScore": 0.8
+ },
+ {
+ "text": "contososteakhouse",
+ "category": "Organization",
+ "offset": 117,
+ "length": 17,
+ "confidenceScore": 0.45
+ }
+ ],
+ "warnings": []
}
- ],
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "2020-07-01"
+ ],
+ "errors": [],
+ "modelVersion": "2021-01-15"
} ```
Example of an Entity linking response:
```json {
- "documents": [
- {
- "id": "1",
- "entities": [
+ "documents": [
{
- "bingId": "f8dd5b08-206d-2554-6e4a-893f51f4de7e",
- "name": "Space Needle",
- "matches": [
- {
- "text": "Space Needle",
- "offset": 30,
- "length": 12,
- "confidenceScore": 0.4
- }
- ],
- "language": "en",
- "id": "Space Needle",
- "url": "https://en.wikipedia.org/wiki/Space_Needle",
- "dataSource": "Wikipedia"
- },
- {
- "bingId": "5fbba6b8-85e1-4d41-9444-d9055436e473",
- "name": "Seattle",
- "matches": [
- {
- "text": "Seattle",
- "offset": 62,
- "length": 7,
- "confidenceScore": 0.25
- }
- ],
- "language": "en",
- "id": "Seattle",
- "url": "https://en.wikipedia.org/wiki/Seattle",
- "dataSource": "Wikipedia"
+ "id": "1",
+ "entities": [
+ {
+ "bingId": "f8dd5b08-206d-2554-6e4a-893f51f4de7e",
+ "name": "Space Needle",
+ "matches": [
+ {
+ "text": "Space Needle",
+ "offset": 30,
+ "length": 12,
+ "confidenceScore": 0.4
+ }
+ ],
+ "language": "en",
+ "id": "Space Needle",
+ "url": "https://en.wikipedia.org/wiki/Space_Needle",
+ "dataSource": "Wikipedia"
+ },
+ {
+ "bingId": "5fbba6b8-85e1-4d41-9444-d9055436e473",
+ "name": "Seattle",
+ "matches": [
+ {
+ "text": "Seattle",
+ "offset": 62,
+ "length": 7,
+ "confidenceScore": 0.25
+ }
+ ],
+ "language": "en",
+ "id": "Seattle",
+ "url": "https://en.wikipedia.org/wiki/Seattle",
+ "dataSource": "Wikipedia"
+ }
+ ],
+ "warnings": []
}
- ],
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "2020-02-01"
+ ],
+ "errors": [],
+ "modelVersion": "2021-06-01"
} ```
Example of an Entity linking response:
```json {
- "displayName": "My Analyze Job",
- "jobId": "dbec96a8-ea22-4ad1-8c99-280b211eb59e_637408224000000000",
- "lastUpdateDateTime": "2020-11-13T04:01:14Z",
- "createdDateTime": "2020-11-13T04:01:13Z",
- "expirationDateTime": "2020-11-14T04:01:13Z",
- "status": "running",
- "errors": [],
- "tasks": {
- "details": {
- "name": "My Analyze Job",
- "lastUpdateDateTime": "2020-11-13T04:01:14Z"
- },
- "completed": 1,
- "failed": 0,
- "inProgress": 2,
- "total": 3,
- "keyPhraseExtractionTasks": [
- {
- "name": "My Analyze Job",
- "lastUpdateDateTime": "2020-11-13T04:01:14.3763516Z",
- "results": {
- "inTerminalState": true,
- "documents": [
- {
- "id": "doc1",
- "keyPhrases": [
- "sunny outside"
- ],
- "warnings": []
- },
- {
- "id": "doc2",
- "keyPhrases": [
- "favorite Seattle attraction",
- "Pike place market"
- ],
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "2020-07-01"
- }
- }
- ]
- }
+ "jobId": "f480e1f9-0b61-4d47-93da-240f084582cf",
+ "lastUpdateDateTime": "2021-07-06T19:03:15Z",
+ "createdDateTime": "2021-07-06T19:02:47Z",
+ "expirationDateTime": "2021-07-07T19:02:47Z",
+ "status": "succeeded",
+ "errors": [],
+ "displayName": "My Job",
+ "tasks": {
+ "completed": 2,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 2,
+ "entityRecognitionTasks": [
+ {
+ "lastUpdateDateTime": "2021-07-06T19:03:15.212633Z",
+ "taskName": "NamedEntityRecognition_latest",
+ "state": "succeeded",
+ "results": {
+ "documents": [
+ {
+ "id": "doc1",
+ "entities": [],
+ "warnings": []
+ },
+ {
+ "id": "doc2",
+ "entities": [
+ {
+ "text": "Pike place market",
+ "category": "Location",
+ "offset": 0,
+ "length": 17,
+ "confidenceScore": 0.95
+ },
+ {
+ "text": "Seattle",
+ "category": "Location",
+ "subcategory": "GPE",
+ "offset": 33,
+ "length": 7,
+ "confidenceScore": 0.99
+ }
+ ],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2021-06-01"
+ }
+ }
+ ],
+ "entityRecognitionPiiTasks": [
+ {
+ "lastUpdateDateTime": "2021-07-06T19:03:03.2063832Z",
+ "taskName": "PersonallyIdentifiableInformation_latest",
+ "state": "succeeded",
+ "results": {
+ "documents": [
+ {
+ "redactedText": "It's incredibly sunny outside! I'm so happy",
+ "id": "doc1",
+ "entities": [],
+ "warnings": []
+ },
+ {
+ "redactedText": "Pike place market is my favorite Seattle attraction.",
+ "id": "doc2",
+ "entities": [],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2021-01-15"
+ }
+ }
+ ]
+ }
} ```
cognitive-services Text Analytics How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md
Containers enable you to run the Text Analytic APIs in your own environment and
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. > [!IMPORTANT]
-> The free account is limited to 5,000 transactions per month and only the **Free** and **Standard** <a href="https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics" target="_blank">pricing tiers </a> are valid for containers. For more information on transaction request rates, see [Data Limits](../concepts/data-limits.md).
+> The free account is limited to 5,000 text records per month and only the **Free** and **Standard** <a href="https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics" target="_blank">pricing tiers </a> are valid for containers. For more information on transaction request rates, see [Data Limits](../concepts/data-limits.md).
## Prerequisites
Container images for Text Analytics are available on the Microsoft Container Reg
[!INCLUDE [docker-pull-language-detection-container](../includes/docker-pull-language-detection-container.md)]
-# [Text Analytics for health (preview)](#tab/healthcare)
+# [Text Analytics for health](#tab/healthcare)
[!INCLUDE [docker-pull-health-container](../includes/docker-pull-health-container.md)]
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
[!INCLUDE [docker-run-language-detection-container](../includes/docker-run-language-detection-container.md)]
-# [Text Analytics for health (preview)](#tab/healthcare)
+# [Text Analytics for health](#tab/healthcare)
[!INCLUDE [docker-run-health-container](../includes/docker-run-health-container.md)]
In this article, you learned concepts and workflow for downloading, installing,
* *Sentiment Analysis* * *Key Phrase Extraction (preview)* * *Language Detection*
- * *Text Analytics for health (preview)*
-* Container images are downloaded from the Microsoft Container Registry (MCR) or preview container repository.
+ * *Text Analytics for health*
+* Container images are downloaded from the Microsoft Container Registry (MCR).
* Container images run in Docker. * You can use either the REST API or SDK to call operations in Text Analytics containers by specifying the host URI of the container. * You must specify billing information when instantiating a container.
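The bullet about calling operations in a container by specifying its host URI can be sketched roughly as follows, assuming a key phrase extraction container started locally with the default port 5000 mapped and exposing the same `/text/analytics` path as the hosted API; because billing details are supplied when the container starts, no subscription-key header is sent here.

```python
import requests

# Assumes a key phrase extraction container running locally with port 5000 mapped.
container_host = "http://localhost:5000"
body = {
    "documents": [
        {"id": "1", "language": "en", "text": "The food was delicious and there were wonderful staff."}
    ]
}

# Local container calls don't need a subscription key; billing was configured at startup.
response = requests.post(container_host + "/text/analytics/v3.0/keyPhrases", json=body)
print(response.json())
```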
cognitive-services Text Analytics How To Keyword Extraction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-keyword-extraction.md
Previously updated : 03/29/2021 Last updated : 07/06/2021 # Example: How to extract key phrases using Text Analytics
-The [Key Phrase Extraction API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/KeyPhrases) evaluates unstructured text, and for each JSON document, returns a list of key phrases.
+The [Key Phrase Extraction API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/KeyPhrases) evaluates unstructured text, and for each JSON document, returns a list of key phrases.
This capability is useful if you need to quickly identify the main points in a collection of documents. For example, given input text "The food was delicious and there were wonderful staff", the service returns the main talking points: "food" and "wonderful staff".
See [How to call the Text Analytics API](text-analytics-how-to-call-api.md) for
### Example asynchronous request object
-Starting in `v3.1-preview.3`, You can send NER requests asynchronously using the `/analyze` endpoint.
+Starting in `v3.1`, you can send key phrase extraction requests asynchronously using the `/analyze` endpoint.
```json
Starting in `v3.1-preview.3`, You can send NER requests asynchronously using the
For information about request definition, see [How to call the Text Analytics API](text-analytics-how-to-call-api.md). The following points are restated for convenience:
-+ Create a **POST** request. Review the API documentation for this request: [Key Phrases API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/KeyPhrases).
++ Create a **POST** request. Review the API documentation for this request: [Key Phrases API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/KeyPhrases).
-+ Set the HTTP endpoint for key phrase extraction by using either a Text Analytics resource on Azure or an instantiated [Text Analytics container](text-analytics-how-to-install-containers.md). if you're using the API synchronously, you must include `/text/analytics/v3.0/keyPhrases` in the URL. For example: `https://<your-custom-subdomain>.api.cognitiveservices.azure.com/text/analytics/v3.0/keyPhrases`.
++ Set the HTTP endpoint for key phrase extraction by using either a Text Analytics resource on Azure or an instantiated [Text Analytics container](text-analytics-how-to-install-containers.md). If you're using the API synchronously, you must include `/text/analytics/v3.1/keyPhrases` in the URL. For example: `https://<your-custom-subdomain>.api.cognitiveservices.azure.com/text/analytics/v3.1/keyPhrases`. + Set a request header to include the [access key](../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) for Text Analytics operations. + In the request body, provide the JSON documents collection you prepared for this analysis. > [!Tip]
-> Use [Postman](text-analytics-how-to-call-api.md) or open the **API testing console** in the [documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/KeyPhrases) to structure a request and POST it to the service.
+> Use [Postman](text-analytics-how-to-call-api.md) or open the **API testing console** in the [documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/KeyPhrases) to structure a request and POST it to the service.
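Putting those pieces together, here is a minimal sketch of the request in Python; the URL matches the endpoint shown above, the header is the standard Cognitive Services subscription-key header, and the key and sample documents are placeholders.

```python
import requests

# Placeholders: substitute your own resource subdomain and key.
url = "https://<your-custom-subdomain>.api.cognitiveservices.azure.com/text/analytics/v3.1/keyPhrases"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-text-analytics-key>",
    "Content-Type": "application/json",
}
body = {
    "documents": [
        {"id": "1", "language": "en", "text": "Great hike, but the trail was crowded and the views were spectacular."},
        {"id": "2", "language": "es", "text": "La comida estaba deliciosa y el personal fue maravilloso."},
    ]
}

response = requests.post(url, headers=headers, json=body)
# Print the key phrases returned for each document ID.
for doc in response.json()["documents"]:
    print(doc["id"], doc["keyPhrases"])
```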
## Step 2: Post the request
All POST requests return a JSON formatted response with the IDs and detected pro
Output is returned immediately. You can stream the results to an application that accepts JSON or save the output to a file on the local system, and then import it into an application that allows you to sort, search, and manipulate the data.
-An example of the output for key phrase extraction from the v3.1-preview endpoint is shown here:
+An example of the output for key phrase extraction from the v3.1 endpoint is shown here:
### Synchronous result ```json
- {
- "documents":[
- {
- "id":"1",
- "keyPhrases":[
- "year",
+{
+ "documents": [
+ {
+ "id": "1",
+ "keyPhrases": [
"trail", "trip", "views", "hike"
- ],
- "warnings":[]
- },
- {
- "id":"2",
- "keyPhrases":[
- "marked trails",
+ ],
+ "warnings": []
+ },
+ {
+ "id": "2",
+ "keyPhrases": [
"Worst hike",
- "goners"
- ],
- "warnings":[]
- },
- {
- "id":"3",
- "keyPhrases":[
- "trail",
+ "trails"
+ ],
+ "warnings": []
+ },
+ {
+ "id": "3",
+ "keyPhrases": [
+ "less athletic",
"small children",
- "family"
- ],
- "warnings":[]
- },
- {
- "id":"4",
- "keyPhrases":[
+ "Everyone",
+ "family",
+ "trail"
+ ],
+ "warnings": []
+ },
+ {
+ "id": "4",
+ "keyPhrases": [
"spectacular views", "trail",
- "Worth",
"area"
- ],
- "warnings":[]
- },
- {
- "id":"5",
- "keyPhrases":[
- "places",
- "beautiful views",
+ ],
+ "warnings": []
+ },
+ {
+ "id": "5",
+ "keyPhrases": [
"favorite trail",
- "rest"
- ],
- "warnings":[]
- }
- ],
- "errors":[],
- "modelVersion":"2020-07-01"
- }
-
+ "beautiful views",
+ "many places"
+ ],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2021-06-01"
+}
``` As noted, the analyzer finds and discards non-essential words, and it keeps single terms or phrases that appear to be the subject or object of a sentence.
If you use the `/analyze` endpoint for asynchronous operation, you will get a re
```json {
- "displayName": "My Analyze Job",
- "jobId": "dbec96a8-ea22-4ad1-8c99-280b211eb59e_637408224000000000",
- "lastUpdateDateTime": "2020-11-13T04:01:14Z",
- "createdDateTime": "2020-11-13T04:01:13Z",
- "expirationDateTime": "2020-11-14T04:01:13Z",
- "status": "running",
- "errors": [],
- "tasks": {
- "details": {
- "name": "My Analyze Job",
- "lastUpdateDateTime": "2020-11-13T04:01:14Z"
- },
- "completed": 1,
- "failed": 0,
- "inProgress": 2,
- "total": 3,
- "keyPhraseExtractionTasks": [
- {
- "name": "My Analyze Job",
- "lastUpdateDateTime": "2020-11-13T04:01:14.3763516Z",
- "results": {
- "inTerminalState": true,
- "documents": [
- {
- "id": "doc1",
- "keyPhrases": [
- "sunny outside"
- ],
- "warnings": []
- },
- {
- "id": "doc2",
- "keyPhrases": [
- "favorite Seattle attraction",
- "Pike place market"
- ],
- "warnings": []
- }
- ],
- "errors": [],
- "modelVersion": "2020-07-01"
- }
- }
- ]
- }
+ "jobId": "fa813c9a-0d96-4a34-8e4f-a2a6824f9190",
+ "lastUpdateDateTime": "2021-07-07T18:16:45Z",
+ "createdDateTime": "2021-07-07T18:16:15Z",
+ "expirationDateTime": "2021-07-08T18:16:15Z",
+ "status": "succeeded",
+ "errors": [],
+ "displayName": "My Job",
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "keyPhraseExtractionTasks": [
+ {
+ "lastUpdateDateTime": "2021-07-07T18:16:45.0623454Z",
+ "taskName": "KeyPhraseExtraction_latest",
+ "state": "succeeded",
+ "results": {
+ "documents": [
+ {
+ "id": "doc1",
+ "keyPhrases": [],
+ "warnings": []
+ },
+ {
+ "id": "doc2",
+ "keyPhrases": [
+ "Pike place market",
+ "Seattle attraction",
+ "favorite"
+ ],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2021-06-01"
+ }
+ }
+ ]
+ }
} ```
If you use the `/analyze` endpoint for asynchronous operation, you will get a re
In this article, you learned concepts and workflow for key phrase extraction by using Text Analytics in Cognitive Services. In summary:
-+ [Key phrase extraction API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/KeyPhrases) is available for selected languages.
++ [Key phrase extraction API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/KeyPhrases) is available for selected languages. + JSON documents in the request body include an ID, text, and language code. + POST request is to a `/keyphrases` or `/analyze` endpoint, using a personalized [access key and an endpoint](../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) that is valid for your subscription. + Response output, which consists of key words and phrases for each document ID, can be streamed to any app that accepts JSON, including Microsoft Office Excel and Power BI, to name a few.
cognitive-services Text Analytics How To Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection.md
Previously updated : 06/10/2021 Last updated : 07/02/2021 # Example: Detect language with Text Analytics
-The [Language Detection](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Languages) feature of the Azure Text Analytics REST API evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis.
+The [Language Detection](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Languages) feature of the Azure Text Analytics REST API evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis.
This capability is useful for content stores that collect arbitrary text, where language is unknown. You can parse the results of this analysis to determine which language is used in the input document. The response also returns a score that reflects the confidence of the model. The score value is between 0 and 1.
The document size must be under 5,120 characters per document. You can have up t
For more information on request definition, see [Call the Text Analytics API](text-analytics-how-to-call-api.md). The following points are restated for convenience:
-+ Create a POST request. To review the API documentation for this request, see the [Language Detection API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Languages).
++ Create a POST request. To review the API documentation for this request, see the [Language Detection API](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Languages).
-+ Set the HTTP endpoint for language detection. Use either a Text Analytics resource on Azure or an instantiated [Text Analytics container](text-analytics-how-to-install-containers.md). You must include `/text/analytics/v3.0/languages` in the URL. For example: `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.0/languages`.
++ Set the HTTP endpoint for language detection. Use either a Text Analytics resource on Azure or an instantiated [Text Analytics container](text-analytics-how-to-install-containers.md). You must include `/text/analytics/v3.1/languages` in the URL. For example: `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/languages`. + Set a request header to include the [access key](../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) for Text Analytics operations. + In the request body, provide the JSON documents collection you prepared for this analysis. > [!Tip]
-> Use [Postman](text-analytics-how-to-call-api.md) or open the **API testing console** in the [documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Languages) to structure a request and POST it to the service.
+> Use [Postman](text-analytics-how-to-call-api.md) or open the **API testing console** in the [documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Languages) to structure a request and POST it to the service.
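A minimal sketch of such a request in Python, which also pulls the `detectedLanguage` fields out of the response shape shown below; the subdomain, key, and sample texts are placeholders.

```python
import requests

# Placeholders: substitute your own resource subdomain and key.
url = "https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/languages"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-text-analytics-key>",
    "Content-Type": "application/json",
}
# Language detection documents only need an ID and text.
body = {
    "documents": [
        {"id": "1", "text": "Hello world"},
        {"id": "2", "text": "Bonjour tout le monde"},
    ]
}

response = requests.post(url, headers=headers, json=body)
for doc in response.json()["documents"]:
    lang = doc["detectedLanguage"]
    print(doc["id"], lang["iso6391Name"], lang["confidenceScore"])
```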
## Step 2: POST the request
All POST requests return a JSON-formatted response with the IDs and detected pro
Output is returned immediately. You can stream the results to an application that accepts JSON or save the output to a file on the local system. Then, import the output into an application that you can use to sort, search, and manipulate the data.
-Results for the example request should look like the following JSON document. Notice that it's one JSON document with multiple items with each item representing the detection result for every document you sumbit. Output is in English.
+Results for the example request should look like the following JSON document. Notice that it's one JSON document with multiple items, each representing the detection result for a document you submitted. Output is in English.
Language detection will return one predominant language for one document, along with its [ISO 639-1](https://www.iso.org/standard/22109.html) name, friendly name, and confidence score. A positive score of 1.0 expresses the highest possible confidence level of the analysis. ```json {
- "documents":[
+ "documents": [
{
- "detectedLanguage":{
- "confidenceScore":0.99,
- "iso6391Name":"en",
- "name":"English"
+ "id": "1",
+ "detectedLanguage": {
+ "name": "English",
+ "iso6391Name": "en",
+ "confidenceScore": 0.99
},
- "id":"1",
- "warnings":[
-
- ]
+ "warnings": []
}, {
- "detectedLanguage":{
- "confidenceScore":1.0,
- "iso6391Name":"es",
- "name":"Spanish"
+ "id": "2",
+ "detectedLanguage": {
+ "name": "Spanish",
+ "iso6391Name": "es",
+ "confidenceScore": 0.91
},
- "id":"2",
- "warnings":[
-
- ]
+ "warnings": []
}, {
- "detectedLanguage":{
- "confidenceScore":1.0,
- "iso6391Name":"fr",
- "name":"French"
+ "id": "3",
+ "detectedLanguage": {
+ "name": "French",
+ "iso6391Name": "fr",
+ "confidenceScore": 0.78
},
- "id":"3",
- "warnings":[
-
- ]
+ "warnings": []
}, {
- "detectedLanguage":{
- "confidenceScore":1.0,
- "iso6391Name":"zh_chs",
- "name":"Chinese_Simplified"
+ "id": "4",
+ "detectedLanguage": {
+ "name": "Chinese_Simplified",
+ "iso6391Name": "zh_chs",
+ "confidenceScore": 1.0
},
- "id":"4",
- "warnings":[
-
- ]
+ "warnings": []
}, {
- "detectedLanguage":{
- "confidenceScore":1.0,
- "iso6391Name":"ru",
- "name":"Russian"
+ "id": "5",
+ "detectedLanguage": {
+ "name": "Russian",
+ "iso6391Name": "ru",
+ "confidenceScore": 1.0
},
- "id":"5",
- "warnings":[
-
- ]
+ "warnings": []
} ],
- "errors":[
-
- ],
- "modelVersion":"2020-09-01"
+ "errors": [],
+ "modelVersion": "2021-01-05"
} ```
If the analyzer can't parse the input, it returns `(Unknown)`. An example is if
```json {
- "documents":[
+ "documents": [
{
- "detectedLanguage":{
- "confidenceScore":0.0,
- "iso6391Name":"(Unknown)",
- "name":"(Unknown)"
+ "id": "1",
+ "detectedLanguage": {
+ "name": "(Unknown)",
+ "iso6391Name": "(Unknown)",
+ "confidenceScore": 0.0
},
- "id":"1",
- "warnings":[
-
- ]
+ "warnings": []
} ],
- "errors":[
-
- ],
- "modelVersion":"2020-09-01"
+ "errors": [],
+ "modelVersion": "2021-01-05"
} ```
The resulting output consists of the predominant language, with a score of less
```json {
- "documents":[
+ "documents": [
{
- "detectedLanguage":{
- "confidenceScore":0.94,
- "iso6391Name":"es",
- "name":"Spanish"
+ "id": "1",
+ "detectedLanguage": {
+ "name": "Spanish",
+ "iso6391Name": "es",
+ "confidenceScore": 0.88
},
- "id":"1",
- "warnings":[
-
- ]
+ "warnings": []
} ],
- "errors":[
-
- ],
- "modelVersion":"2020-09-01"
+ "errors": [],
+ "modelVersion": "2021-01-05"
} ```
The resulting output consists of the predominant language, with a score of less
In this article, you learned concepts and workflow for language detection by using Text Analytics in Azure Cognitive Services. The following points were explained and demonstrated:
-+ [Language detection](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Languages) is available for a wide range of languages, variants, dialects, and some regional or cultural languages.
++ [Language detection](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Languages) is available for a wide range of languages, variants, dialects, and some regional or cultural languages. + JSON documents in the request body include an ID and text. + The POST request is to a `/languages` endpoint by using a personalized [access key and an endpoint](../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) that's valid for your subscription. + Response output consists of language identifiers for each document ID. The output can be streamed to any app that accepts JSON. Example apps include Excel and Power BI, to name a few.
cognitive-services Text Analytics How To Sentiment Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-sentiment-analysis.md
Previously updated : 06/10/2021 Last updated : 07/07/2021
The AI models used by the API are provided by the service, you just have to send
## Sentiment Analysis versions and features
-| Feature | Sentiment Analysis v3 | Sentiment Analysis v3.1 (Preview) |
+| Feature | Sentiment Analysis v3.0 | Sentiment Analysis v3.1 |
|-|--|--| | Methods for single, and batch requests | X | X | | Sentiment Analysis scores and labeling | X | X |
Confidence scores range from 1 to 0. Scores closer to 1 indicate a higher confid
## Opinion Mining
-Opinion Mining is a feature of Sentiment Analysis, starting in the preview of version 3.1. Also known as Aspect-based Sentiment Analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to attributes of products or services in text. The API surfaces opinions as a target (noun or verb) and an assessment (adjective).
+Opinion Mining is a feature of Sentiment Analysis, starting in version 3.1. Also known as Aspect-based Sentiment Analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to attributes of products or services in text. The API surfaces opinions as a target (noun or verb) and an assessment (adjective).
For example, if a customer leaves feedback about a hotel such as "The room was great, but the staff was unfriendly.", Opinion Mining will locate targets (aspects) in the text, and their associated assessments (opinions) and sentiments. Sentiment Analysis might only report a negative sentiment.
To get Opinion Mining in your results, you must include the `opinionMining=true`
Sentiment analysis produces a higher-quality result when you give it smaller amounts of text to work on. This is opposite from key phrase extraction, which performs better on larger blocks of text. To get the best results from both operations, consider restructuring the inputs accordingly.
-You must have JSON documents in this format: ID, text, and language. Sentiment Analysis supports a wide range of languages, with more in preview. For more information, see [Supported languages](../language-support.md).
+You must have JSON documents in this format: ID, text, and language. Sentiment Analysis supports a wide range of languages. For more information, see [Supported languages](../language-support.md).
Document size must be under 5,120 characters per document. For the maximum number of documents permitted in a collection, see the [data limits](../concepts/data-limits.md?tabs=version-3) article under Concepts. The collection is submitted in the body of the request.
Document size must be under 5,120 characters per document. For the maximum numbe
Create a POST request. You can [use Postman](text-analytics-how-to-call-api.md) or the **API testing console** in the following reference links to quickly structure and send one.
-#### [Version 3.1-preview](#tab/version-3-1)
+#### [Version 3.1](#tab/version-3-1)
-[Sentiment Analysis v3.1 reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-5/operations/Sentiment)
+[Sentiment Analysis v3.1 reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Sentiment)
#### [Version 3.0](#tab/version-3)
Set the HTTPS endpoint for sentiment analysis by using either a Text Analytics r
> [!NOTE] > You can find your key and endpoint for your Text Analytics resource on the Azure portal. They will be located on the resource's **Quick start** page, under **resource management**.
-#### [Version 3.1-preview](#tab/version-3-1)
+#### [Version 3.1](#tab/version-3-1)
**Sentiment Analysis**
-`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/sentiment`
+`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/sentiment`
**Opinion Mining** To get Opinion Mining results, you must include the `opinionMining=true` parameter. For example:
-`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1-preview.5/sentiment?opinionMining=true`
+`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/sentiment?opinionMining=true`
This parameter is set to `false` by default.
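For illustration, a minimal sketch of a sentiment request with opinion mining enabled, using Python's `requests` library; the endpoint and query parameter mirror the URLs above, the sample sentence comes from earlier in this article, and the subdomain and key are placeholders.

```python
import requests

# Placeholders: substitute your own resource subdomain and key.
url = "https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/sentiment"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-text-analytics-key>",
    "Content-Type": "application/json",
}
# opinionMining defaults to false; set it to true to get targets and assessments.
params = {"opinionMining": "true"}
body = {
    "documents": [
        {"id": "1", "language": "en", "text": "The room was great, but the staff was unfriendly."}
    ]
}

response = requests.post(url, headers=headers, params=params, json=body)
print(response.json())
```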
Set a request header to include your Text Analytics API key. In the request body
### Example request for Sentiment Analysis and Opinion Mining
-The following is an example of content you might submit for sentiment analysis. The request format is the same for both `v3.0` and `v3.1-preview`.
+The following is an example of content you might submit for sentiment analysis. The request format is the same for both `v3.0` and `v3.1`.
```json {
The Text Analytics API is stateless. No data is stored in your account, and resu
Output is returned immediately. You can stream the results to an application that accepts JSON or save the output to a file on the local system. Then, import the output into an application that you can use to sort, search, and manipulate the data. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../concepts/text-offsets.md) for more information.
-#### [Version 3.1-preview](#tab/version-3-1)
+#### [Version 3.1](#tab/version-3-1)
### Sentiment Analysis and Opinion Mining example response
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/language-support.md
Previously updated : 06/17/2021 Last updated : 07/06/2021 # Text Analytics API v3 language support
| Spanish | `es` | Γ£ô | 2019-10-01 | | | Turkish | `tr` | Γ£ô | 2020-04-01 | |
-### Opinion mining (v3.1-preview only)
+### Opinion mining (v3.1 only)
| Language | Language code | Starting with v3 model version: | Notes | |:-|:-:|::|-:|
cognitive-services Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/migration-guide.md
Previously updated : 05/21/2021 Last updated : 07/06/2021 # Migrate to version 3.x of the Text Analytics API
-If you're using version 2.1 of the Text Analytics API, this article will help you upgrade your application to use version 3.x. Version 3.0 is generally available and introduces new features such as expanded [Named Entity Recognition (NER)](how-tos/text-analytics-how-to-entity-linking.md#named-entity-recognition-features-and-versions) and [model versioning](concepts/model-versioning.md). A preview version of v3.1 (v3.1-preview.x) is also available, which adds features such as [opinion mining](how-tos/text-analytics-how-to-sentiment-analysis.md#sentiment-analysis-versions-and-features). The models used in v2 will not receive future updates.
+If you're using version 2.1 of the Text Analytics API, this article will help you upgrade your application to use version 3.x. Versions 3.0 and 3.1 are generally available and introduce new features such as expanded [Named Entity Recognition (NER)](how-tos/text-analytics-how-to-entity-linking.md#named-entity-recognition-features-and-versions) and [model versioning](concepts/model-versioning.md). Version 3.1 also adds features such as [opinion mining](how-tos/text-analytics-how-to-sentiment-analysis.md#sentiment-analysis-versions-and-features) and [Personally Identifiable Information (PII)](how-tos/text-analytics-how-to-entity-linking.md?tabs=version-3-1#personally-identifiable-information-pii) detection. The models used in v2 or v3.1-preview.x will not receive future updates.
## [Sentiment analysis](#tab/sentiment-analysis)
+> [!TIP]
+> Want to use the latest version of the API in your application? See the [sentiment analysis](how-tos/text-analytics-how-to-sentiment-analysis.md) how-to article and [quickstart](quickstarts/client-libraries-rest-api.md) for information on the current version of the API.
+ ### Feature changes Sentiment Analysis in version 2.1 returns sentiment scores between 0 and 1 for each document sent to the API, with scores closer to 1 indicating more positive sentiment. Version 3 instead returns sentiment labels (such as "positive" or "negative") for both the sentences and the document as a whole, and their associated confidence scores.
Sentiment Analysis in version 2.1 returns sentiment scores between 0 and 1 for e
#### REST API
-If your application uses the REST API, update its request endpoint to the v3 endpoint for sentiment analysis. For example:`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.0/sentiment`. You will also need to update the application to use the sentiment labels returned in the [API's response](how-tos/text-analytics-how-to-sentiment-analysis.md#view-the-results).
+If your application uses the REST API, update its request endpoint to the v3 endpoint for sentiment analysis. For example: `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/sentiment`. You will also need to update the application to use the sentiment labels returned in the [API's response](how-tos/text-analytics-how-to-sentiment-analysis.md#view-the-results).
See the reference documentation for examples of the JSON response. * [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/56f30ceeeda5650db055a3c9) * [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Sentiment)
-* [Version 3.1-preview](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-5/operations/Sentiment)
+* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Sentiment)
#### Client libraries
See the reference documentation for examples of the JSON response.
## [NER and entity linking](#tab/named-entity-recognition)
+> [!TIP]
+> Want to use the latest version of the API in your application? See the [NER and entity linking](how-tos/text-analytics-how-to-entity-linking.md) how-to article and [quickstart](quickstarts/client-libraries-rest-api.md) for information on the current version of the API.
+ ### Feature changes
-In version 2.1, the Text Analytics API uses one endpoint for Named Entity Recognition (NER) and entity linking. Version 3 provides expanded named entity detection, and uses separate endpoints for NER and entity linking requests. Starting in v3.1-preview.1, NER can additionally detect personal `pii` and health `phi` information.
+In version 2.1, the Text Analytics API uses one endpoint for Named Entity Recognition (NER) and entity linking. Version 3 provides expanded named entity detection, and uses separate endpoints for NER and entity linking requests. In v3.1, NER can additionally detect personal `pii` and health `phi` information.
### Steps to migrate
In version 2.1, the Text Analytics API uses one endpoint for Named Entity Recogn
If your application uses the REST API, update its request endpoint to the v3 endpoints for NER and/or entity linking. Entity Linking
-* `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.0/entities/linking`
+* `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/linking`
NER
-* `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.0/entities/recognition/general`
+* `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/general`
You will also need to update your application to use the [entity categories](named-entity-types.md) returned in the [API's response](how-tos/text-analytics-how-to-entity-linking.md#view-results). See the reference documentation for examples of the JSON response. * [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/5ac4251d5b4ccd1554da7634) * [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/EntitiesRecognitionGeneral)
-* [Version 3.1-preview](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-5/operations/EntitiesRecognitionGeneral)
+* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/EntitiesRecognitionGeneral)
#### Client libraries
The following table lists the entity categories returned for NER v2.1.
## [Language detection](#tab/language-detection)
+> [!TIP]
+> Want to use the latest version of the API in your application? See the [language detection](how-tos/text-analytics-how-to-language-detection.md) how-to article and [quickstart](quickstarts/client-libraries-rest-api.md) for information on the current version of the API.
+ ### Feature changes The language detection feature output has changed in v3. The JSON response will contain `ConfidenceScore` instead of `score`. V3 also only returns one language in a `detectedLanguage` attribute for each document.
The language detection feature output has changed in v3. The JSON response will
#### REST API
-If your application uses the REST API, update its request endpoint to the v3 endpoint for language detection. For example:`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.0/languages`. You will also need to update the application to use `ConfidenceScore` instead of `score` in the [API's response](how-tos/text-analytics-how-to-language-detection.md#step-3-view-the-results).
+If your application uses the REST API, update its request endpoint to the v3 endpoint for language detection. For example: `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/languages`. You will also need to update the application to use `ConfidenceScore` instead of `score` in the [API's response](how-tos/text-analytics-how-to-language-detection.md#step-3-view-the-results).
See the reference documentation for examples of the JSON response. * [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/56f30ceeeda5650db055a3c7) * [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Languages)
-* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-5/operations/Languages)
+* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Languages)
#### Client libraries
See the reference documentation for examples of the JSON response.
## [Key phrase extraction](#tab/key-phrase-extraction)
+> [!TIP]
+> Want to use the latest version of the API in your application? See the [key phrase extraction](how-tos/text-analytics-how-to-keyword-extraction.md) how-to article and [quickstart](quickstarts/client-libraries-rest-api.md) for information on the current version of the API.
+ ### Feature changes The key phrase extraction feature has not changed in v3 outside of the endpoint version.
If your application uses the REST API, update its request endpoint to the v3 end
See the reference documentation for examples of the JSON response. * [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/56f30ceeeda5650db055a3c6) * [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/KeyPhrases)
-* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-5/operations/KeyPhrases)
+* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/KeyPhrases)
#### Client libraries
cognitive-services Named Entity Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/named-entity-types.md
Previously updated : 06/03/2021 Last updated : 06/08/2021
Use this article to find the entity categories that can be returned by [Named Entity Recognition](how-tos/text-analytics-how-to-entity-linking.md) (NER). NER runs a predictive model to identify and categorize named entities from an input document.
-A preview of NER v3.1 is also available, which includes the ability to detect personal (`PII`) and health (`PHI`) information. Additionally, click on the **Health** tab to see a list of supported categories in Text Analytics for health.
+NER v3.1 is also available and includes the ability to detect personal (`PII`) and health (`PHI`) information. Additionally, click on the **Health** tab to see a list of supported categories in Text Analytics for health.
You can find a list of types returned by version 2.1 in the [migration guide](migration-guide.md?tabs=named-entity-recognition)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/overview.md
The Text Analytics API is a cloud-based service that provides Natural Language Processing (NLP) features for text mining and text analysis, including: sentiment analysis, opinion mining, key phrase extraction, language detection, and named entity recognition.
-The API is a part of [Azure Cognitive Services](../index.yml), a collection of machine learning and AI algorithms in the cloud for your development projects. You can use these features with the REST API [version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-V3-0/) or [version 3.1-preview](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1-preview-5/), or the [client library](quickstarts/client-libraries-rest-api.md).
+The API is a part of [Azure Cognitive Services](../index.yml), a collection of machine learning and AI algorithms in the cloud for your development projects. You can use these features with the REST API [version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-V3-0/) or [version 3.1](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1), or the [client library](quickstarts/client-libraries-rest-api.md).
> [!VIDEO https://channel9.msdn.com/Shows/AI-Show/Whats-New-in-Text-Analytics-Opinion-Mining-and-Async-API/player]
This documentation contains the following types of articles:
## Sentiment analysis
-Use [sentiment analysis](how-tos/text-analytics-how-to-sentiment-analysis.md) and find out what people think of your brand or topic by mining the text for clues about positive or negative sentiment.
+Use [sentiment analysis](how-tos/text-analytics-how-to-sentiment-analysis.md) (SA) and find out what people think of your brand or topic by mining the text for clues about positive or negative sentiment.
The feature provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at the sentence and document level. This feature also returns confidence scores between 0 and 1 for each document and the sentences within it for positive, neutral and negative sentiment. You can also run the service on-premises [using a container](how-tos/text-analytics-how-to-install-containers.md).
-Starting in the v3.1 preview, opinion mining is a feature of Sentiment Analysis. Also known as Aspect-based Sentiment Analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to words (such as the attributes of products or services) in text.
+Starting in v3.1, opinion mining (OM) is a feature of Sentiment Analysis. Also known as Aspect-based Sentiment Analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to words (such as the attributes of products or services) in text.
## Key phrase extraction
-Use [key phrase extraction](how-tos/text-analytics-how-to-keyword-extraction.md) to quickly identify the main concepts in text. For example, in the text "The food was delicious and there were wonderful staff", Key Phrase Extraction will return the main talking points: "food" and "wonderful staff".
+Use [key phrase extraction](how-tos/text-analytics-how-to-keyword-extraction.md) (KPE) to quickly identify the main concepts in text. For example, in the text "The food was delicious and there were wonderful staff", Key Phrase Extraction will return the main talking points: "food" and "wonderful staff".
## Language detection
Language detection can [detect the language an input text is written in](how-tos
Named Entity Recognition (NER) can [identify and categorize entities](how-tos/text-analytics-how-to-entity-linking.md) in your text as people, places, organizations, and quantities. Well-known entities are also recognized and linked to more information on the web.
+## Text Analytics for health
+
+Text Analytics for health is a feature of the Text Analytics API service that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
+ ## Deploy on premises using Docker containers [Use Text Analytics containers](how-tos/text-analytics-how-to-install-containers.md) to deploy API features on-premises. These docker containers enable you to bring the service closer to your data for compliance, security or other operational reasons. Text Analytics offers the following containers:
Named Entity Recognition (NER) can [Identify and categorize entities](how-tos/te
* sentiment analysis * key phrase extraction (preview) * language detection (preview)
-* Text Analytics for health (preview)
+* Text Analytics for health
## Asynchronous operations
-The `/analyze` endpoint enables you to use select features of the Text Analytics API [asynchronously](how-tos/text-analytics-how-to-call-api.md), such as NER and key phrase extraction.
+The `/analyze` endpoint enables you to use many features of the Text Analytics API [asynchronously](how-tos/text-analytics-how-to-call-api.md). Named Entity Recognition (NER), key phrase extraction (KPE), Sentiment Analysis (SA), and Opinion Mining (OM) are available as part of the `/analyze` endpoint, which lets you combine these features in a single call and send up to 125,000 characters per document. Pricing is the same as for regular Text Analytics.
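As a rough sketch of what combining features in one call might look like, the job body below reuses the task-list structure from the `/analyze` examples earlier in this digest; the `analysisInput` wrapper is an assumption, and a sentiment or opinion-mining task could be added in the same way (its exact key name isn't shown here). The body would be POSTed to the `/analyze` endpoint and polled as described in the how-to article.

```python
# Illustrative /analyze job body (assumed shape, mirroring the examples earlier in
# this digest): several Text Analytics features combined in one asynchronous request.
job = {
    "displayName": "Combined analysis",
    "analysisInput": {  # assumed wrapper for the documents collection
        "documents": [
            {"id": "doc1", "language": "en", "text": "Pike place market is my favorite Seattle attraction."}
        ]
    },
    "tasks": {
        "entityRecognitionTasks": [{"parameters": {"model-version": "latest"}}],
        "entityRecognitionPiiTasks": [{"parameters": {"model-version": "latest"}}],
        "keyPhraseExtractionTasks": [{"parameters": {"model-version": "latest"}}],
    },
}
print(job)
```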
## Typical workflow
cognitive-services Client Libraries Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api.md
Previously updated : 06/11/2021 Last updated : 07/06/2021 keywords: text mining, sentiment analysis, text analytics
Use this article to get started with the Text Analytics client library and REST
::: zone pivot="programming-language-java" > [!IMPORTANT]
-> * The latest stable version of the Text Analytics API is `3.0`.
+> * The latest stable version of the Text Analytics API is `3.1`.
> * The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. For production scenarios, we recommend using the batched asynchronous methods for performance and scalability. See the reference documentation below. If you want to use Text Analytics for health or Asynchronous operations, see the examples on Github for [C#](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics), [Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/) or [Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/textanalytics/azure-ai-textanalytics)
If you want to use Text Analytics for health or Asynchronous operations, see the
::: zone pivot="programming-language-javascript" > [!IMPORTANT]
-> * The latest stable version of the Text Analytics API is `3.0`.
+> * The latest stable version of the Text Analytics API is `3.1`.
> * Be sure to only follow the instructions for the version you are using. > * The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. For production scenarios, we recommend using the batched asynchronous methods for performance and scalability. See the reference documentation below. > * You can also run this version of the Text Analytics client library [in your browser](https://github.com/Azure/azure-sdk-for-js/blob/master/documentation/Bundling.md).
If you want to use Text Analytics for health or Asynchronous operations, see the
::: zone pivot="programming-language-python" > [!IMPORTANT]
-> * The latest stable version of the Text Analytics API is `3.0`.
+> * The latest stable version of the Text Analytics API is `3.1`.
> * Be sure to only follow the instructions for the version you are using. > * The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. For production scenarios, we recommend using the batched asynchronous methods for performance and scalability. See the reference documentation below. If you want to use Text Analytics for health or Asynchronous operations, see the examples on Github for [C#](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics), [Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/textanalytics/azure-ai-textanalytics/) or [Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/textanalytics/azure-ai-textanalytics)
If you want to use Text Analytics for health or Asynchronous operations, see the
::: zone pivot="rest-api" > [!IMPORTANT]
-> * The latest stable version of the Text Analytics API is `3.0`.
+> * The latest stable version of the Text Analytics API is `3.1`.
> * Be sure to only follow the instructions for the version you are using. [!INCLUDE [REST API quickstart](../includes/quickstarts/rest-api.md)]
cognitive-services Tutorial Power Bi Key Phrases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/tutorials/tutorial-power-bi-key-phrases.md
You might also consider filtering out blank messages using the Remove Empty filt
## Understand the API <a name="UnderstandingAPI"></a>
-The [Key Phrases API](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-V3-0/operations/KeyPhrases) of the Text Analytics service can process up to a thousand text documents per HTTP request. Power BI prefers to deal with records one at a time, so in this tutorial your calls to the API will include only a single document each. The Key Phrases API requires the following fields for each document being processed.
+The [Key Phrases API](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-V3-1/operations/KeyPhrases) of the Text Analytics service can process up to a thousand text documents per HTTP request. Power BI prefers to deal with records one at a time, so in this tutorial your calls to the API will include only a single document each. The Key Phrases API requires the following fields for each document being processed.
| Field | Description | | - | - |
The Sentiment Analysis function below returns a label indicating how positive th
// Returns the sentiment label of the text, for example, positive, negative or mixed. (text) => let apikey = "YOUR_API_KEY_HERE",
- endpoint = "<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1-preview.5/sentiment",
+ endpoint = "<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/sentiment",
jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))), jsonbody = "{ documents: [ { language: ""en"", id: ""0"", text: " & jsontext & " } ] }", bytesbody = Text.ToBinary(jsonbody),
Here are two versions of a Language Detection function. The first returns the IS
// Returns the two-letter language code (for example, 'en' for English) of the text (text) => let apikey = "YOUR_API_KEY_HERE",
- endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.0/languages",
+ endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/languages",
jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))), jsonbody = "{ documents: [ { id: ""0"", text: " & jsontext & " } ] }", bytesbody = Text.ToBinary(jsonbody),
Here are two versions of a Language Detection function. The first returns the IS
// Returns the name (for example, 'English') of the language in which the text is written (text) => let apikey = "YOUR_API_KEY_HERE",
- endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.0/languages",
+ endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/languages",
jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))), jsonbody = "{ documents: [ { id: ""0"", text: " & jsontext & " } ] }", bytesbody = Text.ToBinary(jsonbody),
Finally, here's a variant of the Key Phrases function already presented that ret
// Returns key phrases from the text as a list object (text) => let apikey = "YOUR_API_KEY_HERE",
- endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.0/keyPhrases",
+ endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com" & "/text/analytics/v3.1/keyPhrases",
jsontext = Text.FromBinary(Json.FromValue(Text.Start(Text.Trim(text), 5000))), jsonbody = "{ documents: [ { language: ""en"", id: ""0"", text: " & jsontext & " } ] }", bytesbody = Text.ToBinary(jsonbody),
in keyphrases
Learn more about the Text Analytics service, the Power Query M formula language, or Power BI. > [!div class="nextstepaction"]
-> [Text Analytics API reference](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0)
+> [Text Analytics API reference](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1)
> [!div class="nextstepaction"] > [Power Query M reference](/powerquery-m/power-query-m-reference)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
Previously updated : 06/17/2021 Last updated : 07/07/2021
The Text Analytics API is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## July 2021
+
+### GA release updates
+
+* General availability for Text Analytics for health for both containers and hosted API (/health).
+* General availability for Opinion Mining.
+* General availability for PII extraction and redaction.
+* General availability for Asynchronous (`/analyze`) endpoint.
+* Updated [quickstart](quickstarts/client-libraries-rest-api.md) examples.
+ ## June 2021 ### General API updates
-* New model-version `2021-06-01` for key phrase extraction, which adds support for simplified Chinese.
+* New model-version `2021-06-01` for key phrase extraction based on transformers. It provides:
+ * Support for 10 languages (Latin and CJK).
+ * Improved key phrase extraction.
* The `2021-06-01` model version for [Named Entity Recognition](how-tos/text-analytics-how-to-entity-linking.md) v3.x, which provides * Improved AI quality and expanded language support for the *Skill* entity category. * Added Spanish, French, German, Italian and Portuguese language support for the *Skill* entity category
-* Asynchronous operation and Text Analytics for health are available in all regions
+* Asynchronous (`/analyze`) operation and Text Analytics for health (ungated preview) are available in all regions.
### Text Analytics for health updates
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
Title: Teams meeting interoperability
+ Title: Teams meeting interoperability
description: Join Teams meetings
# Teams interoperability - > [!IMPORTANT]
-> To enable/disable [Teams tenant interoperability](../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
+> BYOI interoperability is in public preview and broadly available on request. To enable/disable [Teams tenant interoperability](../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
+>
+> Microsoft 365 authenticated interoperability is in private preview, and restricted using service controls to Azure Communication Services early adopters. To enable/disable the custom Teams endpoint experience, complete [this form](https://forms.office.com/r/B8p5KqCH19).
+>
+> Preview APIs and SDKs are provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Communication Services can be used to build custom applications that interact with Microsoft Teams. End users of your Communication Services application can interact with Teams participants over voice, video, chat, and screen sharing.
-> [!NOTE]
-> Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting. Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact, in real time, to your users within your applicationΓÇÖs user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation.
+Azure Communication Services supports two types of Teams interoperability depending on the identity of the end user:
-> [!NOTE]
-> VoIP and Chat usage is only billed to your Azure resource when using Azure APIs and SDKs. Teams clients interacting with Azure Communication Services applications are free.
+- **Bring your own identity.** You control user authentication, and users of your custom applications don't need Azure Active Directory identities or Teams licenses to join Teams meetings. Teams treats your application as an anonymous external user.
+- **Microsoft 365 Teams identity.** Your application acts on behalf of an end user's Microsoft 365 identity and their Teams configured resources. These authenticated applications can make calls and join meetings seamlessly on behalf of Microsoft 365 users.
-Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.
+Applications can implement both authentication schemes and leave the choice of authentication up to the end user.
-Teams interoperability allows you to create custom applications that connect users to Teams meetings. Users of your custom applications don't need to have Azure Active Directory identities or Teams licenses to experience this capability. This is ideal for bringing employees (who may be familiar with Teams) and external users (using a custom application experience) together into a seamless meeting experience. For example:
+## Bring your own identity
+Bring your own identity (BYOI) is the most common and simplest model for using Azure Communication Services and Teams interoperability. You implement whatever authentication scheme you desire, your app can join Microsoft Teams meetings, and Teams will treat these users as anonymous external accounts.
-1. Employees use Teams to schedule a meeting
+This capability is ideal for business-to-consumer applications that bring together employees (familiar with Teams) and external users (using a custom application experience) into a meeting experience. For example:
+
+1. Employees use Teams to schedule a meeting
1. Meeting details are shared with external users through your custom application.
- * **Using Graph API** Your custom Communication Services application uses the Microsoft Graph APIs to access meeting details to be shared.
- * **Using other options** For example, your meeting link can be copied from your calendar in Microsoft Teams.
+ * **Using Graph API** - Your custom application uses the Microsoft Graph APIs to access meeting details to be shared.
+ * **Manual options** - For example, your meeting link can be copied from your calendar in Microsoft Teams.
1. External users use your custom application to join the Teams meeting (via the Communication Services Calling and Chat SDKs)
-The high-level architecture for this use-case looks like this:
+While certain Teams meeting features such as raised hand, together mode, and breakout rooms will only be available for Teams users, your custom application will have access to the meeting's core audio, video, chat, and screen sharing capabilities. Meeting chat will be accessible to your custom application user while they're in the call. They won't be able to send or receive messages before joining or after leaving the call. If the meeting is scheduled for a channel, Communication Services users will not be able to join the chat or send and receive messages.
+
+When a Communication Services user joins the Teams meeting, the display name provided through the Calling SDK will be shown to Teams users. The Communication Services user will otherwise be treated like an anonymous user in Teams.
-![Architecture for Teams interop](./media/call-flows/teams-interop.png)
+Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
-Communication Services users can join scheduled Teams meetings as long as anonymous joins are enabled in the [meeting settings](/microsoftteams/meeting-settings-in-teams).
+Additional information on required dataflows for joining Teams meetings is available at the [client and server architecture page](client-and-server-architecture.md). The [Group Calling Hero Sample](../samples/calling-hero-sample.md) provides example code for joining a Teams meeting from a Web application.
-While certain Teams meeting features such as raised hand, together mode, and breakout rooms will only be available for Teams users, your custom application will have access to the meeting's core audio, video, chat, and screen sharing capabilities. Meeting chat will be accessible to your custom application user while they're in the call. They won't be able to send or receive messages before joining or after leaving the call. If the meeting is scheduled for a channel, Communication Services users will not be able to join the chat or send and receive messages.
+## Microsoft 365 Teams identity
+Authenticating the end user's Microsoft 365 account and authorizing your application through Azure Active Directory allows for a deeper level of interoperability with Microsoft Teams. These applications can make calls and join meetings seamlessly on behalf of Microsoft 365 users. When interacting in a meeting or call, users of the native Teams app will observe your application's end users having the appropriate display name, profile picture, call history, and other Microsoft 365 attributes.
+
+This identity model is ideal for augmenting a Teams deployment with a fully custom user experience. For example, an application can be used to answer phone calls on behalf of the end user's Teams provisioned PSTN number and have a user interface optimized for a receptionist or call center business process.
+
+Building an Azure Communication Services app that uses Microsoft 365 resources requires:
+1. Authentication of the end user's Microsoft 365 credentials
+2. Authorization from the end user
+3. Application authorization from the end user's Azure Active Directory tenant
-When a Communication Services user joins the Teams meeting, the display name provided through the Calling SDK will be shown to Teams users. The Communication Services user will otherwise be treated like an anonymous user in Teams. Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
+Authentication and authorization of the end user are handled through [Microsoft Authentication Library (MSAL) flows](https://docs.microsoft.com/azure/active-directory/develop/msal-overview). The following diagram summarizes how to integrate your calling experiences with authenticated Teams interoperability:
+
+![Process to enable calling feature for custom Teams endpoint experience](./media/teams-identities/teams-identity-calling-overview.png)
+## Privacy
+Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
+
+Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact, in real time, to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation.
+
+## Pricing
+All usage of Azure Communication Services APIs and SDKs increments [Azure Communication Services billing meters](https://azure.microsoft.com/pricing/details/communication-services/). Interactions with Microsoft Teams, such as joining a meeting or initiating a phone call using a Teams-allocated number, will increment these meters, but there is no additional fee for the Teams interoperability capability itself, and there is no pricing distinction between the BYOI and Microsoft 365 authentication options.
+
+If an end user of your Azure application spends 10 minutes in a meeting with a Microsoft Teams user, those two users combined consume 20 calling minutes. The 10 minutes exercised through the custom application using Azure APIs and SDKs will be billed to your resource. However, the 10 minutes consumed by the end user in the native Teams application are covered by the applicable Teams license and are not metered by Azure.
## Teams in Government Clouds (GCC)
-Azure Communication Services interoperability isn't compatible with Teams deployments using [Microsoft 365 government clouds (GCC)](/MicrosoftTeams/plan-for-government-gcc) at this time.
+Azure Communication Services interoperability isn't compatible with Teams deployments using [Microsoft 365 government clouds (GCC)](/MicrosoftTeams/plan-for-government-gcc) at this time.
## Next steps > [!div class="nextstepaction"]
-> [Join your calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)
-
-For more information, see the following articles:
--- Learn about [UI Library](./ui-library/ui-library-overview.md)-- Learn about [UI Library capabilities](./ui-library/ui-library-use-cases.md)
+> [Join a BYOI calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)
+> [Authenticate Microsoft 365 users](../quickstarts/manage-teams-identity.md)
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
Features of confidential computing nodes include:
- Linux worker nodes supporting Linux containers. - Generation 2 virtual machine (VM) with Ubuntu 18.04 VM nodes.-- Intel SGX capable CPU to help run your containers in confidentiality protected enclave leveraging Encrypted Page Cache Memory (EPC). For more information, see [Frequently asked questions for Azure confidential computing](./faq.md).-- Intel SGX DCAP Driver preinstalled on the confidential computing nodes. For more information, see [Frequently asked questions for Azure confidential computing](./faq.md).
+- Intel SGX capable CPU to help run your containers in confidentiality protected enclave leveraging Encrypted Page Cache Memory (EPC). For more information, see [Frequently asked questions for Azure confidential computing](./faq.yml).
+- Intel SGX DCAP Driver preinstalled on the confidential computing nodes. For more information, see [Frequently asked questions for Azure confidential computing](./faq.yml).
> [!NOTE] > DCsv2 VMs use specialized hardware that's subject to higher pricing and region availability. For more information, see the [available SKUs and supported regions](virtual-machine-solutions.md).
confidential-computing Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/faq.md
- Title: Azure Confidential Computing FAQ
-description: Answers to frequently asked questions about Azure confidential computing.
----- Previously updated : 4/17/2020---
-# Frequently asked questions for Azure Confidential Computing
-
-This article provides answers to some of the most common questions about running [confidential computing workloads on Azure virtual machines](overview.md).
-
-If your Azure issue is not addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You can also submit an Azure support request. To submit a support request, on the [Azure support page](https://azure.microsoft.com/support/options/), select Get support.
-
-## Confidential Computing Virtual Machines <a id="vm-faq"></a>
-
-**How can I deploy DCsv2 series VMs on Azure?**
-
-Here are some ways you can deploy a DCsv2 VM:
- - Using an [Azure Resource Manager Template](../virtual-machines/windows/template-description.md)
- - From the [Azure portal](https://portal.azure.com/#create/hub)
- - In the [Azure Confidential Computing (Virtual Machine)](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-azure-compute.acc-virtual-machine-v2?tab=overview) marketplace solution template. The marketplace solution template will help constrain a customer to the supported scenarios (regions, images, availability, disk encryption).
-
-**Will all OS images work with Azure confidential computing?**
-
-No. The virtual machines can only be deployed on Generation 2 operating machines with Ubuntu Server 18.04, Ubuntu Server 20.04, Windows Server 2019 Datacenter, and Windows Server 2016 Datacenter. Read more about Gen 2 VMs on [Linux](../virtual-machines/generation-2.md) and [Windows](../virtual-machines/generation-2.md)
-
-**DCsv2 virtual machines are grayed out in the portal and I can't select one**
-
-Based on the information bubble next to the VM, there are different actions to take:
 - **UnsupportedGeneration**: Change the generation of the virtual machine image to "Gen2".
- - **NotAvailableForSubscription**: The region isn't yet available for your subscription. Select an available region.
- - **InsufficientQuota**: [Create a support request to increase your quota](../azure-portal/supportability/per-vm-quota-requests.md). Free trial subscriptions don't have quota for confidential computing VMs.
-
-**DCsv2 virtual machines don't show up when I try to search for them in the portal size selector**
-
-Make sure you've selected an [available region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines). Also make sure you select "clear all filters" in the size selector.
-
-**Can I enable Accelerated Networking with Azure confidential computing?**
-
- No. Accelerated Networking isn't supported on DC-Series or DCsv2-Series virtual machines. Accelerated Networking cannot be enabled for any confidential computing virtual machine deployment or Azure Kubernetes Service cluster deployment running on confidential computing.
-
-**Can I use Azure Dedicated Host with these machines?**
-
-Yes. Azure Dedicated Host support DCsv2-series virtual machines. Azure Dedicated Host provides a single-tenant physical server to run your virtual machines on. Users usually use Azure Dedicated Host to address compliance requirements around physical security, data integrity, and monitoring.
-
-**I get an Azure Resource Manager template deployment failure error: "Operation could not be completed as it results in exceeding approved standard DcsV2 Family Cores Quota"**
-
-[Create a support request to increase your quota](../azure-portal/supportability/per-vm-quota-requests.md). Free trial subscriptions don't have quota for confidential computing VMs.
-
-**What's the difference between DCsv2-Series and DC-Series VMs?**
-
-DC-Series VMs run on older 6-core Intel Processors with Intel SGX and have less total memory, less Enclave Page Cache (EPC) memory, and are available in only two regions (US East and Europe West in Standard_DC2s and Standard_DC4s sizes). There are no plans to make these VMs Generally Available and they are not recommended for production use. To deploy these VMs, use the [Confidential Compute DC-Series VM [Preview]](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-azure-compute.confidentialcompute?tab=Overview) Marketplace instance.
-
-**Are DCsv2 virtual machines available globally?**
-
-No. At this time, these virtual machines are only available in select regions. Check the [products by regions page](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) for the latest available regions.
-
-**Is hyper-threading OFF on these machines?**
-
-Hyper-threading is disabled for all Azure confidential computing clusters.
-
-**How do I install the Open Enclave SDK on the DCsv2 virtual machines?**
-
-For instructions on how to install the OE SDK on an Azure or on-premise Machine, follow the instructions on the [Open Enclave SDK GitHub](https://github.com/openenclave/openenclave).
-
-You can also look into the Open Enclave SDK GitHub for OS-specific installation instructions:
- - [Install the OE SDK on Windows](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/install_oe_sdk-Windows.md)
- - [Install the OE SDK on Ubuntu 18.04](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/install_oe_sdk-Ubuntu_18.04.md)
container-registry Allow Access Trusted Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/allow-access-trusted-services.md
Title: Access network-restricted registry using trusted Azure service description: Enable a trusted Azure service instance to securely access a network-restricted container registry to pull or push images Previously updated : 01/29/2021 Last updated : 05/19/2021 # Allow trusted services to securely access a network-restricted container registry (preview)
-Azure Container Registry can allow select trusted Azure services to access a registry that's configured with network access rules. When trusted services are allowed, a trusted service instance can securely bypass the registry's network rules and perform operations such as pull or push images. The service instance's managed identity is used for access, and must be assigned an Azure role and authenticate with the registry.
+Azure Container Registry can allow select trusted Azure services to access a registry that's configured with network access rules. When trusted services are allowed, a trusted service instance can securely bypass the registry's network rules and perform operations such as pull or push images. This article explains how to enable and use trusted services with a network-restricted Azure container registry.
Use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.18 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
Allowing registry access by trusted Azure services is a **preview** feature.
## Limitations
-* You must use a system-assigned managed identity enabled in a [trusted service](#trusted-services) to access a network-restricted container registry. User-assigned managed identities aren't currently supported.
+* For registry access scenarios that need a managed identity, only a system-assigned identity may be used. User-assigned managed identities aren't currently supported.
* Allowing trusted services doesn't apply to a container registry configured with a [service endpoint](container-registry-vnet.md). The feature only affects registries that are restricted with a [private endpoint](container-registry-private-link.md) or that have [public IP access rules](container-registry-access-selected-networks.md) applied. ## About trusted services
Azure Container Registry has a layered security model, supporting multiple netwo
* [Private endpoint with Azure Private Link](container-registry-private-link.md). When configured, a registry's private endpoint is accessible only to resources within the virtual network, using private IP addresses. * [Registry firewall rules](container-registry-access-selected-networks.md), which allow access to the registry's public endpoint only from specific public IP addresses or address ranges. You can also configure the firewall to block all access to the public endpoint when using private endpoints.
-When deployed in a virtual network or configured with firewall rules, a registry denies access by default to users or services from outside those sources.
+When deployed in a virtual network or configured with firewall rules, a registry denies access to users or services from outside those sources.
-Several multi-tenant Azure services operate from networks that can't be included in these registry network settings, preventing them from pulling or pushing images to the registry. By designating certain service instances as "trusted", a registry owner can allow select Azure resources to securely bypass the registry's network settings to pull or push images.
+Several multi-tenant Azure services operate from networks that can't be included in these registry network settings, preventing them from performing operations such as pull or push images to the registry. By designating certain service instances as "trusted", a registry owner can allow select Azure resources to securely bypass the registry's network settings to perform registry operations.
### Trusted services Instances of the following services can access a network-restricted container registry if the registry's **allow trusted services** setting is enabled (the default). More services will be added over time.
-|Trusted service |Supported usage scenarios |
-|||
-|ACR Tasks | [Access a different registry from an ACR Task](container-registry-tasks-cross-registry-authentication.md) |
-|Machine Learning | [Deploy](../machine-learning/how-to-deploy-custom-docker-image.md) or [train](../machine-learning/how-to-train-with-custom-image.md) a model in a Machine Learning workspace using a custom Docker container image |
-|Azure Container Registry | [Import images from another Azure container registry](container-registry-import-images.md#import-from-an-azure-container-registry-in-the-same-ad-tenant) |
+Where indicated, access by the trusted service requires additional configuration of a managed identity in a service instance, assignment of an [RBAC role](container-registry-roles.md), and authentication with the registry. For example steps, see [Trusted services workflow](#trusted-services-workflow), later in this article.
+
+|Trusted service |Supported usage scenarios | Configure managed identity with RBAC role
+||||
+| Azure Security Center | Vulnerability scanning by [Azure Defender for container registries](scan-images-defender.md) | No |
+|ACR Tasks | [Access a different registry from an ACR Task](container-registry-tasks-cross-registry-authentication.md) | Yes |
+|Machine Learning | [Deploy](../machine-learning/how-to-deploy-custom-docker-image.md) or [train](../machine-learning/how-to-train-with-custom-image.md) a model in a Machine Learning workspace using a custom Docker container image | Yes |
+|Azure Container Registry | [Import images from another Azure container registry](container-registry-import-images.md#import-from-an-azure-container-registry-in-the-same-ad-tenant) | No |
> [!NOTE]
-> Currently, enabling the allow trusted services setting does not allow instances of other managed Azure services including App Service, Azure Container Instances, and Azure Security Center to access a network-restricted container registry.
+> Currently, the allow trusted services setting doesn't apply to certain other managed Azure services, including App Service and Azure Container Instances.
## Allow trusted services - CLI
To disable or re-enable the setting in the portal:
## Trusted services workflow
-Here's a typical workflow to enable an instance of a trusted service to access a network-restricted container registry.
+Here's a typical workflow to enable an instance of a trusted service to access a network-restricted container registry. This workflow is needed when a service instance's managed identity is used to bypass the registry's network rules.
1. Enable a system-assigned [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) in an instance of one of the [trusted services](#trusted-services) for Azure Container Registry. 1. Assign the identity an [Azure role](container-registry-roles.md) to your registry. For example, assign the ACRPull role to pull container images.
container-registry Scan Images Defender https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/scan-images-defender.md
+
+ Title: Scan registry images with Azure Defender
+description: Learn about using Azure Defender for container registries to scan images in your Azure container registries
+ Last updated : 05/19/2021++
+# Scan registry images with Azure Defender
+
+To scan images in your Azure container registries for vulnerabilities, you can integrate one of the available Azure Marketplace solutions or, if you want to use Azure Security Center, optionally enable **Azure Defender for container registries** at the subscription level.
+
+* Learn more about [Azure Defender for container registries](../security-center/defender-for-container-registries-introduction.md)
+* Learn more about [container security in Azure Security Center](../security-center/container-security.md)
+
+## Registry operations by Azure Defender
+
+Azure Defender scans images that are pushed to a registry, imported into a registry, or pulled within the last 30 days. If vulnerabilities are detected, [recommended remediations](../security-center/defender-for-container-registries-usage.md#view-and-remediate-findings) appear in Azure Security Center.
+
+ After you've taken the recommended steps to remediate the security issue, replace the image in your registry. Azure Defender rescans the image to confirm that the vulnerabilities are remediated.
+
+For details, see [Use Azure Defender for container registries](../security-center/defender-for-container-registries-usage.md).
+
+> [!TIP]
+> Azure Defender authenticates with the registry to pull images for vulnerability scanning. If [resource logs](monitor-service-reference.md#resource-logs) are collected for your registry, you'll see registry login events and image pull events generated by Azure Defender. These events are associated with an alphanumeric ID such as `b21cb118-5a59-4628-bab0-3c3f0e434cg6`.
+
+## Scanning a network-restricted registry
+
+Azure Defender can scan images in a publicly accessible container registry or one that's protected with network access rules. If network rules are configured (that is, you disable public registry access, configure IP access rules, or create private endpoints), be sure to enable the network setting to [**allow trusted Microsoft services**](allow-access-trusted-services.md) to access the registry. By default, this setting is enabled in a new container registry.
+
+## Next steps
+
+* Learn more about registry access by [trusted services](allow-access-trusted-services.md).
+* To restrict access to a registry using a private endpoint in a virtual network, see [Configure Azure Private Link for an Azure container registry](container-registry-private-link.md).
+* To set up registry firewall rules, see [Configure public IP network rules](container-registry-access-selected-networks.md).
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/best-practice-dotnet.md
Previously updated : 06/02/2021 Last updated : 07/08/2021
Watch the video below to learn more about using the .NET SDK from a Cosmos DB en
| <input type="checkbox" unchecked /> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3-sql.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).| | <input type="checkbox" unchecked /> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. | | <input type="checkbox" unchecked /> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property helps which control the time unused connections are closed. This will reduce the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommended values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. |
+| <input type="checkbox" unchecked /> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. |
| <input type="checkbox" unchecked /> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Cosmos DB [visit](troubleshoot-dot-net-sdk-request-timeout.md) | | <input type="checkbox" unchecked /> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK will not retry on writes for transient failures as writes are not idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](troubleshoot-dot-net-sdk.md#retry-logics) | | <input type="checkbox" unchecked /> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
cosmos-db Change Feed Pull Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed-pull-model.md
ms.devlang: dotnet Previously updated : 06/04/2021 Last updated : 07/08/2021
Here are some key differences between the change feed processor and pull model:
| Keeping track of current point in processing change feed | Lease (stored in an Azure Cosmos DB container) | Continuation token (stored in memory or manually persisted) | | Ability to replay past changes | Yes, with push model | Yes, with pull model| | Polling for future changes | Automatically checks for changes based on user-specified `WithPollInterval` | Manual |
-| Behavior where there are no new changes | Automatically wait `WithPollInterval` and recheck | Must catch exception and manually recheck |
+| Behavior where there are no new changes | Automatically wait `WithPollInterval` and recheck | Must check status and manually recheck |
| Process changes from entire container | Yes, and automatically parallelized across multiple threads/machine consuming from the same container| Yes, and manually parallelized using FeedRange | | Process changes from just a single partition key | Not supported | Yes|
Here are some key differences between the change feed processor and pull model:
## Consuming an entire container's changes
-You can create a `FeedIterator` to process the change feed using the pull model. When you initially create a `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value which consists of both the starting position for reading changes as well as the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that will be read from the change feed using that specific `FeedIterator`.
+You can create a `FeedIterator` to process the change feed using the pull model. When you initially create a `FeedIterator`, you must specify a required `ChangeFeedStartFrom` value, which consists of both the starting position for reading changes and the desired `FeedRange`. The `FeedRange` is a range of partition key values and specifies the items that will be read from the change feed using that specific `FeedIterator`.
You can optionally specify `ChangeFeedRequestOptions` to set a `PageSizeHint`. The `PageSizeHint` is the maximum number of items that will be returned in a single page.
Here's an example for obtaining a `FeedIterator` that returns a `Stream`:
FeedIterator iteratorWithStreams = container.GetChangeFeedStreamIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental); ```
-If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example which starts reading all changes starting at the current time:
+If you don't supply a `FeedRange` to a `FeedIterator`, you can process an entire container's change feed at your own pace. Here's an example that starts reading all changes from the current time:
```csharp FeedIterator iteratorForTheEntireContainer = container.GetChangeFeedStreamIterator<User>(ChangeFeedStartFrom.Now(), ChangeFeedMode.Incremental); while (iteratorForTheEntireContainer.HasMoreResults) {
- FeedResponse<User> users = await iteratorForTheEntireContainer.ReadNextAsync();
+ FeedResponse<User> response = await iteratorForTheEntireContainer.ReadNextAsync();
- if (users.Status == HttpStatusCode.NotModified)
+ if (response.StatusCode == HttpStatusCode.NotModified)
{ Console.WriteLine($"No new changes"); await Task.Delay(TimeSpan.FromSeconds(5)); } else {
- foreach (User user in users)
+ foreach (User user in response)
{ Console.WriteLine($"Detected change for user with id {user.id}"); }
while (iteratorForTheEntireContainer.HasMoreResults)
} ```
-Because the change feed is effectively an infinite list of items encompassing all future writes and updates, the value of `HasMoreResults` is always true. When you try to read the change feed and there are no new changes available, you'll receive an exception. In the above example, the exception is handled by waiting 5 seconds before rechecking for changes.
+Because the change feed is effectively an infinite list of items encompassing all future writes and updates, the value of `HasMoreResults` is always true. When you try to read the change feed and there are no new changes available, you'll receive a response with `NotModified` status. In the above example, it is handled by waiting 5 seconds before rechecking for changes.
## Consuming a partition key's changes
FeedIterator<User> iteratorForPartitionKey = container.GetChangeFeedIterator<Use
while (iteratorForThePartitionKey.HasMoreResults) {
- FeedResponse<User> users = await iteratorForThePartitionKey.ReadNextAsync();
+ FeedResponse<User> response = await iteratorForThePartitionKey.ReadNextAsync();
- if (users.Status == HttpStatusCode.NotModified)
+ if (response.StatusCode == HttpStatusCode.NotModified)
{ Console.WriteLine($"No new changes"); await Task.Delay(TimeSpan.FromSeconds(5)); } else {
- foreach (User user in users)
+ foreach (User user in response)
{ Console.WriteLine($"Detected change for user with id {user.id}"); }
IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync();
When you obtain of list of FeedRanges for your container, you'll get one `FeedRange` per [physical partition](partitioning-overview.md#physical-partitions).
-Using a `FeedRange`, you can then create a `FeedIterator` to parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to obtain a `FeedIterator` for the entire container or a single partition key, you can use FeedRanges to obtain multiple FeedIterators which can process the change feed in parallel.
+Using a `FeedRange`, you can then create a `FeedIterator` to parallelize the processing of the change feed across multiple machines or threads. Unlike the previous example that showed how to obtain a `FeedIterator` for the entire container or a single partition key, you can use FeedRanges to obtain multiple FeedIterators, which can process the change feed in parallel.
In the case where you want to use FeedRanges, you need to have an orchestrator process that obtains FeedRanges and distributes them to those machines. This distribution could be:
Machine 1:
FeedIterator<User> iteratorA = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[0]), ChangeFeedMode.Incremental); while (iteratorA.HasMoreResults) {
- FeedResponse<User> users = await iteratorA.ReadNextAsync();
+ FeedResponse<User> response = await iteratorA.ReadNextAsync();
- if (users.Status == HttpStatusCode.NotModified)
+ if (response.StatusCode == HttpStatusCode.NotModified)
{ Console.WriteLine($"No new changes"); await Task.Delay(TimeSpan.FromSeconds(5)); } else {
- foreach (User user in users)
+ foreach (User user in response)
{ Console.WriteLine($"Detected change for user with id {user.id}"); }
Machine 2:
FeedIterator<User> iteratorB = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(ranges[1]), ChangeFeedMode.Incremental); while (iteratorB.HasMoreResults) {
- FeedResponse<User> users = await iteratorA.ReadNextAsync();
+ FeedResponse<User> response = await iteratorB.ReadNextAsync();
- if (users.Status == HttpStatusCode.NotModified)
+ if (response.StatusCode == HttpStatusCode.NotModified)
{ Console.WriteLine($"No new changes"); await Task.Delay(TimeSpan.FromSeconds(5)); } else {
- foreach (User user in users)
+ foreach (User user in response)
{ Console.WriteLine($"Detected change for user with id {user.id}"); }
while (iteratorB.HasMoreResults)
## Saving continuation tokens
-You can save the position of your `FeedIterator` by obtaining the continuation token. A continuation token is a string value that keeps of track of your FeedIterator's last processed changes. This allows the `FeedIterator` to resume at this point later. The following code will read through the change feed since container creation. After no more changes are available, it will persist a continuation token so that change feed consumption can be later resumed.
+You can save the position of your `FeedIterator` by obtaining the continuation token. A continuation token is a string value that keeps track of your FeedIterator's last processed changes and allows the `FeedIterator` to resume at this point later. The following code will read through the change feed since container creation. After no more changes are available, it will persist a continuation token so that change feed consumption can be later resumed.
```csharp FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(ChangeFeedStartFrom.Beginning(), ChangeFeedMode.Incremental);
string continuation = null;
while (iterator.HasMoreResults) {
- FeedResponse<User> users = await iterator.ReadNextAsync();
+ FeedResponse<User> response = await iterator.ReadNextAsync();
- if (users.Status == HttpStatusCode.NotModified)
+ if (response.StatusCode == HttpStatusCode.NotModified)
{ Console.WriteLine($"No new changes");
- continuation = users.ContinuationToken;
+ continuation = response.ContinuationToken;
// Stop the consumption since there are no new changes break; } else {
- foreach (User user in users)
+ foreach (User user in response)
{ Console.WriteLine($"Detected change for user with id {user.id}"); }
cosmos-db How To Configure Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-configure-integrated-cache.md
This article describes how to provision a dedicated gateway, configure the integ
3. If you're using the .NET or Java SDK, set the connection mode to [gateway mode](sql-sdk-connection-modes.md#available-connectivity-modes). This step isn't necessary for the Python and Node.js SDKs since they don't have additional options of connecting besides gateway mode.
+> [!NOTE]
+> If you are using the latest .NET or Java SDK version, the default connection mode is direct mode. In order to use the integrated cache, you must override this default.
+
+If you're using the Java SDK, you must also manually set [contentResponseOnWriteEnabled](https://docs.microsoft.com/java/api/com.azure.cosmos.cosmosclientbuilder.contentresponseonwriteenabled?view=azure-java-stable) to `true` within the `CosmosClientBuilder`. If you're using any other SDK, this value already defaults to `true`, so you don't need to make any changes.
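For the .NET SDK, a minimal sketch of overriding the default connection mode might look like the following. The connection string is a placeholder for your dedicated gateway connection string, and the eventual consistency setting anticipates the requirement described in the next section.

```csharp
CosmosClient client = new CosmosClient(
    "<dedicated-gateway-connection-string>", // placeholder
    new CosmosClientOptions
    {
        // The latest .NET SDK defaults to Direct mode; the integrated cache
        // requires requests to flow through the dedicated gateway.
        ConnectionMode = ConnectionMode.Gateway,
        // Only reads at eventual consistency can be served from the integrated cache.
        ConsistencyLevel = ConsistencyLevel.Eventual
    });
```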
+ ## Adjust request consistency You must adjust the request consistency to eventual. If not, the request will always bypass the integrated cache. The easiest way to configure eventual consistency for all read operations is to [set it at the account-level](consistency-levels.md#configure-the-default-consistency-level). You can also configure consistency at the [request-level](how-to-manage-consistency.md#override-the-default-consistency-level), which is recommended if you only want a subset of your reads to utilize the integrated cache.
FeedIterator<Food> myQuery = container.GetItemQueryIterator<Food>(new QueryDefin
``` > [!NOTE]
-> Currently, you can only adjust the MaxIntegratedCacheStaleness using the latest .NET and Java preview SDK's.
+> Currently, you can only adjust the MaxIntegratedCacheStaleness using the latest [.NET](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.17.0-preview) and [Java](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.16.0-beta.1) preview SDK's.
## Verify cache hits
cosmos-db Performance Tips Dotnet Sdk V3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/performance-tips-dotnet-sdk-v3-sql.md
Previously updated : 10/13/2020 Last updated : 07/08/2021
Each `CosmosClient` instance is thread-safe and performs efficient connection ma
When you're working on Azure Functions, instances should also follow the existing [guidelines](../azure-functions/manage-connections.md#static-clients) and maintain a single instance.
-<a id="max-connection"></a>
+**Avoid blocking calls**
+
+Applications that use the Cosmos DB SDK should be designed to process many requests simultaneously. Asynchronous APIs allow a small pool of threads to handle thousands of concurrent requests by not waiting on blocking calls. Rather than waiting on a long-running synchronous task to complete, the thread can work on another request.
+
+A common performance problem in apps using the Cosmos DB SDK is blocking calls that could be asynchronous. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times.
+
+**Do not**:
+
+* Block asynchronous execution by calling [Task.Wait](/dotnet/api/system.threading.tasks.task.wait) or [Task.Result](/dotnet/api/system.threading.tasks.task-1.result).
+* Use [Task.Run](/dotnet/api/system.threading.tasks.task.run) to make a synchronous API asynchronous.
+* Acquire locks in common code paths. Cosmos DB .NET SDK is most performant when architected to run code in parallel.
+* Call [Task.Run](/dotnet/api/system.threading.tasks.task.run) and immediately await it. ASP.NET Core already runs app code on normal Thread Pool threads, so calling Task.Run only results in extra unnecessary Thread Pool scheduling. Even if the scheduled code would block a thread, Task.Run does not prevent that.
+* Do not use ToList() on `Container.GetItemLinqQueryable<T>()`, which uses blocking calls to synchronously drain the query. Use [ToFeedIterator()](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/e2029f2f4854c0e4decd399c35e69ef799db9f35/Microsoft.Azure.Cosmos/src/Resource/Container/Container.cs#L1143) to drain the query asynchronously, as shown in the sketch later in this section.
+
+**Do**:
+
+* Call the Cosmos DB .NET APIs asynchronously.
+* Make the entire call stack asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns.
+
+A profiler, such as [PerfView](https://github.com/Microsoft/perfview), can be used to find threads frequently added to the [Thread Pool](/windows/desktop/procthread/thread-pools). The `Microsoft-Windows-DotNETRuntime/ThreadPoolWorkerThread/Start` event indicates a thread added to the thread pool.
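To illustrate the guidance above, here's a minimal sketch that drains a LINQ query asynchronously with `ToFeedIterator()` instead of blocking on `ToList()`. The `Book` item type and the query filter are hypothetical.

```csharp
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Linq;

// 'container' is an existing Container instance; Book is a hypothetical item type.
FeedIterator<Book> iterator = container.GetItemLinqQueryable<Book>()
    .Where(b => b.Category == "azure")
    .ToFeedIterator();

while (iterator.HasMoreResults)
{
    // ReadNextAsync keeps the call stack asynchronous instead of blocking a thread.
    FeedResponse<Book> page = await iterator.ReadNextAsync();
    foreach (Book book in page)
    {
        Console.WriteLine(book.Id);
    }
}
```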
+ **Disable content response on write operations**
itemResponse.Resource
Enable *Bulk* for scenarios where the workload requires a large amount of throughput, and latency is not as important. For more information about how to enable the Bulk feature, and to learn which scenarios it should be used for, see [Introduction to Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk).
-**Increase System.Net MaxConnections per host when you use Gateway mode**
+<a id="max-connection"></a>**Increase System.Net MaxConnections per host when you use Gateway mode**
-Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) is 50. To change the value, you can set [`Documents.Client.ConnectionPolicy.MaxConnectionLimit`](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.maxconnectionlimit) to a higher value.
+Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) is 50. To change the value, you can set [`Documents.Client.ConnectionPolicy.MaxConnectionLimit`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.gatewaymodemaxconnectionlimit) to a higher value.
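In the v3 SDK this limit is exposed as `GatewayModeMaxConnectionLimit` on `CosmosClientOptions` (the option the link above points to). A minimal sketch, using an illustrative value of 200 and placeholder credentials:

```csharp
CosmosClientOptions options = new CosmosClientOptions
{
    ConnectionMode = ConnectionMode.Gateway,
    // Raise the per-endpoint connection limit used in Gateway mode (the default is 50).
    GatewayModeMaxConnectionLimit = 200
};

CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>", options);
```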
**Tune parallel queries for partitioned collections**
cosmos-db Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/performance-tips.md
Previously updated : 10/13/2020 Last updated : 07/08/2021
The Azure Cosmos DB SDKs are constantly being improved to provide the best perfo
Each `DocumentClient` instance is thread-safe and performs efficient connection management and address caching when operating in direct mode. To allow efficient connection management and better SDK client performance, we recommend that you use a single instance per `AppDomain` for the lifetime of the application.
+**Avoid blocking calls**
+
+Applications that use the Cosmos DB SDK should be designed to process many requests simultaneously. Asynchronous APIs allow a small pool of threads to handle thousands of concurrent requests by not waiting on blocking calls. Rather than waiting on a long-running synchronous task to complete, the thread can work on another request.
+
+A common performance problem in apps using the Cosmos DB SDK is blocking calls that could be asynchronous. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times.
+
+**Do not**:
+
+* Block asynchronous execution by calling [Task.Wait](/dotnet/api/system.threading.tasks.task.wait) or [Task.Result](/dotnet/api/system.threading.tasks.task-1.result).
+* Use [Task.Run](/dotnet/api/system.threading.tasks.task.run) to make a synchronous API asynchronous.
+* Acquire locks in common code paths. Cosmos DB .NET SDK is most performant when architected to run code in parallel.
+* Call [Task.Run](/dotnet/api/system.threading.tasks.task.run) and immediately await it. ASP.NET Core already runs app code on normal Thread Pool threads, so calling Task.Run only results in extra unnecessary Thread Pool scheduling. Even if the scheduled code would block a thread, Task.Run does not prevent that.
+* Do not use ToList() on `DocumentClient.CreateDocumentQuery(...)`, which uses blocking calls to synchronously drain the query. Use [AsDocumentQuery()](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/a4348f8cc0750434376b02ae64ca24237da28cd7/samples/code-samples/Queries/Program.cs#L690) to drain the query asynchronously, as shown in the sketch later in this section.
+
+**Do**:
+
+* Call the Cosmos DB .NET APIs asynchronously.
+* Make the entire call stack asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns.
+
+A profiler, such as [PerfView](https://github.com/Microsoft/perfview), can be used to find threads frequently added to the [Thread Pool](/windows/desktop/procthread/thread-pools). The `Microsoft-Windows-DotNETRuntime/ThreadPoolWorkerThread/Start` event indicates a thread added to the thread pool.
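For the v2 SDK, a minimal sketch of the same pattern uses `AsDocumentQuery()` to drain a query asynchronously. The database and collection names, the `Book` type, and the query filter are placeholders.

```csharp
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;

// 'client' is an existing DocumentClient instance.
IDocumentQuery<Book> query = client.CreateDocumentQuery<Book>(
        UriFactory.CreateDocumentCollectionUri("mydb", "mycoll"),
        new FeedOptions { EnableCrossPartitionQuery = true })
    .Where(b => b.Category == "azure")
    .AsDocumentQuery();

while (query.HasMoreResults)
{
    // ExecuteNextAsync drains the query page by page without blocking a thread.
    foreach (Book book in await query.ExecuteNextAsync<Book>())
    {
        Console.WriteLine(book.Id);
    }
}
```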
+ **Increase System.Net MaxConnections per host when using gateway mode** Azure Cosmos DB requests are made over HTTPS/REST when you use gateway mode. They're subjected to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (100 to 1,000) so the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) is 50. To change the value, you can set [Documents.Client.ConnectionPolicy.MaxConnectionLimit](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.maxconnectionlimit) to a higher value.
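A minimal sketch of raising this limit on the v2 `DocumentClient` through `ConnectionPolicy`; the endpoint, key, and the value 200 are placeholders within the 100 to 1,000 range suggested above:

```csharp
using Microsoft.Azure.Documents.Client;

ConnectionPolicy policy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Gateway,
    // Raise the default connection limit of 50 used in gateway mode.
    MaxConnectionLimit = 200
};

DocumentClient client = new DocumentClient(
    new Uri("<account-endpoint>"), "<account-key>", policy);
```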
cosmos-db Sql Query Linq To Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-linq-to-sql.md
The LINQ provider included with the SQL .NET SDK supports the following operator
- **String functions**: Supports translation from .NET `Concat`, `Contains`, `Count`, `EndsWith`,`IndexOf`, `Replace`, `Reverse`, `StartsWith`, `SubString`, `ToLower`, `ToUpper`, `TrimEnd`, and `TrimStart` to the equivalent [built-in string functions](sql-query-string-functions.md). - **Array functions**: Supports translation from .NET `Concat`, `Contains`, and `Count` to the equivalent [built-in array functions](sql-query-array-functions.md). - **Geospatial Extension functions**: Supports translation from stub methods `Distance`, `IsValid`, `IsValidDetailed`, and `Within` to the equivalent [built-in geospatial functions](sql-query-geospatial-query.md).-- **User-Defined Function Extension function**: Supports translation from the stub method `UserDefinedFunctionProvider.Invoke` to the corresponding [user-defined function](sql-query-udfs.md).
+- **User-Defined Function Extension function**: Supports translation from the stub method [CosmosLinq.InvokeUserDefinedFunction](/dotnet/api/microsoft.azure.cosmos.linq.cosmoslinq.invokeuserdefinedfunction?view=azure-dotnet&preserve-view=true) to the corresponding [user-defined function](sql-query-udfs.md).
- **Miscellaneous**: Supports translation of `Coalesce` and conditional [operators](sql-query-operators.md). Can translate `Contains` to String CONTAINS, ARRAY_CONTAINS, or IN, depending on context. ## Examples
data-factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-data-tool.md
To start the Copy Data tool, click the **Ingest** tile on the home page of your
![Screenshot that shows the home page - link to Copy Data tool.](./media/doc-common-process/get-started-page.png)
+After you launch the Copy Data tool, you will see two types of tasks: the **built-in copy task** and the **metadata-driven copy task**. The built-in copy task lets you create a pipeline within five minutes to replicate data without learning about Azure Data Factory entities. The metadata-driven copy task eases the creation of parameterized pipelines and an external control table to manage copying large numbers of objects (for example, thousands of tables) at scale. You can see more details in [metadata driven copy data](copy-data-tool-metadata-driven.md).
## Intuitive flow for loading data into a data lake This tool allows you to easily move data from a wide variety of sources to destinations in minutes with an intuitive flow:
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
Previously updated : 04/22/2021 Last updated : 07/08/2021 # Troubleshoot mapping data flows in Azure Data Factory
This article explores common troubleshooting methods for mapping data flows in A
- **Cause**: A large number of Data Flow activity runs are occurring concurrently on the integration runtime. For more information, see [Azure Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits). - **Recommendation**: If you want to run more Data Flow activities in parallel, distribute them across multiple integration runtimes.
+### Error code: 4510
+- **Message**: Unexpected failure during execution.
+- **Cause**: Debug clusters work differently from job clusters, so excessive debug runs can wear down the cluster over time, which can cause memory issues and abrupt restarts.
+- **Recommendation**: Restart the debug cluster. If you run multiple data flows during a debug session, use activity runs instead, because an activity-level run creates a separate session and doesn't tax the main debug cluster.
### Error code: InvalidTemplate - **Message**: The pipeline expression cannot be evaluated.
You may encounter the following issues before the improvement, but after the imp
Before the improvement, the default row delimiter `\n` may be unexpectedly used to parse delimited text files, because when Multiline setting is set to True, it invalidates the row delimiter setting, and the row delimiter is automatically detected based on the first 128 characters. If you fail to detect the actual row delimiter, it would fall back to `\n`.
- After the improvement, any one of the three row delimiters: `\r`, `\n`, `\r\n` should be worked.
+ After the improvement, any one of the three row delimiters `\r`, `\n`, and `\r\n` works.
The following example shows you one pipeline behavior change after the improvement:
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pricing-concepts.md
To accomplish the scenario, you need to create two pipelines with the following
| Create Pipeline | 6 Read/Write entities (2 for pipeline creation, 4 for dataset references) |
| Get Pipeline | 2 Read/Write entities |
| Run Pipeline | 6 Activity runs (2 for trigger run, 4 for activity runs) |
-| Execute Delete Activity: each execution time = 5 min. The Delete Activity execution in first pipeline is from 10:00 AM UTC to 10:05 AM UTC. The Delete Activity execution in second pipeline is from 10:02 AM UTC to 10:07 AM UTC.|Total 7 min pipeline activity execution in Managed VNET. Pipeline activity supports up to 50 concurrency in Managed VNET. |
+| Execute Delete Activity: each execution time = 5 min. The Delete Activity execution in the first pipeline is from 10:00 AM UTC to 10:05 AM UTC. The Delete Activity execution in the second pipeline is from 10:02 AM UTC to 10:07 AM UTC.|Total 7 min pipeline activity execution in Managed VNET. Pipeline activity supports up to 50 concurrent executions in Managed VNET. There is a 60-minute Time To Live (TTL) for pipeline activity.|
| Copy Data Assumption: each execution time = 10 min. The Copy execution in first pipeline is from 10:06 AM UTC to 10:15 AM UTC. The Copy Activity execution in second pipeline is from 10:08 AM UTC to 10:17 AM UTC. | 10 * 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
| Monitor Pipeline Assumption: Only 2 runs occurred | 6 Monitoring run records retrieved (2 for pipeline run, 4 for activity run) |
-**Total Scenario pricing: $0.45523**
+**Total Scenario pricing: $1.45523**
- Data Factory Operations = $0.00023
 - Read/Write = 20 * $0.00001 = $0.0002 [1 R/W = $0.50/50000 = $0.00001]
 - Monitoring = 6 * $0.000005 = $0.00003 [1 Monitoring = $0.25/50000 = $0.000005]
-- Pipeline Orchestration & Execution = $0.455
+- Pipeline Orchestration & Execution = $1.455
 - Activity Runs = 0.001*6 = 0.006 [1 run = $1/1000 = 0.001]
 - Data Movement Activities = $0.333 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
- - Pipeline Activity = $0.116 (Prorated for 7 minutes of execution time. $1/hour on Azure Integration Runtime)
+ - Pipeline Activity = $1.116 (Prorated for 7 minutes of execution time plus 60 minutes TTL. $1/hour on Azure Integration Runtime)
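As a quick sketch of how the rounded figures above combine into the scenario total (example rates only):

```csharp
// Rounded figures copied from the breakdown above; example rates only.
double operations       = 0.00023; // Read/Write + Monitoring
double activityRuns     = 0.006;   // 6 runs at $0.001 each
double dataMovement     = 0.333;   // prorated copy time on the Azure Integration Runtime
double pipelineActivity = 1.116;   // 7 min execution + 60 min TTL at $1/hour

double total = operations + activityRuns + dataMovement + pipelineActivity; // 1.45523
```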
> [!NOTE] > These prices are for example purposes only.
data-factory Tutorial Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-copy-data-tool.md
Previously updated : 07/06/2021 Last updated : 07/08/2021 # Copy data from Azure Blob storage to a SQL Database by using the Copy Data tool
databox-online Azure Stack Edge Gpu Connect Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-connect-resource-manager.md
You will install Azure PowerShell modules on your client that will work with you
If you used PowerShell Core 7.0 or later, the following example output indicates that the Az version 1.10.0 modules were installed successfully.

```output
- <!-- this doesn't look correct. Neeraj to provide one for PS core-->
+
PS C:\windows\system32> Install-Module -Name Az.BootStrapper
PS C:\windows\system32> Use-AzProfile -Profile 2020-09-01-hybrid -Force
Loading Profile 2020-09-01-hybrid
databox-online Azure Stack Edge Gpu Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-prep.md
To create an Azure Stack Edge resource, take the following steps in the Azure po
![Create a resource 8](media/azure-stack-edge-gpu-deploy-prep/create-resource-8.png)
- You are also notified that during the resource creation, a Managed Service Identity (MSI) is enabled that lets you authenticate to cloud services. This identity exists for as long as the resource exists.
+ You are also notified that during the resource creation, a managed identity is enabled that lets you authenticate to cloud services. This identity exists for as long as the resource exists.
11. Select **Create**.
- The resource creation takes a few minutes. An MSI is also created that lets the Azure Stack Edge device communicate with the resource provider in Azure.
+ The resource creation takes a few minutes. A managed identity is also created that lets the Azure Stack Edge device communicate with the resource provider in Azure.
After the resource is successfully created and deployed, you're notified. Select **Go to resource**.
In this tutorial, you learned about Azure Stack Edge Pro topics such as:
Advance to the next tutorial to learn how to install Azure Stack Edge Pro. > [!div class="nextstepaction"]
-> [Install Azure Stack Edge Pro](./azure-stack-edge-gpu-deploy-install.md)
+> [Install Azure Stack Edge Pro](./azure-stack-edge-gpu-deploy-install.md)
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/overview.md
Title: Service overview for device builders description: Learn about the Defender for IoT features and services, and understand how Defender for IoT provides comprehensive IoT security. Previously updated : 05/27/2021 Last updated : 07/08/2021 # Welcome to Azure Defender for IoT for device builders
-Operational technology (OT) networks power many of the most critical aspects of our society. But many of these technologies were not designed with security in mind and can't be protected with traditional IT security controls. Meanwhile, the Internet of Things (IoT) is enabling a new wave of innovation with billions of connected devices, increasing the attack surface and risk.
-
-Azure Defender for IoT is a unified security solution for identifying IoT/OT devices, vulnerabilities, and threats. It enables you to secure your entire IoT/OT environment, whether you need to protect existing IoT/OT devices or build security into new IoT innovations.
-
-Azure Defender for IoT offers two sets of capabilities to fit your environment's needs.
-
-For end-user organizations with IoT/OT environments, Azure Defender for IoT delivers agentless, network-layer monitoring that:
-
-- Can be rapidly deployed.
-- Integrates easily with diverse industrial equipment and SOC tools.
-- Has zero impact on IoT/OT network performance or stability.
-
-The platform can be deployed fully on-premises or in Azure-connected and hybrid environments.
-
-For IoT device builders, Azure Defender for IoT also offers lightweight a micro agent that supports standard IoT operating systems, such as Linux and RTOS. This lightweight agent helps ensure that security is built into your IoT/OT initiatives from the edge to the cloud. It includes source code for flexible, customizable deployment.
-
-## Agent-based solution
Security is a near-universal concern for IoT implementers. IoT devices have unique needs for endpoint monitoring, security posture management, and threat detection – all with highly specific performance requirements.

The Azure Defender for IoT security agents allow you to build security directly into your new IoT devices and Azure IoT projects. The micro agent has flexible deployment options, including the ability to deploy as a binary package or modify source code. And the micro agent is available for standard IoT operating systems like Linux and Azure RTOS.

The Azure Defender for IoT micro agent provides endpoint visibility into security posture management, threat detection, and integration into Microsoft's other security tools for unified security management.
-### Security posture management
+## Security posture management
Proactively monitor the security posture of your IoT devices. Azure Defender for IoT provides security posture recommendations based on the CIS benchmark, along with device-specific recommendations. Get visibility into operating system security, including OS configuration, firewall configuration, and permissions.
-### Endpoint IoT/OT threat detection
+## Endpoint IoT/OT threat detection
Detect threats like botnets, brute force attempts, crypto miners, and suspicious network activity. Create custom alerts to target the most important threats in your unique organization.
-### Flexible distribution and deployment models
+## Flexible distribution and deployment models
The Azure Defender for IoT micro agent includes source code, so you can incorporate the micro agent into firmware or customize it to include only what you need. It's also available as a binary package, or integrated directly into other Azure IoT solutions.
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-install-software.md
To install:
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot that shows selecting the version.":::
-1. In the Installation Wizard define the appliance profile and network properties.
+1. In the Installation Wizard, define the appliance profile and network properties.
:::image type="content" source="media/tutorial-install-components/installation-wizard-screen-v2.png" alt-text="Screenshot that shows the Installation Wizard.":::
To install the software:
:::image type="content" source="media/tutorial-install-components/defender-for-iot-management-console-sign-in-screen.png" alt-text="Screenshot that shows the management console's sign-in screen.":::
-## Legacy devices
+## Legacy appliances
-This section describes devices that are no longer available for purchase, but are still supported by Azure Defender for IoT.
+This section describes installation procedures for appliances that are still supported by Azure Defender for IoT but are **outdated**. Purchasing these appliances is **not recommended**.
### Nuvo 5006LP installation
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
To import:
:::image type="content" source="media/how-to-work-with-asset-inventory-information/add-new-file.png" alt-text="Upload of added files was successful.":::
+## View and delete inactive devices from the inventory
+
+You may want to view devices in your network that have been inactive and delete them.
+Devices may become inactive because of:
+- Misconfigured SPAN ports
+- Changes in network coverage
+- Unplugging from the network
+
+Deleting inactive devices helps you:
+
+- Give Defender for IoT a more accurate representation of current network activity
+- Better evaluate committed devices when managing subscriptions
+- Reduce clutter on your screen
+
+### View inactive devices
+
+You can filter the inventory to display devices that have been inactive for:
+
+- 7 days or more
+- 14 days or more
+- 30 days or more
+- 90 days or more
+
+**To filter the inventory:**
+
+1. Select the **Last Seen** filter icon in the Inventory.
+1. Select a filter option.
+1. Select **Apply**.
+
+### Delete inactive devices
+
+Devices you delete from the Inventory are removed from the map and aren't included when generating Defender for IoT reports, such as Data Mining, Risk Assessment, and Attack Vector reports.
+
+You will be prompted to record a reason for deleting devices. This information, as well as the time/date and number of devices deleted, appears in the Event timeline.
+
+**To delete devices from the inventory:**
+
+1. Select the **Last Seen** filter icon in the Inventory.
+1. Select a filter option.
+1. Select **Apply**.
+1. Select **Delete Inactive Devices**.
+1. In the confirmation dialog box that opens, enter the reason for the deletion and select **Delete**. All devices detected within the range of the filter will be deleted. If you delete a large number of devices, the delete process may take a few minutes.
+ ## Export device inventory information You can export device inventory information to an Excel file.
defender-for-iot How To Manage Sensors From The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-manage-sensors-from-the-on-premises-management-console.md
Title: Manage sensors from the on-premises management console description: Learn how to manage sensors from the management console, including updating sensor versions, pushing system settings to sensors, and enabling and disabling engines on sensors. Previously updated : 04/22/2021 Last updated : 07/08/2021
You can define the following sensor system settings from the management console:
- Port aliases
-To apply system settings:
+**To apply system settings**:
1. On the console's left pane, select **System Settings**.
To apply system settings:
You can update several sensors simultaneously from the on-premises management console.
-To update several sensors:
+**To update several sensors**:
1. Go to the [Azure portal](https://portal.azure.com/).
To update several sensors:
1. Select **Download** from the **Sensors** section and save the file.
-1. Sign in to the management console and select **System Settings**.
+1. Sign in to the management console, and select **System Settings**.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/admin-system-settings.png" alt-text="Screenshot of the Administration menu to select System Settings.":::
To update several sensors:
1. Select **Save Changes**.
-1. On the sensor, select **System Settings**, and then select **Update**.
+1. On the management console, select **System Settings**.
+1. Under the Sensor version update section, select the :::image type="icon" source="../media/how-to-manage-sensors-from-the-on-premises-management-console/add-icon.png" border="false"::: button.
- :::image type="content" source="media/how-to-manage-individual-sensors/upgrade-pane-v2.png" alt-text="Screenshot of the update pane.":::
+ :::image type="content" source="../media/how-to-manage-sensors-from-the-on-premises-management-console/sendor-version-update-window.png" alt-text="In the Sensor version update window select the + icon to update all of the sensors connected to the management console.":::
9. An **Upload File** dialog box opens. Upload the file that you downloaded from the **Updates** page.
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-explorer-plugin.md
You can invoke the plugin in a Kusto query with the following command. There are
evaluate azure_digital_twins_query_request(<Azure-Digital-Twins-endpoint>, <Azure-Digital-Twins-query>) ```
-The plugin works by calling the [Azure Digital Twins query API](/rest/api/digital-twins/dataplane/query), and the [query language structure](concepts-query-language.md) is the same as when using the API.
+The plugin works by calling the [Azure Digital Twins query API](/rest/api/digital-twins/dataplane/query), and the [query language structure](concepts-query-language.md) is the same as when using the API, with one exception: use of the `*` wildcard in the `SELECT` clause is not supported. Instead, Azure Digital Twins queries that are executed using the plugin should use aliases in the `SELECT` clause.
+
+For example, consider the below Azure Digital Twins query that is executed using the API:
+
+```SQL
+SELECT * FROM DIGITALTWINS
+```
+
+To execute that query when using the plugin, it should be rewritten like this:
+
+```SQL
+SELECT T FROM DIGITALTWINS T
+```
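Putting the endpoint and the rewritten query together, a complete plugin invocation might look like the following sketch; the instance URL is a placeholder for your own Azure Digital Twins endpoint:

```kusto
evaluate azure_digital_twins_query_request(
    'https://contoso-adt-instance.api.wcus.digitaltwins.azure.net',
    'SELECT T FROM DIGITALTWINS T')
```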
>[!IMPORTANT] >The user of the plugin must be granted the **Azure Digital Twins Data Reader** role or the **Azure Digital Twins Data Owner** role, as the user's Azure AD token is used to authenticate. Information on how to assign this role can be found in [Concepts: Security for Azure Digital Twins solutions](concepts-security.md#authorization-azure-roles-for-azure-digital-twins).
genomics File Support Ticket Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/file-support-ticket-genomics.md
Last updated 05/23/2018
# How to contact Microsoft Genomics for support
-This overview describes how to file a support request to contact Microsoft Genomics. This can be helpful if you are not able to troubleshoot your issue using the [troubleshooting guide](troubleshooting-guide-genomics.md) or the [FAQ](frequently-asked-questions-genomics.md).
+This overview describes how to file a support request to contact Microsoft Genomics. This can be helpful if you are not able to troubleshoot your issue using the [troubleshooting guide](troubleshooting-guide-genomics.md) or the [FAQ](frequently-asked-questions-genomics.yml).
## File a support ticket through the Azure portal
Last, add your contact information and select `Create` at the bottom of the scre
![Support request contact](./media/file-support-ticket/support-request-contact.png "Support request contact") ## Next steps
-In this article, you learned how to submit a support request. You can also resolve common issues using our [FAQ](frequently-asked-questions-genomics.md) and our [troubleshooting guide](troubleshooting-guide-genomics.md).
+In this article, you learned how to submit a support request. You can also resolve common issues using our [FAQ](frequently-asked-questions-genomics.yml) and our [troubleshooting guide](troubleshooting-guide-genomics.md).
genomics Frequently Asked Questions Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/frequently-asked-questions-genomics.md
- Title: Common questions - FAQ-
-description: Get answers to common questions related to using the Microsoft Genomics service, including technical information, SLA, and billing.
------ Previously updated : 12/07/2017--
-# Microsoft Genomics: Common questions
-
-This article lists the top queries you might have related to Microsoft Genomics. For more information on the Microsoft Genomics service, see [What is Microsoft Genomics?](overview-what-is-genomics.md). For more information about troubleshooting, see our [Troubleshooting Guide](troubleshooting-guide-genomics.md).
--
-## How do I run GATK4 workflows on Microsoft Genomics?
-In the Microsoft Genomics service's config.txt file, specify the process_name to `gatk4`. Note that you will be billed at regular billing rates.
-
-## How do I enable output compression?
-You can compress the output vcf or gvcf using an optional argument for output compression. This is equivalent to running `-bgzip` followed by `-tabix` on the vcf or gvcf output, to produce `.gz` (bgzip output) and `.tbi` (tabix output) files. `bgzip` compresses the vcf or gvcf file, and `tabix` creates an index for the compressed file. The argument is a boolean, which is set to `false` by default for vcf output, and to `true` by default for gcvf output. To use on the command line, specify `-bz` or `--bgzip-output` as `true` (run bgzip and tabix) or `false`. To use this argument in the config.txt file, add `bgzip_output: true` or `bgzip_output: false` to the file.
-
-## What is the SLA for Microsoft Genomics?
-We guarantee that 99.9% of the time Microsoft Genomics service will be available to receive workflow API requests. For more information, see [SLA](https://azure.microsoft.com/support/legal/sla/genomics/v1_0/).
-
-## How does the usage of Microsoft Genomics show up on my bill?
-Microsoft Genomics bills based on the number of gigabases processed per workflow. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/genomics/).
--
-## Where can I find a list of all possible commands and arguments for the `msgen` client?
-You can get a full list of available commands and arguments by running `msgen help`. If no further arguments are provided, it shows a list of available help sections, one for each of `submit`, `list`, `cancel`, and `status`. To get help for a specific command, type `msgen help command`; for example, `msgen help submit` lists all of the submit options.
-
-## What are the most commonly used commands for the `msgen` client?
-The most commonly used commands are arguments for the `msgen` client include:
-
- |**Command** | **Field description** |
- |:--|:- |
- |`list` |Returns a list of jobs you have submitted. For arguments, see `msgen help list`. |
- |`submit` |Submits a workflow request to the service. For arguments, see `msgen help submit`.|
- |`status` |Returns the status of the workflow specified by `--workflow-id`. See also `msgen help status`. |
- |`cancel` |Sends a request to cancel processing of the workflow specified by `--workflow-id`. See also `msgen help cancel`. |
-
-## Where do I get the value for `--api-url-base`?
-Go to Azure portal and open your Genomics account page. Under the **Management** heading, choose **Access keys**. There, you find both the API URL and your access keys.
-
-## Where do I get the value for `--access-key`?
-Go to Azure portal and open your Genomics account page. Under the **Management** heading, choose **Access keys**. There, you find both the API URL and your access keys.
-
-## Why do I need two access keys?
-You need two access keys in case you want to update (regenerate) them without interrupting usage of the service. For example, if you want to update the first key, you should have all new workflows use the second key. Then, wait for all the workflows using the first key to finish before updating the first key.
-
-## Do you save my storage account keys?
-Your storage account key is used to create short-term access tokens for the Microsoft Genomics service to read your input files and write the output files. The default token duration is 48 hours. The token duration can be changed with the `-sas/--sas-duration` option of the submit command; the value is in hours.
-
-## Does Microsoft Genomics store customer data?
-
-No. Microsoft Genomics does not store any customer data.
-
-## What genome references can I use?
-
-These references are supported:
-
- |Reference | Value of `-pa/--process-args` |
- |:- |:- |
- |b37 | `R=b37m1` |
- |hg38 | `R=hg38m1` |
- |hg38 (no alt analysis) | `R=hg38m1x` |
- |hg19 | `R=hg19m1` |
-
-## How do I format my command-line arguments as a config file?
-
-msgen understands configuration files in the following format:
-* All options are provided as key-value pairs with values separated from keys by a colon.
- Whitespace is ignored.
-* Lines starting with `#` are ignored.
-* Any command-line argument in the long format can be converted to a key by stripping its leading dashes and replacing dashes between words with underscores. Here are some conversion examples:
-
- |Command-line argument | Configuration file line |
- |:- |:- |
- |`-u/--api-url-base https://url` | *api_url_base:https://url* |
- |`-k/--access-key KEY` | *access_key:KEY* |
- |`-pa/--process-args R=B37m1` | *process_args:R-b37m1* |
-
-## Next steps
-
-Use the following resources to get started with Microsoft Genomics:
-- Get started by running your first workflow through the Microsoft Genomics service. [Run a workflow through the Microsoft Genomics service](quickstart-run-genomics-workflow-portal.md)-- Submit your own data for processing by the Microsoft Genomics service: [paired FASTQ](quickstart-input-pair-FASTQ.md) | [BAM](quickstart-input-BAM.md) | [Multiple FASTQ or BAM](quickstart-input-multiple.md) -
genomics Quickstart Input Bam https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-input-bam.md
output_storage_account_container: outputs
Submit the `config.txt` file with this invocation: `msgen submit -f config.txt` ## Next steps
-In this article, you uploaded a BAM file into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. For additional information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.md).
+In this article, you uploaded a BAM file into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. For additional information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
genomics Quickstart Input Multiple https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-input-multiple.md
output_storage_account_container: outputs
Submit the `config.txt` file with this invocation: `msgen submit -f config.txt` ## Next steps
-In this article, you uploaded multiple BAM files or paired FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. For more information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see the [FAQ](frequently-asked-questions-genomics.md).
+In this article, you uploaded multiple BAM files or paired FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. For more information regarding workflow submission and other commands you can use with the Microsoft Genomics service, see the [FAQ](frequently-asked-questions-genomics.yml).
genomics Quickstart Input Pair Fastq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-input-pair-fastq.md
output_storage_account_container: outputs
Submit the `config.txt` file with this invocation: `msgen submit -f config.txt` ## Next steps
-In this article, you uploaded a pair of FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. To learn more about workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.md).
+In this article, you uploaded a pair of FASTQ files into Azure Storage and submitted a workflow to the Microsoft Genomics service through the `msgen` python client. To learn more about workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
genomics Quickstart Input Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-input-sas.md
msgen submit -f [full path to your config file]
``` ## Next steps
-In this article, you used SAS tokens instead of the account keys to submit a workflow to the Microsoft Genomics service through the `msgen` Python client. For additional information about workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.md).
+In this article, you used SAS tokens instead of the account keys to submit a workflow to the Microsoft Genomics service through the `msgen` Python client. For additional information about workflow submission and other commands you can use with the Microsoft Genomics service, see our [FAQ](frequently-asked-questions-genomics.yml).
genomics Troubleshooting Guide Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/troubleshooting-guide-genomics.md
Last updated 10/29/2018
Here are a few troubleshooting tips for some of the common issues that you might face when using the Microsoft Genomics service, MSGEN.
- For FAQ, not related to troubleshooting, see [Common questions](frequently-asked-questions-genomics.md).
+ For FAQ, not related to troubleshooting, see [Common questions](frequently-asked-questions-genomics.yml).
## Step 1: Locate error codes associated with the workflow You can locate the error messages associated with the workflow by:
If you continue to have job failures, or if you have any other questions, contac
## Next steps
-In this article, you learned how to troubleshoot and resolve common issues with the Microsoft Genomics service. For more information and more general FAQ, see [Common questions](frequently-asked-questions-genomics.md).
+In this article, you learned how to troubleshoot and resolve common issues with the Microsoft Genomics service. For more information and more general FAQ, see [Common questions](frequently-asked-questions-genomics.yml).
genomics Version Release History Genomics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/version-release-history-genomics.md
The current Python client is version 0.9.0. It was released February 6 2019 and
New versions of the Microsoft Genomics Python client are released about once per year. As new versions of the Microsoft Genomics Python client are released, a list of fixes and features is updated here. When new versions are released, prior versions should continue to be supported for at least 90 days. When prior versions are no longer supported, it will be indicated on this page. ### Version 0.9.0
-Version 0.9.0 includes support for output compression. This is equivalent to running `-bgzip` followed by `-tabix` on the vcf or gvcf output. For more information, see [Frequently asked questions](frequently-asked-questions-genomics.md).
+Version 0.9.0 includes support for output compression. This is equivalent to running `-bgzip` followed by `-tabix` on the vcf or gvcf output. For more information, see [Frequently asked questions](frequently-asked-questions-genomics.yml).
### Version 0.8.1 Version 0.8.1 includes minor bug fixes.
hdinsight Hdinsight Hadoop Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-add-storage.md
If you change the key for a storage account, HDInsight can no longer access the
Running the script action again **doesn't** update the key, as the script checks to see if an entry for the storage account already exists. If an entry already exists, it doesn't make any changes.
-To work around this problem:
-1. Remove the storage account.
-1. Add the storage account.
+To work around this problem:
-> [!IMPORTANT]
-> Rotating the storage key for the primary storage account attached to a cluster is not supported.
+* See [Update storage account access keys](hdinsight-rotate-storage-keys.md) to learn how to rotate the access keys.
+
+* Alternatively, you can remove the storage account and then add it back.
## Next steps
hdinsight Hdinsight Rotate Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-rotate-storage-keys.md
+
+ Title: Update Azure Storage account access key in Azure HDInsight
+description: Learn how to update Azure Storage account access key in Azure HDInsight cluster.
++++ Last updated : 06/29/2021++
+# Update Azure storage account access keys in HDInsight cluster
+
+In this article, you will learn how to rotate Azure Storage account access keys for the primary or secondary storage accounts in Azure HDInsight.
+
+>[!CAUTION]
+> Directly rotating the access key on the storage side will make the HDInsight cluster inaccessible.
+
+## Prerequisites
+
+* We are going to rotate the primary and secondary access keys of the storage account in a staggered, alternating fashion to ensure the HDInsight cluster remains accessible throughout the process.
+
+ Here is an example of how to use the primary and secondary storage access keys and set up rotation policies on them:
+ 1. Use access key1 on the storage account when creating the HDInsight cluster.
+ 1. Set up a rotation policy for access key2 every N days. As part of this rotation, update HDInsight to use access key1 and then rotate access key2 on the storage account.
+ 1. Set up a rotation policy for access key1 every N/2 days. As part of this rotation, update HDInsight to use access key2 and then rotate access key1 on the storage account.
+ 1. With this approach, access key1 is rotated on days N/2, 3N/2, and so on, and access key2 is rotated on days N, 2N, 3N, and so on.
+
+* To set up periodic rotation of storage account keys see [Automate the rotation of a secret](../key-vault/secrets/tutorial-rotation-dual.md).
+
+## Update storage account access keys
+
+Use [Script Action](hdinsight-hadoop-customize-cluster-linux.md#script-action-to-a-running-cluster) to update the keys with the following considerations:
+
+|Property | Value |
+|---|---|
+|Bash script URI|`https://hdiconfigactions.blob.core.windows.net/linuxaddstorageaccountv01/update-storage-account-v01.sh`|
+|Node type(s)|Head|
+|Parameters|`ACCOUNTNAME` `ACCOUNTKEY` `-p` (optional)|
+
+* `ACCOUNTNAME` is the name of the storage account on the HDInsight cluster.
+* `ACCOUNTKEY` is the access key for `ACCOUNTNAME`.
+* `-p` is optional. If specified, the key isn't encrypted and is stored in the core-site.xml file as plain text.
+
+## Known issues
+
+The preceding script updates the access key on the cluster side only and does not renew the copy on the HDInsight resource provider side. Therefore, script actions whose scripts are hosted in the storage account will fail after the access key is rotated.
+
+Workaround:
+Use [SAS URIs](hdinsight-storage-sharedaccesssignature-permissions.md) for script actions or make the scripts publicly accessible.
+
+## Next steps
+
+* [Add additional storage accounts](hdinsight-hadoop-add-storage.md)
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-architecture.md
In an Azure IoT Central application, you can [continuously export your data](how
## Batch device updates
-In an Azure IoT Central application, you can [create and run jobs](howto-run-a-job.md) to manage connected devices. These jobs let you do bulk updates to device properties or settings, or run commands. For example, you can create a job to increase the fan speed for multiple refrigerated vending machines.
+In an Azure IoT Central application, you can [create and run jobs](howto-manage-devices-in-bulk.md) to manage connected devices. These jobs let you do bulk updates to device properties or settings, or run commands. For example, you can create a job to increase the fan speed for multiple refrigerated vending machines.
## Role-based access control (RBAC)
iot-central Concepts Get Connected https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-get-connected.md
This article describes how devices connect to an Azure IoT Central application.
IoT Central supports the following two device registration scenarios: - *Automatic registration*. The device is registered automatically when it first connects. This scenario enables OEMs to mass manufacture devices that can connect without first being registered. An OEM generates suitable device credentials, and configures the devices in the factory. Optionally, you can require an operator to approve the device before it starts sending data. This scenario requires you to configure an X.509 or SAS _group enrollment_ in your application.-- *Manual registration*. Operators either register individual devices on the **Devices** page, or [import a CSV file](howto-manage-devices.md#import-devices) to bulk register devices. In this scenario you can use X.509 or SAS _group enrollment_, or X.509 or SAS _individual enrollment_.
+- *Manual registration*. Operators either register individual devices on the **Devices** page, or [import a CSV file](howto-manage-devices-in-bulk.md#import-devices) to bulk register devices. In this scenario you can use X.509 or SAS _group enrollment_, or X.509 or SAS _individual enrollment_.
Devices that connect to IoT Central should follow the *IoT Plug and Play conventions*. One of these conventions is that a device should send the _model ID_ of the device model it implements when it connects. The model ID enables the IoT Central application to associate the device with the correct device template.
The IoT Central application uses the model ID sent by the device to [associate t
### Bulk register devices in advance
-To register a large number of devices with your IoT Central application, use a CSV file to [import device IDs and device names](howto-manage-devices.md#import-devices).
+To register a large number of devices with your IoT Central application, use a CSV file to [import device IDs and device names](howto-manage-devices-in-bulk.md#import-devices).
-If your devices use SAS tokens to authenticate, [export a CSV file from your IoT Central application](howto-manage-devices.md#export-devices). The exported CSV file includes the device IDs and the SAS keys.
+If your devices use SAS tokens to authenticate, [export a CSV file from your IoT Central application](howto-manage-devices-in-bulk.md#export-devices). The exported CSV file includes the device IDs and the SAS keys.
If your devices use X.509 certificates to authenticate, generate X.509 leaf certificates for your devices using the root or intermediate certificate in you uploaded to your X.509 enrollment group. Use the device IDs you imported as the `CNAME` value in the leaf certificates.
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-build-iotc-device-bridge.md
You can include a `modelId` field in the body. Use this field to associate the d
The `deviceId` must be alphanumeric, lowercase, and may contain hyphens.
-If you don't include the `modelId` field, or if IoT Central doesn't recognize the model ID, then a message with an unrecognized `deviceId` creates a new _unassociated device_ in IoT Central. An operator can manually migrate the device to the correct device template. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices.md).
+If you don't include the `modelId` field, or if IoT Central doesn't recognize the model ID, then a message with an unrecognized `deviceId` creates a new _unassociated device_ in IoT Central. An operator can manually migrate the device to the correct device template. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices-individually.md).
In [V2 applications](howto-faq.md#how-do-i-get-information-about-my-application), the new device appears on the **Device Explorer > Unassociated devices** page. Select **Associate** and choose a device template to start receiving incoming telemetry from the device.
The device bridge only forwards messages to IoT Central, and doesn't send messag
Now that you've learned how to deploy the IoT Central device bridge, here's the suggested next step: > [!div class="nextstepaction"]
-> [Manage your devices](howto-manage-devices.md)
+> [Manage your devices](howto-manage-devices-individually.md)
iot-central Howto Connect Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-powerbi.md
The [Power BI Solution for Azure IoT Central V3](https://appsource.microsoft.com
## Next steps
-Now that you've learned how to visualize your data in Power BI, the suggested next step is to learn [How to manage devices](howto-manage-devices.md).
+Now that you've learned how to visualize your data in Power BI, the suggested next step is to learn [How to manage devices](howto-manage-devices-individually.md).
iot-central Howto Edit Device Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-edit-device-template.md
You can create multiple versions of the device template. Over time, you'll have
## Next steps
-If you're an operator or solution builder, a suggested next step is to learn [how to manage your devices](./howto-manage-devices.md).
+If you're an operator or solution builder, a suggested next step is to learn [how to manage your devices](./howto-manage-devices-individually.md).
If you're a device developer, a suggested next step is to read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md).
iot-central Howto Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-faq.md
If the device status is **Unassociated**, it means the device connecting to IoT
- A set of devices is added using **Import** on the **Devices** page without specifying the device template. - A device was registered manually on the **Devices** page without specifying the device template. The device then connected with valid credentials.
-The operator can associate a device to a device template from the **Devices** page using the **Migrate** button. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices.md).
+The operator can associate a device to a device template from the **Devices** page using the **Migrate** button. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices-individually.md).
## Where can I learn more about IoT Hub?
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-devices-in-bulk.md
+
+ Title: Create and run jobs in your Azure IoT Central application | Microsoft Docs
+description: Azure IoT Central jobs allow for bulk device management capabilities, such as updating properties or running a command.
++++ Last updated : 07/08/2021+++
+# Manage devices in bulk in your Azure IoT Central application
+
+You can use Azure IoT Central to manage your connected devices at scale through jobs. Jobs let you do bulk updates to device and cloud properties and run commands. You can also use CSV files to import and export devices in bulk. This article shows you how to get started with using jobs in your own application and how to use the import and export features.
+
+## Create and run a job
+
+The following example shows you how to create and run a job to set the light threshold for a group of logistic gateway devices. You use the job wizard to create and run jobs. You can save a job to run later.
+
+1. On the left pane, select **Jobs**.
+
+1. Select **+ New job**.
+
+1. On the **Configure your job** page, enter a name and description to identify the job you're creating.
+
+1. Select the target device group that you want your job to apply to. You can see how many devices your job configuration applies to below your **Device group** selection.
+
+1. Choose **Cloud property**, **Property**, or **Command** as the **Job type**:
+
+ To configure a **Property** job, select a property and set its new value. To configure a **Command** job, choose the command to run. A property job can set multiple properties.
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/configure-job.png" alt-text="Screenshot that shows selections for creating a property job called Set Light Threshold":::
+
+ Select **Save and exit** to add the job to the list of saved jobs on the **Jobs** page. You can later return to a job from the list of saved jobs.
+
+1. Select **Next** to move to the **Delivery Options** page. The **Delivery Options** page lets you set the delivery options for this job: **Batches** and **Cancellation threshold**.
+
+ Batches let you stagger jobs for large numbers of devices. The job is divided into multiple batches and each batch contains a subset of the devices. The batches are queued and run in sequence.
+
+ The cancellation threshold lets you automatically cancel a job if the number of errors exceeds your set limit. The threshold can apply to all the devices in the job, or to individual batches.
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-delivery-options.png" alt-text="Screenshot of job wizard delivery options page":::
+
+1. Select **Next** to move to the **Schedule** page. The **Schedule** page lets you enable a schedule to run the job in the future:
+
+ Choose a recurrence option for the schedule. You can set up a job to run:
+
+ * One-time
+ * Daily
+ * Weekly
+
+ Set a start date and time for a scheduled job. The date and time is specific to your time zone, and not to the device's local time.
+
+ To end a recurring schedule, choose:
+
+ * **On this day** to set an end date for the schedule.
+ * **After** to set the number of times to run the job.
+
+ Scheduled jobs always run on the devices in a device group, even if the device group membership changes over time.
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-schedule.png" alt-text="Screenshot of job wizard schedule options page":::
+
+1. Select **Next** to move to the **Review** page. The **Review** page shows the job configuration details. Select **Schedule** to schedule the job:
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-schedule-review.png" alt-text="Screenshot of scheduled job wizard review page":::
+
+1. The job details page shows information about scheduled jobs. When the scheduled job executes, you see a list of the job instances. The scheduled job execution is also part of the **Last 30-day** job list.
+
+ On this page, you can **Unschedule** the job or **Edit** the scheduled job. You can return to a scheduled job from the list of scheduled jobs.
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/job-schedule-details.png" alt-text="Screenshot of scheduled job details page":::
+
+1. In the job wizard, you can choose to not schedule a job, and run it immediately. The following screenshot shows a job without a schedule that's ready to run immediately. Select **Run** to run the job:
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/job-wizard-schedule-immediate.png" alt-text="Screenshot of job wizard review page":::
+
+1. A job goes through *pending*, *running*, and *completed* phases. The job execution details contain result metrics, duration details, and a device list grid.
+
+ When the job is complete, you can select **Results log** to download a CSV file of your job details, including the devices and their status values. This information can be useful for troubleshooting.
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/download-details.png" alt-text="Screenshot that shows device status":::
+
+1. The job now appears in the **Last 30 days** list on the **Jobs** page. This page shows currently running jobs and the history of any previously run or saved jobs.
+
+ > [!NOTE]
+ > You can view 30 days of history for your previously run jobs.
+
+## Manage jobs
+
+To stop a running job, open it and select **Stop**. The job status changes to reflect that the job is stopped. The **Summary** section shows which devices have completed, have failed, or are still pending.
++
+When a job is in a stopped state, you can select **Continue** to resume running the job. The job status changes to reflect that the job is now running again. The **Summary** section continues to update with the latest progress.
++
+## Copy a job
+
+To copy an existing job, select an executed job. Select **Copy** on the job results page or jobs details page:
++
+A copy of the job configuration opens for you to edit, and **Copy** is appended to the job name.
+
+## View job status
+
+After a job is created, the **Status** column updates with the latest job status message. The following table lists the possible *job status* values:
+
+| Status message | Status meaning |
+| -- | - |
+| Completed | This job ran on all devices. |
+| Failed | This job failed and didn't fully run on devices. |
+| Pending | This job hasn't yet begun running on devices. |
+| Running | This job is currently running on devices. |
+| Stopped | A user has manually stopped this job. |
+| Canceled | This job was canceled because the threshold set on the **Delivery options** page was exceeded. |
+
+The status message is followed by an overview of the devices in the job. The following table lists the possible *device status* values:
+
+| Status message | Status meaning |
+| -- | |
+| Succeeded | The number of devices that the job successfully ran on. |
+| Failed | The number of devices that the job has failed to run on. |
+
+To view the status of the job and all the affected devices, open the job. Next to each device name, you see one of the following status messages:
+
+| Status message | Status meaning |
+| -- | -- |
+| Completed | The job ran on this device. |
+| Failed | The job failed to run on this device. The error message shows more information. |
+| Pending | The job hasn't yet run on this device. |
+
+To download a CSV file that includes the job details and the list of devices and their status values, select **Results log**.
+
+## Filter the device list
+
+You can filter the device list on the **Job details** page by selecting the filter icon. You can filter on the **Device ID** or **Status** field:
++
+## Customize columns in the device list
+
+You can add columns to the device list by selecting the column options icon:
++
+Use the **Column options** dialog box to choose the device list columns. Select the columns that you want to display, select the right arrow, and then select **OK**. To select all the available columns, choose **Select all**. The selected columns appear in the device list.
+
+Selected columns persist across a user session or across user sessions that have access to the application.
+
+## Rerun jobs
+
+You can rerun a job that has failed devices. Select **Rerun on failed**:
++
+Enter a job name and description, and then select **Rerun job**. A new job is submitted to retry the action on failed devices.
+
+> [!NOTE]
+> You can't run more than five jobs at the same time from an Azure IoT Central application.
+>
+> When a job is complete and you delete a device that's in the job's device list, the device entry appears as deleted in the device name. The details link isn't available for the deleted device.
+
+## Import devices
+
+To connect a large number of devices to your application, you can bulk import devices from a CSV file. You can find an example CSV file in the [Azure Samples repository](https://github.com/Azure-Samples/iot-central-docs-samples/tree/master/bulk-upload-devices). The CSV file should include the following column headers:
+
+| Column | Description |
+| - | - |
+| IOTC_DEVICEID | The device ID is a unique identifier this device will use to connect. The device ID can contain letters, numbers, and the `-` character without any spaces. |
+| IOTC_DEVICENAME | Optional. The device name is a friendly name that is displayed throughout the application. If not specified, the device ID is used. |
+
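For example, a minimal import file with two hypothetical devices might look like this:

```csv
IOTC_DEVICEID,IOTC_DEVICENAME
refrigerator-001,Refrigerator 001
refrigerator-002,Refrigerator 002
```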
+To bulk-register devices in your application:
+
+1. Choose **Devices** on the left pane.
+
+1. On the left panel, choose the device template for which you want to bulk create the devices.
+
+ > [!NOTE]
+ > If you don't have a device template yet, you can import devices under **All devices** and register them without a template. After devices have been imported, you can migrate them to a template.
+
+1. Select **Import**.
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/bulk-import-1.png" alt-text="Screenshot showing import action settings.":::
+
+1. Select the CSV file that has the list of Device IDs to be imported.
+
+1. Device import starts once the file has been uploaded. You can track the import status in the Device Operations panel. This panel appears automatically after the import starts or you can access it through the bell icon in the top right-hand corner.
+
+1. Once the import completes, a success message is shown in the Device Operations panel.
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/bulk-import-2.png" alt-text="Screenshot showing import success.":::
+
+If the device import operation fails, you see an error message on the Device Operations panel. A log file capturing all the errors is generated that you can download.
+
+## Export devices
+
+To connect a real device to IoT Central, you need its connection string. You can export device details in bulk to get the information you need to create device connection strings. The export process creates a CSV file with the device identity, device name, and keys for all the selected devices.
+
+To bulk export devices from your application:
+
+1. Choose **Devices** on the left pane.
+
+1. On the left pane, choose the device template from which you want to export the devices.
+
+1. Select the devices that you want to export and then select the **Export** action.
+
+ :::image type="content" source="media/howto-manage-devices-in-bulk/export-1.png" alt-text="Screenshot showing export action settings.":::
+
+1. The export process starts. You can track the status using the Device Operations panel.
+
+1. When the export completes, a success message is shown along with a link to download the generated file.
+
+1. Select the **Download File** link to download the file to a local folder on the disk.
+
+ ![Export Success](./media/howto-manage-devices-in-bulk/export-2.png)
+
+1. The exported CSV file contains the following columns: device ID, device name, device keys, and X509 certificate thumbprints:
+
+ * IOTC_DEVICEID
+ * IOTC_DEVICENAME
+ * IOTC_SASKEY_PRIMARY
+ * IOTC_SASKEY_SECONDARY
+ * IOTC_X509THUMBPRINT_PRIMARY
+ * IOTC_X509THUMBPRINT_SECONDARY
+
+For more information about connection strings and connecting real devices to your IoT Central application, see [Device connectivity in Azure IoT Central](concepts-get-connected.md).
+
+## Next steps
+
+Now that you've learned how to manage devices in bulk in your Azure IoT Central application, a suggested next step is to learn how to [Edit a device template](howto-edit-device-template.md).
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-devices-individually.md
+
+ Title: Manage devices individually in your Azure IoT Central application | Microsoft Docs
+description: Learn how to manage devices individually in your Azure IoT Central application. Create, delete, and update devices.
++ Last updated : 07/08/2021+++++
+# Operator
++
+# Manage individual devices in your Azure IoT Central application
+
+This article describes how you manage devices in your Azure IoT Central application. You can:
+
+- Use the **Devices** page to view, add, and delete devices connected to your Azure IoT Central application.
+- Keep your device metadata up to date by changing the values stored in the device properties from your views.
+- Control the behavior of your devices by updating a setting on a specific device from your views.
+
+To learn how to manage custom groups of devices, see [Tutorial: Use device groups to analyze device telemetry](tutorial-use-device-groups.md).
+
+## View your devices
+
+To view an individual device:
+
+1. Choose **Devices** on the left pane. Here you see a list of all devices and of your device templates.
+
+1. Choose a device template.
+
+1. In the right-hand pane of the **Devices** page, you see a list of devices created from that device template. Choose an individual device to see the device details page for that device:
+
+ :::image type="content" source="media/howto-manage-devices-individually/device-list.png" alt-text="Screenshot showing device list.":::
+
+## Add a device
+
+To add a device to your Azure IoT Central application:
+
+1. Choose **Devices** on the left pane.
+
+1. Choose the device template from which you want to create a device.
+
+1. Choose **+ New**.
+
+1. Turn the **Simulated** toggle to **On** or **Off**. A real device represents a physical device that you connect to your Azure IoT Central application. A simulated device has sample data generated for you by Azure IoT Central.
+
+1. Select **Create**.
+
+1. This device now appears in your device list for this template. Select the device to see the device details page that contains all views for the device.
+
+## Migrate devices to a template
+
+If you register devices by starting the import under **All devices**, the devices are created without any device template association. A device must be associated with a template before you can explore its data and other details. Follow these steps to associate devices with a template:
+
+1. Choose **Devices** on the left pane.
+
+1. On the left pane, choose **All devices**:
+
+ :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-1.png" alt-text="Screenshot showing unassociated devices.":::
+
+1. Use the filter on the grid to determine if the value in the **Device Template** column is **Unassociated** for any of your devices.
+
+1. Select the devices you want to associate with a template:
+
+1. Select **Migrate**:
+
+ :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-2.png" alt-text="Screenshot showing how to associate a device.":::
+
+1. Choose the template from the list of available templates and select **Migrate**.
+
+1. The selected devices are associated with the device template you chose.
+
+## Delete a device
+
+To delete either a real or simulated device from your Azure IoT Central application:
+
+1. Choose **Devices** on the left pane.
+
+1. Choose the device template of the device you want to delete.
+
+1. Use the filter tools to filter and search for your devices. Check the box next to the devices to delete.
+
+1. Choose **Delete**. You can track the status of this deletion in your Device Operations panel.
+
+## Change a property
+
+Cloud properties are the device metadata associated with the device, such as city and serial number. Cloud properties only exist in the IoT Central application and aren't synchronized to your devices. Writable properties control the behavior of a device and let you set the state of a device remotely, for example by setting the target temperature of a thermostat device. Device properties are set by the device and are read-only within IoT Central. You can view and update properties on the **Device Details** views for your device.
+
+1. Choose **Devices** on the left pane.
+
+1. Choose the device template of the device whose properties you want to change and select the target device.
+
+1. Choose the view that contains properties for your device. This view enables you to enter values and select **Save** at the top of the page. Here you see the properties your device has and their current values. Cloud properties and writable properties have editable fields, while device properties are read-only. For writable properties, you can see their sync status at the bottom of the field.
+
+1. Modify the properties to the values you need. You can modify multiple properties and update them all at the same time.
+
+1. Choose **Save**. If you saved writable properties, the values are sent to your device. When the device confirms the change for the writable property, the status returns to **synced**. If you saved a cloud property, the value is updated.
+
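+As background on what "confirms the change" means on the device side, the following minimal sketch uses the `azure-iot-device` Python SDK; the property name, the acknowledgment payload shape, and the connection string are assumptions for illustration, not part of this article's procedure. It shows a device receiving a writable property and reporting the value back so the status can return to **synced**:
+
+```python
+from azure.iot.device import IoTHubDeviceClient
+
+# Placeholder connection string for the device.
+client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
+client.connect()
+
+def on_desired_patch(patch):
+    # Hypothetical writable property named 'targetTemperature'.
+    if "targetTemperature" in patch:
+        value = patch["targetTemperature"]
+        # Report the value back with an acknowledgment-style payload so the
+        # application can show the property as synced.
+        client.patch_twin_reported_properties({
+            "targetTemperature": {
+                "value": value,
+                "ac": 200,                      # status code
+                "av": patch.get("$version", 1)  # version being acknowledged
+            }
+        })
+
+client.on_twin_desired_properties_patch_received = on_desired_patch
+```
+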
+## Next steps
+
+Now that you've learned how to manage devices individually, the suggested next step is to learn how to [Manage devices in bulk in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
The following table describes the fields in the previous JSON snippet:
| `displayName` | The display name for the job in your application. |
| `description` | A description of the job. |
| `group` | The ID of the device group that the job applies to. Use the `deviceGroups` preview REST API to get a list of the device groups in your application. |
-| `status` | The [status](howto-run-a-job.md#view-job-status) of the job. One of `complete`, `cancelled`, `failed`, `pending`, `running`, `stopped`. |
-| `batch` | If present, this section defines how to [batch](howto-run-a-job.md#create-and-run-a-job) the devices in the job. |
+| `status` | The [status](howto-manage-devices-in-bulk.md#view-job-status) of the job. One of `complete`, `cancelled`, `failed`, `pending`, `running`, `stopped`. |
+| `batch` | If present, this section defines how to [batch](howto-manage-devices-in-bulk.md#create-and-run-a-job) the devices in the job. |
| `batch/type` | The size of each batch is either a `percentage` of the total devices in the group or a `number` of devices. |
| `batch/value` | Either the percentage of devices or the number of devices in each batch. |
-| `cancellationThreshold` | If present, this section defines the [cancellation threshold](howto-run-a-job.md#create-and-run-a-job) for the job. |
+| `cancellationThreshold` | If present, this section defines the [cancellation threshold](howto-manage-devices-in-bulk.md#create-and-run-a-job) for the job. |
| `cancellationThreshold/batch` | `true` or `false`. If `true`, the cancellation threshold is set for each batch. If `false`, the cancellation threshold applies to the whole job. |
| `cancellationThreshold/type` | The cancellation threshold for the job is either a `percentage` or a `number` of devices. |
| `cancellationThreshold/value` | Either the percentage of devices or the number of devices that define the cancellation threshold. |
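
For illustration only, a partial job body that uses just the fields described in this table might look like the following sketch. All values are placeholders, and the real payload in the preceding snippet can contain additional fields not covered here:

```json
{
  "displayName": "Set customer name",
  "description": "Update the customer name cloud property on all thermostats",
  "group": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "batch": {
    "type": "percentage",
    "value": 25
  },
  "cancellationThreshold": {
    "batch": false,
    "type": "number",
    "value": 10
  }
}
```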
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-operator.md
For more detailed information, an operator can use device groups and the built-i
To manage individual devices, an operator can use device views to set device and cloud properties, and call device commands. Examples include the **Manage device** and **Commands** views in the previous screenshot.
-To manage devices in bulk, an operator can create and schedule jobs. Jobs can update properties and run commands on multiple devices. To learn more, see [Create and run a job in your Azure IoT Central application](howto-run-a-job.md).
+To manage devices in bulk, an operator can create and schedule jobs. Jobs can update properties and run commands on multiple devices. To learn more, see [Create and run a job in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
## Troubleshoot and remediate issues
The operator is responsible for the health of the application and its devices. T
## Add and remove devices
-The operator can add and remove devices to your IoT Central application either individually or in bulk. To learn more, see [Manage devices in your Azure IoT Central application](howto-manage-devices.md).
+The operator can add and remove devices to your IoT Central application either individually or in bulk. To learn more, see [Manage devices in your Azure IoT Central application](howto-manage-devices-individually.md).
## Personalize
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-tour.md
Analytics exposes rich capabilities to analyze historical trends and correlate v
:::image type="content" source="Media/overview-iot-central-tour/jobs.png" alt-text="Jobs Page":::
-This page lets you view and create jobs that can be used for bulk device management operations on your devices. You can update device properties, settings, and execute commands against device groups. To learn more, see the [Run a job](howto-run-a-job.md) article.
+This page lets you view and create jobs that can be used for bulk device management operations on your devices. You can update device properties, settings, and execute commands against device groups. To learn more, see the [Run a job](howto-manage-devices-in-bulk.md) article.
### Device templates
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
This article outlines, for IoT Central:
The IoT Central documentation refers to four user roles that interact with an IoT Central application: - A _solution builder_ is responsible for [creating an application](quick-deploy-iot-central.md), [configuring rules and actions](quick-configure-rules.md), [defining integrations with other services](quick-export-data.md), and further customizing the application for operators and device developers.-- An _operator_ [manages the devices](howto-manage-devices.md) connected to the application.
+- An _operator_ [manages the devices](howto-manage-devices-individually.md) connected to the application.
- An _administrator_ is responsible for administrative tasks such as managing [user roles and permissions](howto-administer.md) within the application. - A _device developer_ [creates the code that runs on a device](concepts-telemetry-properties-commands.md) or [IoT Edge module](concepts-iot-edge.md) connected to your application.
You can also customize the IoT Central application UI for the operators who are
## Manage your devices
-As an operator, you use the IoT Central application to [manage the devices](howto-manage-devices.md) in your IoT Central solution. Operators do tasks such as:
+As an operator, you use the IoT Central application to [manage the devices](howto-manage-devices-individually.md) in your IoT Central solution. Operators do tasks such as:
- Monitoring the devices connected to the application. - Troubleshooting and remediating issues with devices.
Build [custom rules](tutorial-create-telemetry-rules.md) based on device state a
### Jobs
-[Jobs](howto-run-a-job.md) let you apply single or bulk updates to devices by setting properties or calling commands.
+[Jobs](howto-manage-devices-in-bulk.md) let you apply single or bulk updates to devices by setting properties or calling commands.
## Integrate with other services
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/troubleshoot-connection.md
If you prefer to use a GUI, use the IoT Central **Raw data** view to see if some
When you've detected the issue, you may need to update device firmware, or create a new device template that models previously unmodeled data.
-If you chose to create a new template that models the data correctly, migrate devices from your old template to the new template. To learn more, see [Manage devices in your Azure IoT Central application](howto-manage-devices.md).
+If you chose to create a new template that models the data correctly, migrate devices from your old template to the new template. To learn more, see [Manage devices in your Azure IoT Central application](howto-manage-devices-individually.md).
## Next steps
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-consumption-monitoring.md
In Azure IoT Central, you can create simulated devices to test your device templ
Add new devices by selecting **+ New** on the **Devices** tab.
-To learn more, see [Manage devices](../core/howto-manage-devices.md).
+To learn more, see [Manage devices](../core/howto-manage-devices-individually.md).
## Explore rules
In Azure IoT Central, jobs allow you to trigger device or cloud property updates
1. Select **Jobs** on the left pane. 1. Select **+ New**, and configure one or more jobs.
-To learn more, see [How to run a job](../core/howto-run-a-job.md).
+To learn more, see [How to run a job](../core/howto-manage-devices-in-bulk.md).
## Customize your application
iot-edge How To Auto Provision X509 Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-x509-certs.md
You can use either PowerShell or Windows Admin Center to provision your IoT Edge
For PowerShell, run the following command with the placeholder values updated with your own values: ```powershell
-Provision-EflowVm -provisioningType DPSx509 -ΓÇïscopeId <ID_SCOPE_HERE> -registrationId <REGISTRATION_ID_HERE> -identityCertLocWin <ABSOLUTE_CERT_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityPkLocWin <ABSOLUTE_PRIVATE_KEY_SOURCE_PATH_ON_WINDOWS_MACHINE> -identityCertLocVm <ABSOLUTE_CERT_DEST_PATH_ON_LINUX_MACHINE -identityPkLocVm <ABSOLUTE_PRIVATE_KEY_DEST_PATH_ON_LINUX_MACHINE>
+Provision-EflowVm -provisioningType DPSX509 -scopeId <ID_SCOPE_HERE> -identityCertPath <ABSOLUTE_CERT_DEST_PATH_ON_WINDOWS_HOST> -identityPrivKeyPath <ABSOLUTE_PRIVATE_KEY_DEST_PATH_ON_WINDOWS_HOST>
``` ### Windows Admin Center
iot-edge How To Install Iot Edge On Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-on-windows.md
Install IoT Edge for Linux on Windows onto your target device if you have not al
```powershell $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi')) $ProgressPreference = 'SilentlyContinue'
- ΓÇïInvoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath
+ Invoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath
``` 1. Install IoT Edge for Linux on Windows on your device.
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
If you don't have the **AzureEflow** folder in your PowerShell directory, use th
```powershell $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi')) $ProgressPreference = 'SilentlyContinue'
- ΓÇïInvoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath
+ Invoke-WebRequest "https://aka.ms/AzEflowMSI" -OutFile $msiPath
``` 1. Install IoT Edge for Linux on Windows on your device.
For more information, use the command `Get-Help Get-EflowVm -full`.
## Get-EflowVmAddr
-The **Get-EflowVmAddr** command is used to query the virtual machine's current IP and MAC address. This command exists to account for the fact that the IP and MAC address can change over time.
+The **Get-EflowVmAddr** command is used to query the virtual machine's current IP and MAC address. This command exists to account for the fact that the IP and MAC address can change over time.
For additional information, use the command `Get-Help Get-EflowVmAddr -full`. - ## Get-EflowVmFeature The **Get-EflowVmFeature** command returns the status of the enablement of IoT Edge for Linux on Windows features.
The **Get-EflowVmFeature** command returns the status of the enablement of IoT E
For more information, use the command `Get-Help Get-EflowVmFeature -full`. - ## Get-EflowVmName The **Get-EflowVmName** command returns the virtual machine's current hostname. This command exists to account for the fact that the Windows hostname can change over time.
The **Provision-EflowVm** command adds the provisioning information for your IoT
| scopeId | The scope ID for an existing DPS instance. | Scope ID for provisioning an IoT Edge device (**DpsTPM**, **DpsX509**, or **DpsSymmetricKey**). |
| symmKey | The primary key for an existing DPS enrollment or the primary key of an existing IoT Edge device registered using symmetric keys. | Symmetric key for provisioning an IoT Edge device (**DpsSymmetricKey**). |
| registrationId | The registration ID of an existing IoT Edge device. | Registration ID for provisioning an IoT Edge device (**DpsSymmetricKey**). |
-| identityCertPath | Directory path; must be in a folder that can be owned by the `iotedge` service | Absolute destination path of the identity certificate on your virtual machine for provisioning an IoT Edge device (**ManualX509**, **DpsX509**). |
-| identityPrivKeyPath | Directory path | Absolute source path of the identity private key on your virtual machine for provisioning an IoT Edge device (**ManualX509**, **DpsX509**). |
+| identityCertPath | Directory path | Absolute destination path of the identity certificate on your Windows host machine (**ManualX509**, **DpsX509**). |
+| identityPrivKeyPath | Directory path | Absolute source path of the identity private key on your Windows host machine (**ManualX509**, **DpsX509**). |
For more information, use the command `Get-Help Provision-EflowVm -full`.
iot-fundamentals Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-glossary.md
Azure IoT solution accelerators package together multiple Azure services into so
In [IoT Hub](#iot-hub), [jobs](../iot-hub/iot-hub-devguide-jobs.md) let you schedule and track activities on a set of devices registered with your IoT hub. Activities include updating device twin [desired properties](#desired-properties), updating device twin [tags](#tags), and invoking [direct methods](#direct-method). IoT Hub also uses jobs to [import to and export](../iot-hub/iot-hub-devguide-identity-registry.md#import-and-export-device-identities) from the [identity registry](#identity-registry).
-In IoT Central, [jobs](../iot-central/core/howto-run-a-job.md) let you manage your connected devices in bulk by setting properties and calling commands. IoT Central jobs also let you update [cloud properties](#cloud-property) in bulk.
+In IoT Central, [jobs](../iot-central/core/howto-manage-devices-in-bulk.md) let you manage your connected devices in bulk by setting properties and calling commands. IoT Central jobs also let you update [cloud properties](#cloud-property) in bulk.
## L
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-agent-provisioning.md
Follow these instructions to provision the Device Update agent on your IoT Linux
1. Install the IoT Identity Service and add the latest version to your IoT device. 1. Log onto the machine or IoT device. 1. Open a terminal window.
- 1. Install the latest [IoT Identity Service](https://github.com/Azure/iot-identity-service/blob/main/docs/packaging.md#installing-and-configuring-the-package) on your IoT device using this command:
+ 1. Install the latest [IoT Identity Service](https://github.com/Azure/iot-identity-service/blob/main/docs-dev/packaging.md#installing-and-configuring-the-package) on your IoT device using this command:
> [!Note] > The IoT Identity service registers module identities with IoT Hub by using symmetric keys currently.
iot-hub Iot Hub Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-customer-managed-keys.md
Title: Azure IoT Hub data encryption at rest via customer-managed keys| Microsoft Docs
-description: Encryption of data at rest with customer-managed keys for IoT Hub
+ Title: Encryption of Azure IoT Hub data at rest using customer-managed keys| Microsoft Docs
+description: Encryption of Azure IoT Hub data at rest using customer-managed keys
- Previously updated : 06/17/2020 Last updated : 07/07/2021
-# Encryption of data at rest with customer-managed keys for IoT Hub
+# Encryption of Azure IoT Hub data at rest using customer-managed keys
-IoT Hub supports encryption of data at rest with customer-managed keys (CMK), also known as Bring your own key (BYOK). Azure IoT Hub provides encryption of data at rest and in-transit as it's written in our datacenters and decrypts it for you as you access it. By default, IoT Hub uses Microsoft-managed keys to encrypt the data at rest. With CMK, you can get another layer of encryption on top of default encryption and can choose to encrypt data at rest with a key encryption key, managed through your [Azure Key Vault](https://azure.microsoft.com/services/key-vault/). This gives you the flexibility to create, rotate, disable, and revoke access controls. If BYOK is configured for your IoT Hub, we also provide double encryption, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
+IoT Hub supports encryption of data at rest using customer-managed keys (CMK), also known as bring your own key (BYOK). Azure IoT Hub provides encryption of data at rest and in transit as it's written in our datacenters; the data is encrypted when written and decrypted when read.
-This capability requires the creation of a new IoT Hub (basic or standard tier). To try this capability, contact us through [Microsoft support](https://azure.microsoft.com/support/create-ticket/). Share your company name and subscription ID when contacting Microsoft support.
+By default, IoT Hub uses Microsoft-managed keys to encrypt the data. With CMK, you can get another layer of encryption on top of default encryption and can choose to encrypt data at rest with a key encryption key, managed through your [Azure Key Vault](https://azure.microsoft.com/services/key-vault/). This gives you the flexibility to create, rotate, disable, and revoke access controls. If BYOK is configured for your IoT Hub, we also provide double encryption, which offers a second layer of protection, while still allowing you to control the encryption key through your Azure Key Vault.
+This capability requires the creation of a new IoT Hub (basic or standard tier). To try this capability, contact us through [Microsoft support](https://azure.microsoft.com/support/create-ticket/). Share your company name and subscription ID when contacting Microsoft support.
## Next steps
-* [Learn more about IoT Hub](./about-iot-hub.md)
+* [What is IoT Hub?](./about-iot-hub.md)
-* [Learn more about Azure Key Vault](../key-vault/general/overview.md)
+* [Learn more about Azure Key Vault](../key-vault/general/overview.md)
iot-hub Iot Hub Device Management Iot Extension Azure Cli 2 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-management-iot-extension-azure-cli-2-0.md
keywords: azure iot device management, azure iot hub device management, device m
Last updated 01/16/2018
iot-hub Iot Hub Live Data Visualization In Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-live-data-visualization-in-web-apps.md
Last updated 05/31/2019
iot-hub Iot Hub Monitoring Notifications With Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md
keywords: iot monitoring, iot notifications, iot temperature monitoring
Last updated 07/18/2019 #I think this is out of date. I changed 'click' to select. --RobinShahan
iot-hub Iot Hub Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-public-network-access.md
Previously updated : 03/22/2021 Last updated : 07/07/2021 # Managing public network access for your IoT hub
-To restrict access to only [private endpoint for your IoT hub in your VNet](virtual-network-support.md), disable public network access. To do so, use the Azure portal or the `publicNetworkAccess` API. You can also allow public access by using the portal or the `publicNetworkAccess` API.
+To restrict access to only [a private endpoint for an IoT hub in your VNet](virtual-network-support.md), disable public network access. To do so, use the Azure portal or the `publicNetworkAccess` API. You can also allow public access by using the portal or the `publicNetworkAccess` API.
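+
+As a sketch of the API route (assuming the Azure CLI is installed and that the hub exposes the setting under `properties.publicNetworkAccess`, as the property name above suggests), you could disable public access with a generic resource update. The resource group and hub names are placeholders:
+
+```azurecli
+# Hypothetical names; substitute your own resource group and IoT hub.
+az resource update \
+  --resource-group MyResourceGroup \
+  --name MyIoTHub \
+  --resource-type "Microsoft.Devices/IotHubs" \
+  --set properties.publicNetworkAccess="Disabled"
+```
+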
-## Turn off public network access using Azure portal
+## Turn off public network access using the Azure portal
-1. Visit the [Azure portal](https://portal.azure.com)
+1. Go to the [Azure portal](https://portal.azure.com)
2. Navigate to your IoT hub. Go to **Resource Groups**, choose the appropriate group, and select your IoT Hub.
3. Select **Networking** from the left-side menu.
4. Under "Allow public network access to", select **Disabled**.
To restrict access to only [private endpoint for your IoT hub in your VNet](virt
To turn on public network access, select **All networks**, then **Save**.
-### Accessing the IoT Hub after disabling public network access
+### Accessing the IoT Hub after disabling the public network access
-After public network access is disabled, the IoT Hub is only accessible through [its VNet private endpoint using Azure private link](virtual-network-support.md). This restriction includes accessing through Azure portal, because API calls to the IoT Hub service are made directly using your browser with your credentials.
+After public network access is disabled, the IoT Hub is only accessible through [its VNet private endpoint using Azure private link](virtual-network-support.md). This restriction includes accessing through the Azure portal, because API calls to the IoT Hub service are made directly using your browser with your credentials.
### IoT Hub endpoint, IP address, and ports after disabling public network access
-IoT Hub is a multi-tenant Platform-as-a-Service (PaaS), so different customers share the same pool of compute, networking, and storage hardware resources. IoT Hub's hostnames map to a public endpoint with a publicly routable IP address over the internet. Different customers share this IoT Hub public endpoint, and IoT devices in over wide-area networks and on-premises networks can all access it.
+IoT Hub is a multi-tenant Platform-as-a-Service (PaaS), so different customers share the same pool of compute, networking, and storage hardware resources. IoT Hub's hostnames map to a public endpoint with a publicly routable IP address over the internet. Different customers share this IoT Hub public endpoint, and IoT devices in wide-area networks and on-premises networks can all access it.
Disabling public network access is enforced on a specific IoT hub resource, ensuring isolation. To keep the service active for other customer resources using the public path, its public endpoint remains resolvable, IP addresses discoverable, and ports remain open. This is not a cause for concern as Microsoft integrates multiple layers of security to ensure complete isolation between tenants. To learn more, see [Isolation in the Azure Public Cloud](../security/fundamentals/isolation-choices.md#tenant-level-isolation).
There is a bug with IoT Hub where the [built-in Event Hub compatible endpoint](i
## Turn on network access using Azure portal
-1. Visit the [Azure portal](https://portal.azure.com)
+1. Go to the [Azure portal](https://portal.azure.com).
2. Navigate to your IoT hub. Go to **Resource Groups**, choose the appropriate group, and select your hub.
3. Select **Networking** from the left-side menu.
4. Under "Allow public network access to", select **Selected IP Ranges**.
5. In the **IP Filter** dialog that opens, select **Add your client IP address** and enter a name and an address range.
6. Select **Save**. If the button is greyed out, make sure your client IP address is already added as an IP filter.

### Turn on all network ranges
If you have trouble accessing your IoT hub, your network configuration could be
Unable to retrieve devices. Please ensure that your network connection is online and network settings allow connections from your IP address. ```
-To get access to the IoT hub, request permission from your IT administrator to add your IP address in the IP address range or to enable public network access to all networks. If that fails to resolve the issue, check your local network settings or contact your local network administrator to fix connectivity to IoT Hub. For example, sometimes a proxy in the local network can interfere with access to IoT Hub.
+To get access to the IoT hub, request permission from your IT administrator to add your IP address in the IP address range or to enable public network access to all networks. If that fails to resolve the issue, check your local network settings or contact your local network administrator to fix connectivity to the IoT Hub. For example, sometimes a proxy in the local network can interfere with access to IoT Hub.
If the preceding commands do not work or you cannot turn on all network ranges, contact Microsoft support.
iot-hub Iot Hub Troubleshoot Error 404001 Devicenotfound https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-troubleshoot-error-404001-devicenotfound.md
Title: Troubleshooting Azure IoT Hub error 404001 DeviceNotFound description: Understand how to fix error 404001 DeviceNotFound - Previously updated : 01/30/2020 Last updated : 07/07/2021 #Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 404001 DeviceNotFound errors.
iot-hub Iot Hub Troubleshoot Error 409002 Linkcreationconflict https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-troubleshoot-error-409002-linkcreationconflict.md
Title: Troubleshooting Azure IoT Hub error 409002 LinkCreationConflict description: Understand how to fix error 409002 LinkCreationConflict - Previously updated : 01/30/2020 Last updated : 07/07/2021 #Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 409002 LinkCreationConflict errors.
iot-hub Iot Hub Vscode Iot Toolkit Cloud Device Messaging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-vscode-iot-toolkit-cloud-device-messaging.md
Last updated 01/18/2019
iot-hub Iot Hub Weather Forecast Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-weather-forecast-machine-learning.md
keywords: weather forecast machine learning
Last updated 09/16/2020
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/tutorial-rotation-dual.md
Add secret to key vault with expiration date set to tomorrow, validity period fo
# [Azure CLI](#tab/azure-cli) ```azurecli
-$tomorrowDate = (Get-Date).AddDays(+1).ToString('yyy-MM-ddTHH:mm:ssZ')
+$tomorrowDate = (Get-Date).AddDays(+1).ToString('yyyy-MM-ddTHH:mm:ssZ')
az keyvault secret set --name storageKey2 --vault-name vaultrotation-kv --value <key2Value> --tags "CredentialId=key2" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate ``` # [Azure PowerShell](#tab/azurepowershell) ```azurepowershell
-$tomorrowDate = (get-date).AddDays(+1).ToString("yyy-MM-ddTHH:mm:ssZ")
+$tomorrowDate = (get-date).AddDays(+1).ToString("yyyy-MM-ddTHH:mm:ssZ")
$secretVaule = ConvertTo-SecureString -String '<key1Value>' -AsPlainText -Force $tags = @{ CredentialId='key2';
load-balancer Load Balancer Multivip Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-multivip-overview.md
The complete mapping in Azure Load Balancer is now as follows:
Each rule must produce a flow with a unique combination of destination IP address and destination port. By varying the destination port of the flow, multiple rules can deliver flows to the same DIP on different ports.
-Health probes are always directed to the DIP of a VM. You must ensure you that your probe reflects the health of the VM.
+Health probes are always directed to the DIP of a VM. You must ensure that your probe reflects the health of the VM.
## Rule type #2: backend port reuse by using Floating IP
The Floating IP rule type is the foundation of several load balancer configurati
## Next steps -- Review [Outbound connections](load-balancer-outbound-connections.md) to understand the impact of multiple frontends on outbound connection behavior.
+- Review [Outbound connections](load-balancer-outbound-connections.md) to understand the impact of multiple frontends on outbound connection behavior.
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-model-management-and-deployment.md
Previously updated : 03/17/2020 Last updated : 07/08/2021
machine-learning How To Create Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-labeling-projects.md
Learn how to create and run projects to label images or label text data in Azure
> [!Important] > Data images or text must be available in an Azure blob datastore. (If you do not have an existing datastore, you may upload files during project creation.)
-Image data can be files with any of these types: ".jpg", ".jpeg", ".png", ".jpe", ".jfif", ".bmp", ".tif", ".tiff". Each file is an item to be labeled.
+Image data can be files with any of these types: ".jpg", ".jpeg", ".png", ".jpe", ".jfif", ".bmp", ".tif", ".tiff", ".dcm", ".dicom". Each file is an item to be labeled.
+
Text data can be either ".txt" or ".csv" files. * For ".txt" files, each file represents one item to be labeled.
To create a project, select **Add project**. Give the project an appropriate nam
* Choose **Object Identification (Bounding Box)** for projects when you want to assign a label and a bounding box to each object within an image. * Choose **Instance Segmentation (Polygon)** for projects when you want to assign a label and draw a polygon around each object within an image.
-
* Select **Next** when you're ready to continue. ### Text labeling project (preview)
For bounding boxes, important questions include:
## Use ML-assisted data labeling
-The **ML-assisted labeling** page lets you trigger automatic machine learning models to accelerate labeling tasks. It is only available for image labeling.
+The **ML-assisted labeling** page lets you trigger automatic machine learning models to accelerate labeling tasks. It is only available for image labeling. Medical images (".dcm") are not included in assisted labeling.
At the beginning of your labeling project, the items are shuffled into a random order to reduce potential bias. However, any biases that are present in the dataset will be reflected in the trained model. For example, if 80% of your items are of a single class, then approximately 80% of the data used to train the model will be of that class. This training does not include active learning.
The exact number of labeled data necessary to start assisted labeling is not a f
Since the final labels still rely on input from the labeler, this technology is sometimes called *human in the loop* labeling. > [!NOTE]
-> ML assisted data labelling does not support default storage accounts secured behind a [virtual network](how-to-network-security-overview.md). You must use a non-default storage account for ML assisted data labelling. The non-default storage account can be secured behind the virtual network.
+> ML-assisted data labeling does not support default storage accounts secured behind a [virtual network](how-to-network-security-overview.md). You must use a non-default storage account for ML-assisted data labeling. The non-default storage account can be secured behind the virtual network.
### Clustering
machine-learning How To Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-data-collection.md
To enable data collection, you need to:
data = np.array(data) result = model.predict(data) inputs_dc.collect(data) #this call is saving our input data into Azure Blob
- prediction_dc.collect(result) #this call is saving our input data into Azure Blob
+ prediction_dc.collect(result) #this call is saving our prediction data into Azure Blob
``` 1. Data collection is *not* automatically set to **true** when you deploy a service in AKS. Update your configuration file, as in the following example:
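+
+   A minimal sketch of such a configuration, assuming the Azure Machine Learning Python SDK's `AksWebservice.deploy_configuration` and its `collect_model_data` setting, might look like this:
+
+   ```python
+   from azureml.core.webservice import AksWebservice
+
+   # Sketch only: enable model data collection (and optionally Application Insights)
+   # when defining the AKS deployment configuration.
+   aks_config = AksWebservice.deploy_configuration(
+       collect_model_data=True,
+       enable_app_insights=True
+   )
+   ```
+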
You can choose a tool of your preference to analyze the data collected in your B
## Next steps
-[Detect data drift](how-to-monitor-datasets.md) on the data you have collected.
+[Detect data drift](how-to-monitor-datasets.md) on the data you have collected.
machine-learning How To Label Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-label-data.md
Azure enables the **Submit** button when you've tagged all the images on the pag
After you submit tags for the data at hand, Azure refreshes the page with a new set of images from the work queue.
+## Medical image tasks
+
+Image projects support DICOM image format for X-ray file images. These images can be used to train machine learning models for clinical use.
++
+> [!IMPORTANT]
+> The capability to label DICOM or similar image types is not intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability is not designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Data Labeling for DICOM or similar image types.
+
+While you label the medical images with the same tools as any other images, there is an additional tool for DICOM images. Select the **Window and level** tool to change the intensity of the image. This tool is available only for DICOM images.
++
## Tag images for multi-class classification

If your project is of type "Image Classification Multi-Class," you'll assign a single tag to the entire image. To review the directions at any time, go to the **Instructions** page and select **View detailed instructions**.
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
To use either a [managed Azure Machine Learning __compute target__](concept-comp
> * Virtual network service endpoint policies do not work for compute cluster/instance system storage accounts > * If storage and compute instance are in different regions you might see intermittent timeouts
-
-> [!TIP]
-> The Machine Learning compute instance or cluster automatically allocates additional networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
->
-> * One network security group
-> * One public IP address. If you have Azure policy prohibiting Public IP creation then deployment of cluster/instances will fail
-> * One load balancer
->
-> In the case of clusters these resources are deleted (and recreated) every time the cluster scales down to 0 nodes, however for an instance the resources are held onto till the instance is completely deleted (stopping does not remove the resources).
+### Dynamically allocated resources
+
+The Machine Learning compute instance or cluster automatically allocates additional networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
+
+* One network security group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
+
+ * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag.
+ * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
+
+ The following screenshot shows an example of these rules:
+
+ :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG":::
+
+* One public IP address. If you have an Azure policy prohibiting public IP creation, then deployment of the cluster/instance will fail.
+* One load balancer
+
+For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
+
+For a compute instance, these resources are kept until the instance is deleted. Stopping the instance does not remove the resources.
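+
+To inspect the rules that were created for you, one option (a sketch, assuming the Azure CLI and placeholder names for the virtual network's resource group and the generated NSG) is to list them:
+
+```azurecli
+# Hypothetical names; use the resource group that contains your virtual network
+# and the NSG created for the compute cluster or instance.
+az network nsg rule list \
+  --resource-group MyVnetResourceGroup \
+  --nsg-name MyComputeNsg \
+  --output table
+```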
+
+> [!IMPORTANT]
> These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked, then deletion of the compute cluster/instance will fail. The load balancer cannot be deleted until the compute cluster/instance is deleted. Also ensure that there is no Azure policy that prohibits the creation of network security groups.

### Create a compute cluster in a virtual network
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-environments.md
Previously updated : 07/23/2020 Last updated : 07/08/2021
This [example notebook](https://github.com/Azure/MachineLearningNotebooks/tree/m
## Create and manage environments with the Azure CLI + The [Azure Machine Learning CLI](reference-azure-machine-learning-cli.md) mirrors most of the functionality of the Python SDK. You can use it to create and manage environments. The commands that we discuss in this section demonstrate fundamental functionality. The following command scaffolds the files for a default environment definition in the specified directory. These files are JSON files. They work like the corresponding class in the SDK. You can use the files to create new environments that have custom settings.
Download a registered environment by using the following command.
az ml environment download -n myenv -d downloaddir ```
+## Create and manage environments with Visual Studio Code
+
+Using the Azure Machine Learning extension, you can create and manage environments in Visual Studio Code. For more information, see [manage Azure Machine Learning resources with the VS Code extension](how-to-manage-resources-vscode.md#environments).
+ ## Next steps * To use a managed compute target to train a model, see [Tutorial: Train a model](tutorial-train-models-with-aml.md).
managed-instance-apache-cassandra Dual Write Proxy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/dual-write-proxy-migration.md
DFfromSourceCassandra
> [!NOTE] > In the preceding Scala sample, you'll notice that `timestamp` is being set to the current time before reading all the data in the source table. Then, `writetime` is being set to this backdated time stamp. This ensures that records that are written from the historical data load to the target endpoint can't overwrite updates that come in with a later time stamp from the dual-write proxy while historical data is being read. >
-> If you need to preserve *exact* time stamps for any reason, you should take a historical data migration approach that preserves time stamps, such as [this sample](https://github.com/scylladb/scylla-migrator).
+> If you need to preserve *exact* time stamps for any reason, you should take a historical data migration approach that preserves time stamps, such as [this sample](https://github.com/Azure-Samples/cassandra-migrator).
## Validate the source and target
marketplace Customer Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/customer-dashboard.md
This article provides information on the Customers dashboard in Partner Center. This dashboard displays information about your customers, including growth trends, presented in a graphical and downloadable format.
-To access the Customers dashboard in Partner Center, under **Commercial Marketplace** select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Customers**.
- >[!NOTE] > For detailed definitions of analytics terminology, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md).
The [Customers dashboard](https://go.microsoft.com/fwlink/?linkid=2166011) displ
The following sections describe how to use the Customers dashboard and how to read the data.
+To access the Customers dashboard in Partner Center, under **Commercial Marketplace** select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Customers**.
+ ### Month range You can find a month range selection at the top-right corner of each page. Customize the output of the **Customers** page graphs by selecting a month range based on the past 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
marketplace Downloads Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/downloads-dashboard.md
This article provides information on the Downloads dashboard in Partner Center. This dashboard displays a list of your download requests over the last 30 days.
-To access the Downloads dashboard, open the **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** dashboard under the commercial marketplace.
- >[!NOTE] > For detailed definitions of analytics terminology, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.md).
You will receive a pop-up notification containing a link to the **Downloads** da
## Lifetime export of commercial marketplace Analytics reports
+To access the Downloads dashboard, open the **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** dashboard under the commercial marketplace.
+ On the Downloads page, end user can do the following: - Lifetime export of commercial marketplace Analytics reports in csv and tsv format.
marketplace Insights Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/insights-dashboard.md
This article provides information on the Marketplace Insights dashboard in Partner Center. This dashboard displays a summary of commercial marketplace web analytics that enables publishers to measure customer engagement for their respective product detail pages listed in the commercial marketplace online stores: Microsoft AppSource and Azure Marketplace.
-To access the **Marketplace Insights** dashboard in Partner Center, under Commercial Marketplace, select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Marketplace Insights**.
- For detailed definitions of analytics terminology, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md). ## Marketplace Insights dashboard
The Marketplace Insights dashboard provides clickstream data, which shouldn't be
The Marketplace Insights dashboard displays web telemetry details for Azure Marketplace and AppSource in two separate tabs. The following sections describe how to use the Marketplace Insights dashboard and how to read the data.
+To access the **Marketplace Insights** dashboard in Partner Center, under Commercial Marketplace, select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Marketplace Insights**.
+ ### Month range You can find a month range selection at the top-right corner of each page. Customize the output of the **Marketplace Insights** page graphs by selecting a month range based on the past 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
marketplace Marketplace Criteria Content Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-criteria-content-validation.md
This article explains the requirements and guidelines for listing new offers and
| No. | Listing element | Base requirement | Optimal requirement | |: |: |: |: |
-| 1 | Lead destination | Has a lead destination configured.| The One Commercial Partner (OCP) Catalog has the lead destination CRM information that's also listed in the partner solution tab. |
-| 2 | Offer title | Briefly describes the solution offering. Matches the online promotion of the solution on the partner's website. | Contains key search words. |
-| 3 | Logo | The logo displays correctly. | The logo displays correctly. |
-| 4 | Offer description | <ul><li> Contains 2-3 paragraphs.</li><li>Solution offering is easily understood at a glance.</li><li>Is free of spelling and grammar mistakes.</li><li>Is comprehensive and captures target audience, type of user, and why it's valuable (value proposition).</li><li>Is in paragraph narrative form with short sentences that are easy to understand.</li></ul> | <ul><li> The target industry is outlined (if relevant).</li><li>Good style formatting, with each paragraph heading having a single sentence or phrase summarizing the content that follows and using bullet points, when appropriate, to emphasize key benefits. The objective is for the reader to understand the offering at a glance in an easy-to-view format and not have to read long paragraphs.</li><li>There is spacing between each paragraph. It reads like a car brochure. That is, it is comprehensive and describes the offering simply, without technical jargon. |
-| 5 | Categories and industries | <ul><li>Categories and subcategories must match offer capabilities </li><li>Do not select categories/subcategories that do not fit with your offer capabilities. </li></ul> | <ul><li>Select up to two categories, including a primary and a secondary category (optional).</li><li>Select up to two subcategories for each primary and/or secondary category. If no subcategory is selected, your offer will still be discoverable on the selected category.</li></ul> |
-| 6 | Images | <ul><li>Image requirements are listed in Partner Center.</li><li>Text included in the screenshot is legible, and the image is clear. | The solution offering is easily understood at a glance. |
-| 7 | Videos | <ul><li>No video is required but, if provided, it must play back without any errors.</li><li>If provided, it may not refer to competitor companies *unless* it is demonstrating a migration solution. |<ul><li>Ideally, the length is 3 min. or more.</li><li>The solution offer is easily understood through video content.</li><li>Provides demo of solution capabilities. |
-| 8 | List status (listing options) | <ul><li>Must be labeled as one of the following types: <ul><li>*Contact Me*</li><li>*Trial*/*Get Trial Now*/*Start Trial*/*Test Drive*</li><li>*Buy Now*/*Get It Now*</li></ul></ul> | Customer can readily understand what the next steps are: <ol><li>Try the Trial.</li><li>Buy Now.</li><li>Contact via email or phone number to arrange for Proof of Concept (POC), Assessment, or Briefing.</li></ol> |
-| 9 | Solution pricing | Must have solution pricing tab/details, and pricing must be in the local currency of the partner solution offering. | Multiple billing options should be available with tier pricing to give customer options. |
-| 10 | Learn more | Links at the bottom (under the description, not the Azure Marketplace links on the left) lead to more information about the solution and are publicly available and displaying correctly. | Links to specific items (for example, spec pages on the partner site) and not just the partner home page. |
-| 11 | Solution support and help | Link to at least one of the following: <ul><li>Telephone numbers</li><li>Email support</li><li>Chat agents</li><li>Community forums |<ul><li>All support methods are listed.</li><li>Paid support is offered free during the *Trial* or *Test Drive* period. |
-| 12 | Legal | Policies or terms are available via a public URL. | |
+| 1 | Offer title | Briefly describes the solution offering. Matches the online promotion of the solution on the partner's website. | Contains key search words. |
+| 2 | Logo | The logo displays correctly. | The logo displays correctly. |
+| 3 | Offer description | <ul><li> Contains 2-3 paragraphs.</li><li>Solution offering is easily understood at a glance.</li><li>Is free of spelling and grammar mistakes.</li><li>Is comprehensive and captures target audience, type of user, and why it's valuable (value proposition).</li><li>Is in paragraph narrative form with short sentences that are easy to understand.</li></ul> | <ul><li> The target industry is outlined (if relevant).</li><li>Good style formatting, with each paragraph heading having a single sentence or phrase summarizing the content that follows and using bullet points, when appropriate, to emphasize key benefits. The objective is for the reader to understand the offering at a glance in an easy-to-view format and not have to read long paragraphs.</li><li>There is spacing between each paragraph. It reads like a car brochure. That is, it is comprehensive and describes the offering simply, without technical jargon. |
+| 4 | Categories and industries | <ul><li>Categories and subcategories must match offer capabilities </li><li>Do not select categories/subcategories that do not fit with your offer capabilities. </li></ul> | <ul><li>Select up to two categories, including a primary and a secondary category (optional).</li><li>Select up to two subcategories for each primary and/or secondary category. If no subcategory is selected, your offer will still be discoverable on the selected category.</li></ul> |
+| 5 | Images | <ul><li>Image requirements are listed in Partner Center.</li><li>Text included in the screenshot is legible, and the image is clear. | The solution offering is easily understood at a glance. |
+| 6 | Videos | <ul><li>No video is required but, if provided, it must play back without any errors.</li><li>If provided, it may not refer to competitor companies *unless* it is demonstrating a migration solution. |<ul><li>Ideally, the length is 3 min. or more.</li><li>The solution offer is easily understood through video content.</li><li>Provides demo of solution capabilities. |
+| 7 | List status (listing options) | <ul><li>Must be labeled as one of the following types: <ul><li>*Contact Me*</li><li>*Trial*/*Get Trial Now*/*Start Trial*/*Test Drive*</li><li>*Buy Now*/*Get It Now*</li></ul></ul> | Customer can readily understand what the next steps are: <ol><li>Try the Trial.</li><li>Buy Now.</li><li>Contact via email or phone number to arrange for Proof of Concept (POC), Assessment, or Briefing.</li></ol> |
+| 8 | Solution pricing | Must have solution pricing tab/details, and pricing must be in the local currency of the partner solution offering. | Multiple billing options should be available with tier pricing to give customer options. |
+| 9 | Learn more | Links at the bottom (under the description, not the Azure Marketplace links on the left) lead to more information about the solution and are publicly available and displaying correctly. | Links to specific items (for example, spec pages on the partner site) and not just the partner home page. |
+| 10 | Solution support and help | Link to at least one of the following: <ul><li>Telephone numbers</li><li>Email support</li><li>Chat agents</li><li>Community forums |<ul><li>All support methods are listed.</li><li>Paid support is offered free during the *Trial* or *Test Drive* period. |
+| 11 | Legal | Policies or terms are available via a public URL. | |
||| ## Trial offer requirements
marketplace Orders Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/orders-dashboard.md
This article provides information on the Orders dashboard in Partner Center. This dashboard displays information about your orders, including growth trends, presented in a graphical and downloadable format.
-To access the Orders dashboard in the Partner Center, under **Commercial Marketplace**, select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Orders**.
- >[!NOTE] > For detailed definitions of analytics terminology, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md).
The [Orders dashboard](https://go.microsoft.com/fwlink/?linkid=2165914) displays
The following sections describe how to use the Orders dashboard and how to read the data.
+To access the Orders dashboard in the Partner Center, under **Commercial Marketplace**, select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Orders**.
+ ### Month range You can find a month range selection at the top-right corner of each page. Customize the output of the **Orders** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
marketplace Summary Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/summary-dashboard.md
This article provides information on the Summary dashboard in Partner Center. This dashboard displays graphs, trends, and values of aggregate data that summarize marketplace activity for your offers.
-To access the Summary dashboard in Partner Center, under **Commercial Marketplace** select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Summary**.
- >[!NOTE] > For detailed definitions of analytics terminology, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md).
The [Summary dashboard](https://go.microsoft.com/fwlink/?linkid=2165765) present
The following sections describe how to use the summary dashboard and how to read the data.
+To access the Summary dashboard in Partner Center, under **Commercial Marketplace** select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Summary**.
+ ### Month range You can find a month range selection at the top-right corner of each page. Customize the output of the **Summary** page graphs by selecting a month range based on the past 3, 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
marketplace Usage Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/usage-dashboard.md
This article provides information on the Usage dashboard in Partner Center. This dashboard displays all virtual machine (VM) offers normalized usage, raw usage, and metered billing metrics in three separate tabs: VM Normalized usage, VM Raw usage, and metered billing usage.
-To access the Usage dashboard in Partner Center, under **Commercial Marketplace**, select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Usage**.
- >[!NOTE] > For detailed definitions of analytics terminology, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.md).
The [Usage dashboard](https://go.microsoft.com/fwlink/?linkid=2166106) displays
The following sections describe how to use the Usage dashboard and how to read the data.
+To access the Usage dashboard in Partner Center, under **Commercial Marketplace**, select **[Analyze](https://partner.microsoft.com/dashboard/commercial-marketplace/analytics/summary)** > **Usage**.
+ ### Month range You can find a month range selection at the top-right corner of each page. Customize the output of the **Usage** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range (computation period) is six months.
migrate How To Discover Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-discover-applications.md
The software inventory is exported and downloaded in Excel format. The **Softwar
- Software inventory also identifies the SQL Server instances running in your VMware environment. - If you have not provided Windows authentication or SQL Server authentication credentials on the appliance configuration manager, then add the credentials so that the appliance can use them to connect to respective SQL Server instances.
+ > [!NOTE]
+ > The appliance can connect only to those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+ Once connected, the appliance gathers configuration and performance data for SQL Server instances and databases. The SQL Server configuration data is updated once every 24 hours, and the performance data is captured every 30 seconds. Hence, any change to the properties of a SQL Server instance or database, such as database status or compatibility level, can take up to 24 hours to update on the portal. ## Next steps
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-vmware.md
Support | Details
## SQL Server instance and database discovery requirements
-[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. Using this information, the appliance attempts to connect to respective SQL Server instances through the Windows authentication or SQL Server authentication credentials that are provided in the appliance configuration manager. After the appliance is connected, it gathers configuration and performance data for SQL Server instances and databases. SQL Server configuration data is updated once every 24 hours. Performance data is captured every 30 seconds.
+[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. Using this information, the appliance attempts to connect to respective SQL Server instances through the Windows authentication or SQL Server authentication credentials that are provided in the appliance configuration manager. The appliance can connect only to those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+
+After the appliance is connected, it gathers configuration and performance data for SQL Server instances and databases. SQL Server configuration data is updated once every 24 hours. Performance data is captured every 30 seconds.
Support | Details |
Support | Details
Support | Details | **Supported servers** | Currently supported only for servers in your VMware environment.
-**Windows servers** | Windows Server 2016<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br />Microsoft Windows Server 2008 (32-bit)
+**Windows servers** | Windows Server 2019<br />Windows Server 2016<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br />Microsoft Windows Server 2008 (32-bit)
**Linux servers** | Red Hat Enterprise Linux 7, 6, 5<br /> Ubuntu Linux 16.04, 14.04<br /> Debian 8, 7<br /> Oracle Linux 7, 6<br /> CentOS 7, 6, 5<br /> SUSE Linux Enterprise Server 11 and later **Server requirements** | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed. **Discovery method** | Dependency information between servers is gathered by using VMware Tools installed on the server running vCenter Server. The appliance gathers the information from the server by using vSphere APIs. No agent is installed on the server, and the appliance doesn't connect directly to servers. WMI should be enabled and available on Windows servers.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-vmware.md
To start vCenter Server discovery, in **Step 3: Provide server credentials to pe
* It takes approximately 15 minutes for the inventory of discovered servers to appear in the Azure portal. * If you provided server credentials, software inventory (discovery of installed applications) is automatically initiated when the discovery of servers running vCenter Server is finished. Software inventory occurs once every 12 hours. * [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
+* The appliance can connect only to those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
* Discovery of installed applications might take longer than 15 minutes. The duration depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable agentless dependency analysis. * Data for SQL Server instances and databases begins to appear in the portal within 24 hours after you start discovery.
network-watcher Network Watcher Packet Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-packet-capture-overview.md
Network Watcher variable packet capture allows you to create packet capture sessions to track traffic to and from a virtual machine. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, debugging client-server communications, and much more.
-Packet capture is a virtual machine extension that is remotely started through Network Watcher. This capability eases the burden of running a packet capture manually on the desired virtual machine, which saves valuable time. Packet capture can be triggered through the portal, PowerShell, CLI, or REST API. One example of how packet capture can be triggered is with Virtual Machine alerts. Filters are provided for the capture session to ensure you capture traffic you want to monitor. Filters are based on 5-tuple (protocol, local IP address, remote IP address, local port, and remote port) information. The captured data is stored in the local disk or a storage blob. There is a limit of 10 packet capture sessions per region per subscription. This limit applies only to the sessions and does not apply to the saved packet capture files either locally on the VM or in a storage account.
+Packet capture is a virtual machine extension that is remotely started through Network Watcher. This capability eases the burden of running a packet capture manually on the desired virtual machine, which saves valuable time. Packet capture can be triggered through the portal, PowerShell, CLI, or REST API. One example of how packet capture can be triggered is with Virtual Machine alerts. Filters are provided for the capture session to ensure you capture traffic you want to monitor. Filters are based on 5-tuple (protocol, local IP address, remote IP address, local port, and remote port) information. The captured data is stored in the local disk or a storage blob.
> [!IMPORTANT] > Packet capture requires a virtual machine extension `AzureNetworkWatcherExtension`. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
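For example, the following Azure CLI sketch starts a filtered capture on a VM. The resource group, VM, storage account, and filter values are hypothetical placeholders that you would replace with your own.

```azurecli-interactive
# Start a packet capture on a VM, filtering to TCP traffic on remote port 80.
# All resource names below are hypothetical examples.
az network watcher packet-capture create \
  --resource-group myResourceGroup \
  --vm myVM \
  --name myCapture \
  --storage-account mystorageaccount \
  --filters '[{"protocol":"TCP","remotePort":"80"}]'
```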
To reduce the information you capture to only the information you want, the foll
|**Remote IP address** | This value filters the packet capture to packets where the remote IP matches this filter value.| |**Remote port** | This value filters the packet capture to packets where the remote port matches this filter value.| +
+## Considerations
+There is a limit of 10,000 parallel packet capture sessions per region per subscription. This limit applies only to the sessions and does not apply to the saved packet capture files either locally on the VM or in a storage account. See the [Network Watcher service limits page](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#network-watcher-limits) for a full list of limits.
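To check how many capture sessions currently exist in a region, you can list them with the Azure CLI; the region below is a placeholder.

```azurecli-interactive
# List packet capture sessions for Network Watcher in a given region.
az network watcher packet-capture list --location eastus --output table
```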
+ ### Next steps Learn how you can manage packet captures through the portal by visiting [Manage packet capture in the Azure portal](network-watcher-packet-capture-manage-portal.md) or with PowerShell by visiting [Manage Packet Capture with PowerShell](network-watcher-packet-capture-manage-powershell.md).
Learn how you can manage packet captures through the portal by visiting [Manage
Learn how to create proactive packet captures based on virtual machine alerts by visiting [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md) <!--Image references-->
-[1]: ./media/network-watcher-packet-capture-overview/figure1.png
+[1]: ./media/network-watcher-packet-capture-overview/figure1.png
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-networking.md
Previously updated : 07/01/2021 Last updated : 07/08/2021 # Networking overview - Azure Database for PostgreSQL - Flexible Server
Here are some concepts to be familiar with when using virtual networks with Post
Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. See [network security group overview](../../virtual-network/network-security-groups-overview.md) documentation for more information. * **Private DNS zone integration** -
- Azure private DNS zone integration allows you to resolve the private DNS within the current VNET or any in-region peered VNET where the private DNS Zone is linked. If you use the Azure portal or the Azure CLI to create flexible servers, you can either provide a private DNS zone name that you had previously created in the same or in a different subscription, otherwise, a default private DNS zone is automatically created in your subscription. For a new Azure Database for PostgreSQL flexible server that uses private access with API, an Azure Resource Manager template (ARM template), or Terraform, create private DNS zones that end with `postgres.database.azure.com` and use them while configuring flexible servers with private access. For more information, see the [private DNS zone overview](../../dns/private-dns-overview.md).
+ Azure private DNS zone integration allows you to resolve the private DNS within the current VNET or any in-region peered VNET where the private DNS Zone is linked.
+
+### Using Private DNS Zone
+
+* If you use the Azure portal or the Azure CLI to create flexible servers with VNET, a new private DNS zone is auto-provisioned per server in your subscription using the server name provided. Alternatively, if you want to set up your own private DNS zone to use with the flexible server, please see the [private DNS overview](../../dns/private-dns-overview.md) documentation.
+* If you use Azure API, an Azure Resource Manager template (ARM template), or Terraform, please create private DNS zones that end with `postgres.database.azure.com` and use them while configuring flexible servers with private access. For more information, see the [private DNS zone overview](../../dns/private-dns-overview.md).
+
+ > [!IMPORTANT]
+ > Private DNS zone names must end with `postgres.database.azure.com`.
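As an illustration only, the following Azure CLI sketch creates a private DNS zone with the required suffix and references it when creating a flexible server with private access. All names are hypothetical, and the `--private-dns-zone` parameter assumes a recent Azure CLI version.

```azurecli-interactive
# Create a private DNS zone whose name ends with postgres.database.azure.com (hypothetical names).
az network private-dns zone create \
  --resource-group myresourcegroup \
  --name mydemoserver.private.postgres.database.azure.com

# Create the flexible server with private access, referencing the zone.
az postgres flexible-server create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --vnet myvnet \
  --subnet mysubnet \
  --private-dns-zone mydemoserver.private.postgres.database.azure.com
```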
Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
If you are using the custom DNS server then you must use a DNS forwarder to reso
### Private DNS zone and VNET peering
-Private DNS zone settings and VNET peering are independent of each other.
+Private DNS zone settings and VNET peering are independent of each other. Please refer to the [Using Private DNS Zone](concepts-networking.md#using-private-dns-zone) section above for more details on creating and using Private DNS zones.
-* By default, a new private DNS zone is auto-provisioned per server using the server name provided. However, if you want to setup your own private DNS zone to use with the flexible server, please see the [private DNS overview](../../dns/private-dns-overview.md) documentation.
-* If you want to connect to the flexible server from a client that is provisioned in another VNET, you have to link the private DNS zone with the VNET. See [how to link the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) documentation.
+If you want to connect to the flexible server from a client that is provisioned in another VNET from the same region or a different region, you have to link the private DNS zone with the VNET. See [how to link the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) documentation.
> [!NOTE] > Private DNS zone names that end with `postgres.database.azure.com` can only be linked.
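For example, a minimal sketch of linking the private DNS zone to a client VNET with the Azure CLI might look like the following; the zone, VNET, and resource group names are hypothetical.

```azurecli-interactive
# Link the private DNS zone to the client's virtual network so the server name resolves from that VNET.
az network private-dns link vnet create \
  --resource-group myresourcegroup \
  --zone-name mydemoserver.private.postgres.database.azure.com \
  --name myclient-vnet-link \
  --virtual-network myclientvnet \
  --registration-enabled false
```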
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/tutorial-django-aks-database.md
- Title: 'Tutorial: Deploy Django on AKS cluster with PostgreSQL Flexible Server by using Azure CLI'
-description: Learn how to quickly build and deploy Django on AKS with Azure Database for PostgreSQL - Flexible Server.
---- Previously updated : 12/10/2020---
-# Tutorial: Deploy Django app on AKS with Azure Database for PostgreSQL - Flexible Server
-
-In this quickstart, you deploy a Django application on Azure Kubernetes Service (AKS) cluster with Azure Database for PostgreSQL - Flexible Server (Preview) using the Azure CLI.
-
-**[AKS](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters. **[Azure Database for PostgreSQL - Flexible Server (Preview)](overview.md)** is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings.
-
-> [!NOTE]
-> - Azure Database for PostgreSQL Flexible Server is currently in public preview
-> - This quickstart assumes a basic understanding of Kubernetes concepts, Django and PostgreSQL.
-
-## Pre-requisites
--- Launch [Azure Cloud Shell](https://shell.azure.com) in new browser window. You can [install Azure CLI](/cli/azure/install-azure-cli#install) on you local machine too. If you're using a local install, login with Azure CLI by using the [az login](/cli/azure/reference-index#az_login) command. To finish the authentication process, follow the steps displayed in your terminal. -- Run [az version](/cli/azure/reference-index?#az_version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index?#az_upgrade). This article requires the latest version of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Create a resource group
-
-An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group, *django-project* using the [az-group-create](/cli/azure/groupt#az_group_create) command in the *eastus* location.
-
-```azurecli-interactive
-az group create --name django-project --location eastus
-```
-
-> [!NOTE]
-> The location for the resource group is where resource group metadata is stored. It is also where your resources run in Azure if you don't specify another region during resource creation.
-
-The following example output shows the resource group created successfully:
-
-```json
-{
- "id": "/subscriptions/<guid>/resourceGroups/django-project",
- "location": "eastus",
- "managedBy": null,
-
- "name": "django-project",
- "properties": {
- "provisioningState": "Succeeded"
- },
- "tags": null
-}
-```
-
-## Create AKS cluster
-
-Use the [az aks create](/cli/azure/aks#az_aks_create) command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This will take several minutes to complete.
-
-```azurecli-interactive
-az aks create --resource-group django-project --name djangoappcluster --node-count 1 --generate-ssh-keys
-```
-
-After a few minutes, the command completes and returns JSON-formatted information about the cluster.
-
-> [!NOTE]
-> When creating an AKS cluster a second resource group is automatically created to store the AKS resources. See [Why are two resource groups created with AKS?](../../aks/faq.md#why-are-two-resource-groups-created-with-aks)
-
-## Connect to the cluster
-
-To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed.
-
-> [!NOTE]
-> If running Azure CLI locally , please run the [az aks install-cli](/cli/azure/aks#az_aks_install_cli) command to install `kubectl`.
-
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az_aks_get_credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them.
-
-```azurecli-interactive
-az aks get-credentials --resource-group django-project --name djangoappcluster
-```
-
-To verify the connection to your cluster, use the [kubectl get]( https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command to return a list of the cluster nodes.
-
-```azurecli-interactive
-kubectl get nodes
-```
-
-The following example output shows the single node created in the previous steps. Make sure that the status of the node is *Ready*:
-
-```output
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
-```
-
-## Create an Azure Database for PostgreSQL - Flexible Server
-Create a flexible server with the [az postgreSQL flexible-server create](/cli/azure/postgres/flexible-server#az_postgres_flexible_server_create) command. The following command creates a server using service defaults and values from your Azure CLI's local context:
-
-```azurecli-interactive
-az postgres flexible-server create --public-access all
-```
-
-The server created has the below attributes:
-- A new empty database, ```postgres``` is created when the server is first provisioned. In this quickstart we will use this database.-- Autogenerated server name, admin username, admin password, resource group name (if not already specified in local context), and in the same location as your resource group-- Using public-access argument allow you to create a server with public access to any client with correct username and password.-- Since the command is using local context it will create the server in the resource group ```django-project``` and in the region ```eastus```.--
-## Build your Django docker image
-To use the sample docker image with this tutorial, please go to [**Create Kubernetes manifest file**](./tutorial-django-aks-database.md#create-kubernetes-manifest-file) section. If using an existing [Django application](https://docs.djangoproject.com/en/3.1/intro/), follow the steps described in this section.
-
-Here is sample django project folder structure:
-
-```
-ΓööΓöÇΓöÇΓöÇmy-djangoapp
- ΓööΓöÇΓöÇΓöÇviews.py
- ΓööΓöÇΓöÇΓöÇmodels.py
- ΓööΓöÇΓöÇΓöÇforms.py
- Γö£ΓöÇΓöÇΓöÇtemplates
- . . . . . . .
- Γö£ΓöÇΓöÇΓöÇstatic
- . . . . . . .
-ΓööΓöÇΓöÇΓöÇmy-django-project
- ΓööΓöÇΓöÇΓöÇsettings.py
- ΓööΓöÇΓöÇΓöÇurls.py
- ΓööΓöÇΓöÇΓöÇwsgi.py
- . . . . . . .
- ΓööΓöÇΓöÇΓöÇ Dockerfile
- ΓööΓöÇΓöÇΓöÇ requirements.txt
- ΓööΓöÇΓöÇΓöÇ manage.py
-
-```
-Update ```ALLOWED_HOSTS``` in ```settings.py``` to make sure the Django application uses the external IP that gets assigned to kubernetes app.
-
-```python
-ALLOWED_HOSTS = ['*']
-```
-
-Update ```DATABASES={ }``` section in the ```settings.py``` file. The code snippet below is reading the database host, username and password from the Kubernetes manifest file.
-
-```python
-DATABASES={
- 'default':{
- 'ENGINE':'django.db.backends.postgresql_psycopg2',
- 'NAME':os.getenv('DATABASE_NAME'),
- 'USER':os.getenv('DATABASE_USER'),
- 'PASSWORD':os.getenv('DATABASE_PASSWORD'),
- 'HOST':os.getenv('DATABASE_HOST'),
- 'PORT':'5432',
- 'OPTIONS': {'sslmode': 'require'}
- }
-}
-```
-
-### Generate a requirements.txt file
-Create a ```requirements.txt``` file to list out the dependencies for the Django Application. Here is an example ```requirements.txt``` file. You can use [``` pip freeze > requirements.txt```](https://pip.pypa.io/en/stable/reference/pip_freeze/) to generate a requirements.txt file for your existing application.
-
-``` text
-Django==2.2.17
-postgres==3.0.0
-psycopg2-binary==2.8.6
-psycopg2-pool==1.1
-pytz==2020.4
-```
-
-### Create a Dockerfile
-Create a new file named ```Dockerfile``` and copy the code snippet below. This Dockerfile in setting up Python 3.8 and installing all the requirements listed in requirements.txt file.
-
-```docker
-# Use the official Python image from the Docker Hub
-FROM python:3.8.2
-
-# Make a new directory to put our code in.
-RUN mkdir /code
-
-# Change the working directory.
-WORKDIR /code
-
-# Copy to code folder
-COPY . /code/
-
-# Install the requirements.
-RUN pip install -r requirements.txt
-
-# Run the application:
-CMD python manage.py runserver 0.0.0.0:8000
-```
-
-### Build your image
-Make sure you're in the directory ```my-django-app``` in a terminal using the ```cd``` command. Run the following command to build your bulletin board image:
-
-``` bash
-
-docker build --tag myblog:latest .
-
-```
-
-Deploy your image to [Docker hub](https://docs.docker.com/get-started/part3/#create-a-docker-hub-repository-and-push-your-image) or [Azure Container registry](../../container-registry/container-registry-get-started-azure-cli.md).
-
-> [!IMPORTANT]
->If you are using Azure container regdistry (ACR), then run the ```az aks update``` command to attach ACR account with the AKS cluster.
->
->```azurecli-interactive
->az aks update -n myAKSCluster -g django-project --attach-acr <your-acr-name>
-> ```
->
-
-## Create Kubernetes manifest file
-
-A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. Let's create a manifest file named ```djangoapp.yaml``` and copy in the following YAML definition.
-
-### Update the sample manifest file
-- Replace ```[DOCKER-HUB-USER/ACR ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG]``` with the demo sample app ```mksuni/django-aks-app:latest``` in the manifest file. You can see the code of this file [here](https://github.com/mksuni/django-aks-app).If using a different docker image for a custom application, please use provide the correct docker image name and tag. -- Update ```env``` section below with your ```SERVERNAME```, ```YOUR-DATABASE-USER```, ```YOUR-DATABASE-PASSWORD``` of your postgres flexible server.-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: django-app
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: django-app
- template:
- metadata:
- labels:
- app: django-app
- spec:
- containers:
- - name: django-app
- image: [DOCKER-HUB-USER-OR-ACR-ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG]
- ports:
- - containerPort: 80
- env:
- - name: DATABASE_HOST
- value: "SERVERNAME.postgres.database.azure.com"
- - name: DATABASE_USER
- value: "YOUR-DATABASE-USER"
- - name: DATABASE_PASSWORD
- value: "YOUR-DATABASE-PASSWORD"
- - name: DATABASE_NAME
- value: "postgres"
- affinity:
- podAntiAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- - labelSelector:
- matchExpressions:
- - key: "app"
- operator: In
- values:
- - django-app
- topologyKey: "kubernetes.io/hostname"
-
-apiVersion: v1
-kind: Service
-metadata:
- name: python-svc
-spec:
- type: LoadBalancer
- ports:
- - port: 8000
- selector:
- app: django-app
-```
-
-## Deploy Django to AKS cluster
-Deploy the application using the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command and specify the name of your YAML manifest:
-
-```console
-kubectl apply -f djangoapp.yaml
-```
-
-The following example output shows the Deployments and Services created successfully:
-
-```output
-deployment "django-app" created
-service "python-svc" created
-```
-
-A deployment ```django-app``` allows you to describes details on of your deployment such as which images to use for the app, the number of pods and pod configuration. A service ```python-svc``` is created to expose the application through an external IP.
-
-## Test the application
-
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-
-To monitor progress, use the [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument.
-
-```azurecli-interactive
-kubectl get service python-svc --watch
-```
-
-Initially the *EXTERNAL-IP* for the *django-app* service is shown as *pending*.
-
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-django-app LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
-
-When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
-
-```output
-django-app LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
-
-Now open a web browser to the external IP address of your service view the Django application.
-
->[!NOTE]
-> - Currently the Django site is not using HTTPS. It is recommended to [ENABLE TLS with your own certificates](../../aks/ingress-own-tls.md).
-> - You can enable [HTTP routing](../../aks/http-application-routing.md) for your cluster. When http routing is enabled, it configures an Ingress controller in your AKS cluster. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
-
-## Run database migrations
-
-For any django application, you would need to run database migration or collect static files. You can run these django shell commands using ```$ kubectl exec <pod-name> -- [COMMAND]```. Before running the command you need to find the pod name using ```kubectl get pods```.
-
-```bash
-$ kubectl get pods
-```
-
-You will see an output like this
-```output
-NAME READY STATUS RESTARTS AGE
-django-app-5d9cd6cd8-l6x4b 1/1 Running 0 2m
-```
-
-Once the pod name has been found you can run django database migrations with the command ```$ kubectl exec <pod-name> -- [COMMAND]```. Note ```/code/``` is the working directory for the project define in ```Dockerfile``` above.
-
-```bash
-$ kubectl exec django-app-5d9cd6cd8-l6x4b -- python /code/manage.py migrate
-```
-
-The output would look like
-```output
-Operations to perform:
- Apply all migrations: admin, auth, contenttypes, sessions
-Running migrations:
- Applying contenttypes.0001_initial... OK
- Applying auth.0001_initial... OK
- Applying admin.0001_initial... OK
- Applying admin.0002_logentry_remove_auto_add... OK
- Applying admin.0003_logentry_add_action_flag_choices... OK
- . . . . . .
-```
-
-If you run into issues, please run ```kubectl logs <pod-name>``` to see what exception is thrown by your application. If the application is working successfully you would see an output like this when running ```kubectl logs```.
-
-```output
-Watching for file changes with StatReloader
-Performing system checks...
-
-System check identified no issues (0 silenced).
-
-You have 17 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
-Run 'python manage.py migrate' to apply them.
-December 08, 2020 - 23:24:14
-Django version 2.2.17, using settings 'django_postgres_app.settings'
-Starting development server at http://0.0.0.0:8000/
-Quit the server with CONTROL-C.
-```
-
-## Clean up the resources
-
-To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, and all related resources.
-
-```azurecli-interactive
-az group delete --name django-project --yes --no-wait
-```
-
-> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
-
-## Next steps
--- Learn how to [access the Kubernetes web dashboard](../../aks/kubernetes-dashboard.md) for your AKS cluster-- Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md)-- Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md)-- Learn how to manage your [postgres flexible server](./quickstart-create-server-cli.md)-- Learn how to [configure server parameters](./howto-configure-server-parameters-using-cli.md) for your database server.+
+ Title: 'Tutorial: Deploy Django on AKS cluster with PostgreSQL Flexible Server by using Azure CLI'
+description: Learn how to quickly build and deploy Django on AKS with Azure Database for PostgreSQL - Flexible Server.
++++ Last updated : 12/10/2020+++
+# Tutorial: Deploy Django app on AKS with Azure Database for PostgreSQL - Flexible Server
+
+In this quickstart, you deploy a Django application on an Azure Kubernetes Service (AKS) cluster with Azure Database for PostgreSQL - Flexible Server (Preview) using the Azure CLI.
+
+**[AKS](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters. **[Azure Database for PostgreSQL - Flexible Server (Preview)](overview.md)** is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings.
+
+> [!NOTE]
+> - Azure Database for PostgreSQL Flexible Server is currently in public preview
+> - This quickstart assumes a basic understanding of Kubernetes concepts, Django and PostgreSQL.
+
+## Prerequisites
+
+- Launch [Azure Cloud Shell](https://shell.azure.com) in a new browser window. You can also [install Azure CLI](/cli/azure/install-azure-cli#install) on your local machine. If you're using a local install, sign in with Azure CLI by using the [az login](/cli/azure/reference-index#az_login) command. To finish the authentication process, follow the steps displayed in your terminal.
+- Run [az version](/cli/azure/reference-index?#az_version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index?#az_upgrade). This article requires the latest version of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Create a resource group
+
+An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group, *django-project*, using the [az group create](/cli/azure/group#az_group_create) command in the *eastus* location.
+
+```azurecli-interactive
+az group create --name django-project --location eastus
+```
+
+> [!NOTE]
+> The location for the resource group is where resource group metadata is stored. It is also where your resources run in Azure if you don't specify another region during resource creation.
+
+The following example output shows the resource group created successfully:
+
+```json
+{
+ "id": "/subscriptions/<guid>/resourceGroups/django-project",
+ "location": "eastus",
+ "managedBy": null,
+
+ "name": "django-project",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null
+}
+```
+
+## Create AKS cluster
+
+Use the [az aks create](/cli/azure/aks#az_aks_create) command to create an AKS cluster. The following example creates a cluster named *djangoappcluster* with one node. This will take several minutes to complete.
+
+```azurecli-interactive
+az aks create --resource-group django-project --name djangoappcluster --node-count 1 --generate-ssh-keys
+```
+
+After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+> [!NOTE]
+> When creating an AKS cluster a second resource group is automatically created to store the AKS resources. See [Why are two resource groups created with AKS?](../../aks/faq.md#why-are-two-resource-groups-created-with-aks)
+
+## Connect to the cluster
+
+To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed.
+
+> [!NOTE]
+> If you're running the Azure CLI locally, please run the [az aks install-cli](/cli/azure/aks#az_aks_install_cli) command to install `kubectl`.
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az_aks_get_credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurecli-interactive
+az aks get-credentials --resource-group django-project --name djangoappcluster
+```
+
+To verify the connection to your cluster, use the [kubectl get]( https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command to return a list of the cluster nodes.
+
+```azurecli-interactive
+kubectl get nodes
+```
+
+The following example output shows the single node created in the previous steps. Make sure that the status of the node is *Ready*:
+
+```output
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
+```
+
+## Create an Azure Database for PostgreSQL - Flexible Server
+Create a flexible server with the [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az_postgres_flexible_server_create) command. The following command creates a server using service defaults and values from your Azure CLI's local context:
+
+```azurecli-interactive
+az postgres flexible-server create --public-access all
+```
+
+The server created has the following attributes:
+- A new empty database, ```postgres```, is created when the server is first provisioned. In this quickstart, we will use this database.
+- The server name, admin username, admin password, and resource group name (if not already specified in the local context) are autogenerated, and the server is created in the same location as your resource group.
+- The ```--public-access all``` argument creates a server with public access for any client that has the correct username and password.
+- Because the command uses the local context, it creates the server in the resource group ```django-project``` and in the region ```eastus```.
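If you prefer explicit names over the autogenerated defaults, a sketch with hypothetical values might look like the following; replace the server name, admin user, and password with your own.

```azurecli-interactive
# Create the flexible server with explicit, hypothetical values instead of autogenerated defaults.
az postgres flexible-server create \
  --resource-group django-project \
  --name mydemoserver \
  --admin-user myadmin \
  --admin-password <your-password> \
  --public-access all
```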
++
+## Build your Django docker image
+
+Create a new [Django application](https://docs.djangoproject.com/en/3.1/intro/) or use your existing Django project. Make sure your code is in this folder structure.
+
+> [!NOTE]
+> If you don't have an application you can go directly to [**Create Kubernetes manifest file**](./tutorial-django-aks-database.md#create-kubernetes-manifest-file) to use our sample image, [mksuni/django-aks-app:latest](https://hub.docker.com/r/mksuni/django-aks-app).
+
+```
+└───my-djangoapp
+ └───views.py
+ └───models.py
+ └───forms.py
+ ├───templates
+ . . . . . . .
+ ├───static
+ . . . . . . .
+└───my-django-project
+ └───settings.py
+ └───urls.py
+ └───wsgi.py
+ . . . . . . .
+ └─── Dockerfile
+ └─── requirements.txt
+ └─── manage.py
+
+```
+Update ```ALLOWED_HOSTS``` in ```settings.py``` to make sure the Django application uses the external IP that gets assigned to the Kubernetes app.
+
+```python
+ALLOWED_HOSTS = ['*']
+```
+
+Update the ```DATABASES={ }``` section in the ```settings.py``` file. The code snippet below reads the database host, username, and password from the Kubernetes manifest file.
+
+```python
+DATABASES={
+ 'default':{
+ 'ENGINE':'django.db.backends.postgresql_psycopg2',
+ 'NAME':os.getenv('DATABASE_NAME'),
+ 'USER':os.getenv('DATABASE_USER'),
+ 'PASSWORD':os.getenv('DATABASE_PASSWORD'),
+ 'HOST':os.getenv('DATABASE_HOST'),
+ 'PORT':'5432',
+ 'OPTIONS': {'sslmode': 'require'}
+ }
+}
+```
+
+### Generate a requirements.txt file
+Create a ```requirements.txt``` file to list the dependencies for the Django application. Here is an example ```requirements.txt``` file. You can use [``` pip freeze > requirements.txt```](https://pip.pypa.io/en/stable/reference/pip_freeze/) to generate a requirements.txt file for your existing application.
+
+``` text
+Django==2.2.17
+postgres==3.0.0
+psycopg2-binary==2.8.6
+psycopg2-pool==1.1
+pytz==2020.4
+```
+
+### Create a Dockerfile
+Create a new file named ```Dockerfile``` and copy the code snippet below. This Dockerfile sets up Python 3.8 and installs all the requirements listed in the requirements.txt file.
+
+```docker
+# Use the official Python image from the Docker Hub
+FROM python:3.8.2
+
+# Make a new directory to put our code in.
+RUN mkdir /code
+
+# Change the working directory.
+WORKDIR /code
+
+# Copy to code folder
+COPY . /code/
+
+# Install the requirements.
+RUN pip install -r requirements.txt
+
+# Run the application:
+CMD python manage.py runserver 0.0.0.0:8000
+```
+
+### Build your image
+Make sure you're in the directory ```my-django-app``` in a terminal using the ```cd``` command. Run the following command to build your Django application image:
+
+``` bash
+
+docker build --tag myblog:latest .
+
+```
+
+Deploy your image to [Docker Hub](https://docs.docker.com/get-started/part3/#create-a-docker-hub-repository-and-push-your-image) or [Azure Container Registry](../../container-registry/container-registry-get-started-azure-cli.md).
+
+> [!IMPORTANT]
+>If you are using Azure Container Registry (ACR), then run the ```az aks update``` command to attach the ACR account to the AKS cluster.
+>
+>```azurecli-interactive
+>az aks update -n djangoappcluster -g django-project --attach-acr <your-acr-name>
+> ```
+>
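If you are pushing to Docker Hub, a minimal sketch of tagging and pushing the image looks like the following; ```<docker-hub-user>``` is a hypothetical placeholder for your own Docker Hub account, and you need to run ```docker login``` first.

```bash
# Tag the locally built image with your Docker Hub account and push it.
docker tag myblog:latest <docker-hub-user>/myblog:latest
docker push <docker-hub-user>/myblog:latest
```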
+
+## Create Kubernetes manifest file
+
+A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. Let's create a manifest file named ```djangoapp.yaml``` and copy in the following YAML definition.
+
+>[!IMPORTANT]
+> - Replace ```[DOCKER-HUB-USER/ACR ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG]``` with your actual Django docker image name and tag, for example ```docker-hub-user/myblog:latest```. You can use the demo sample app ```mksuni/django-aks-app:latest``` in the manifest file.
+> - Update the ```env``` section below with the ```SERVERNAME```, ```YOUR-DATABASE-USERNAME```, and ```YOUR-DATABASE-PASSWORD``` values of your postgres flexible server.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: django-app
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: django-app
+ template:
+ metadata:
+ labels:
+ app: django-app
+ spec:
+ containers:
+ - name: django-app
+ image: [DOCKER-HUB-USER-OR-ACR-ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG]
+ ports:
+ - containerPort: 80
+ env:
+ - name: DATABASE_HOST
+ value: "SERVERNAME.postgres.database.azure.com"
+ - name: DATABASE_USERNAME
+ value: "YOUR-DATABASE-USERNAME"
+ - name: DATABASE_PASSWORD
+ value: "YOUR-DATABASE-PASSWORD"
+ - name: DATABASE_NAME
+ value: "postgres"
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: "app"
+ operator: In
+ values:
+ - django-app
+ topologyKey: "kubernetes.io/hostname"
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: python-svc
+spec:
+ type: LoadBalancer
+ ports:
+ - port: 8000
+ selector:
+ app: django-app
+```
+
+## Deploy Django to AKS cluster
+Deploy the application using the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command and specify the name of your YAML manifest:
+
+```console
+kubectl apply -f djangoapp.yaml
+```
+
+The following example output shows the Deployments and Services created successfully:
+
+```output
+deployment "django-app" created
+service "python-svc" created
+```
+
+The deployment ```django-app``` describes the details of your deployment, such as which image to use for the app, the number of pods, and the pod configuration. The service ```python-svc``` is created to expose the application through an external IP.
+
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+
+To monitor progress, use the [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument.
+
+```azurecli-interactive
+kubectl get service python-svc --watch
+```
+
+Initially, the *EXTERNAL-IP* for the *python-svc* service is shown as *pending*.
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+python-svc LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+```
+
+When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+
+```output
+python-svc LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+```
+
+Now open a web browser to the external IP address of your service to view the Django application.
+
+>[!NOTE]
+> - Currently the Django site is not using HTTPS. It is recommended to [ENABLE TLS with your own certificates](../../aks/ingress-own-tls.md).
+> - You can enable [HTTP routing](../../aks/http-application-routing.md) for your cluster. When http routing is enabled, it configures an Ingress controller in your AKS cluster. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
+
+## Run database migrations
+
+For any Django application, you need to run database migrations or collect static files. You can run these Django shell commands using ```$ kubectl exec <pod-name> -- [COMMAND]```. Before running the command, you need to find the pod name using ```kubectl get pods```.
+
+```bash
+$ kubectl get pods
+```
+
+You will see an output like this
+```output
+NAME READY STATUS RESTARTS AGE
+django-app-5d9cd6cd8-l6x4b 1/1 Running 0 2m
+```
+
+Once the pod name has been found, you can run Django database migrations with the command ```$ kubectl exec <pod-name> -- [COMMAND]```. Note that ```/code/``` is the working directory for the project defined in the ```Dockerfile``` above.
+
+```bash
+$ kubectl exec django-app-5d9cd6cd8-l6x4b -- python /code/manage.py migrate
+```
+
+The output would look like
+```output
+Operations to perform:
+ Apply all migrations: admin, auth, contenttypes, sessions
+Running migrations:
+ Applying contenttypes.0001_initial... OK
+ Applying auth.0001_initial... OK
+ Applying admin.0001_initial... OK
+ Applying admin.0002_logentry_remove_auto_add... OK
+ Applying admin.0003_logentry_add_action_flag_choices... OK
+ . . . . . .
+```
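Similarly, if your app serves static files, you can collect them inside the running container with the same ```kubectl exec``` pattern; the pod name below is the hypothetical one from the earlier output.

```bash
# Collect static files inside the running container (pod name is an example).
kubectl exec django-app-5d9cd6cd8-l6x4b -- python /code/manage.py collectstatic --noinput
```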
+
+If you run into issues, please run ```kubectl logs <pod-name>``` to see what exception is thrown by your application. If the application is working successfully you would see an output like this when running ```kubectl logs```.
+
+```output
+Watching for file changes with StatReloader
+Performing system checks...
+
+System check identified no issues (0 silenced).
+
+You have 17 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
+Run 'python manage.py migrate' to apply them.
+December 08, 2020 - 23:24:14
+Django version 2.2.17, using settings 'django_postgres_app.settings'
+Starting development server at http://0.0.0.0:8000/
+Quit the server with CONTROL-C.
+```
+
+## Clean up the resources
+
+To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, and all related resources.
+
+```azurecli-interactive
+az group delete --name django-project --yes --no-wait
+```
+
+> [!NOTE]
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
+
+## Next steps
+
+- Learn how to [access the Kubernetes web dashboard](../../aks/kubernetes-dashboard.md) for your AKS cluster
+- Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md)
+- Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md)
+- Learn how to manage your [postgres flexible server](./quickstart-create-server-cli.md)
+- Learn how to [configure server parameters](./howto-configure-server-parameters-using-cli.md) for your database server.
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-sapecc-source.md
To create and run a new scan, do the following:
f. **Maximum memory available:** Maximum memory (in GB) available on the customer's VM to be used by scanning processes. This is dependent on the size of the SAP ECC source to be scanned.
+ > [!Note]
+ > As a rule of thumb, provide 1 GB of memory for every 1,000 tables.
:::image type="content" source="media/register-scan-sapecc-source/scan-sapecc.png" alt-text="scan SAPECC" border="true":::
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-saps4hana-source.md
To create and run a new scan, do the following:
f. **Maximum memory available:** Maximum memory (in GB) available on the customer's VM to be used by scanning processes. This is dependent on the size of the SAP S/4HANA source to be scanned.
+ > [!Note]
+ > As a rule of thumb, provide 1 GB of memory for every 1,000 tables.
:::image type="content" source="media/register-scan-saps4hana-source/scan-saps-4-hana.png" alt-text="scan SAP S/4HANA" border="true":::
route-server Quickstart Configure Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/quickstart-configure-template.md
# Quickstart: Create an Azure Route Server using an ARM template
-This quickstart describes how to use an Azure Resource Manager template (ARM Template) to deploy an Azure Route Server into a new or existing virtual network.
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to deploy an Azure Route Server into a new or existing virtual network.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
search Search Get Started Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-get-started-javascript.md
ms.devlang: javascript Previously updated : 06/11/2021 Last updated : 07/08/2021
Begin by opening VS Code and its [integrated terminal](https://code.visualstudio
"author": "Your Name", "license": "MIT", "dependencies": {
- "@azure/search-documents": "^11.0.3",
+ "@azure/search-documents": "^11.2.0",
"dotenv": "^8.2.0" } }
Add the following to **hotels_quickstart_index.json** or [download the file](htt
"filterable": false, "sortable": false, "facetable": false,
- "analyzer": "en.lucene"
+ "analyzerName": "en.lucene"
}, { "name": "Description_fr",
Add the following to **hotels_quickstart_index.json** or [download the file](htt
"filterable": false, "sortable": false, "facetable": false,
- "analyzer": "fr.lucene"
+ "analyzerName": "fr.lucene"
}, { "name": "Category",
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following tables display the current Azure Sentinel feature availability in
| - [Anomalous Windows File Share Access Detection](../../sentinel/fusion.md) | Public Preview | Not Available | | - [Anomalous RDP Login Detection](../../sentinel/connect-windows-security-events.md#configure-the-security-events--windows-security-events-connector-for-anomalous-rdp-login-detection)<br>Built-in ML detection | Public Preview | Not Available | | - [Anomalous SSH login detection](../../sentinel/connect-syslog.md#configure-the-syslog-connector-for-anomalous-ssh-login-detection)<br>Built-in ML detection | Public Preview | Not Available |
-| **Azure service connectors** | | |
-| - [Azure Activity Logs](../../sentinel/connect-azure-activity.md) | GA | GA |
-| - [Azure Active Directory](../../sentinel/connect-azure-active-directory.md) | GA | GA |
-| - [Azure ADIP](../../sentinel/connect-azure-ad-identity-protection.md) | GA | GA |
-| - [Azure DDoS Protection](../../sentinel/connect-azure-ddos-protection.md) | GA | GA |
-| - [Azure Defender](../../sentinel/connect-azure-security-center.md) | GA | GA |
-| - [Azure Defender for IoT](../../sentinel/connect-asc-iot.md) | GA | Not Available |
-| - [Azure Firewall ](../../sentinel/connect-azure-firewall.md) | GA | GA |
-| - [Azure Information Protection](../../sentinel/connect-azure-information-protection.md) | Public Preview | Not Available |
-| - [Azure Key Vault ](../../sentinel/connect-azure-key-vault.md) | Public Preview | Not Available |
-| - [Azure Kubernetes Services (AKS)](../../sentinel/connect-azure-kubernetes-service.md) | Public Preview | Not Available |
-| - [Azure SQL Databases](../../sentinel/connect-azure-sql-logs.md) | GA | GA |
-| - [Azure WAF](../../sentinel/connect-azure-waf.md) | GA | GA |
-| **Windows connectors** | | |
-| - [Windows Firewall](../../sentinel/connect-windows-firewall.md) | GA | GA |
-| - [Windows Security Events](../../sentinel/connect-windows-security-events.md) | GA | GA |
-| **External connectors**| | |
-| - [Agari Phishing Defense and Brand Protection](../../sentinel/connect-agari-phishing-defense.md) | Public Preview | Public Preview |
-| - [AI Analyst Darktrace](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
-| - [AI Vectra Detect](../../sentinel/connect-ai-vectra-detect.md) | Public Preview | Public Preview |
-| - [Akamai Security Events](../../sentinel/connect-akamai-security-events.md) | Public Preview | Public Preview |
-| - [Alcide kAudit](../../sentinel/connect-alcide-kaudit.md) | Public Preview | Not Available |
-| - [Alsid for Active Directory](../../sentinel/connect-alsid-active-directory.md) | Public Preview | Not Available |
-| - [Apache HTTP Server](../../sentinel/connect-apache-http-server.md) | Public Preview | Not Available |
-| - [Aruba ClearPass](../../sentinel/connect-aruba-clearpass.md) | Public Preview | Public Preview |
-| - [AWS](../../sentinel/connect-data-sources.md) | GA | GA |
-| - [Barracuda CloudGen Firewall](../../sentinel/connect-barracuda-cloudgen-firewall.md) | GA | GA |
-| - [Barracuda Web App Firewall](../../sentinel/connect-barracuda.md) | GA | GA |
-| - [BETTER Mobile Threat Defense MTD](../../sentinel/connect-better-mtd.md) | Public Preview | Not Available |
-| - [Beyond Security beSECURE](../../sentinel/connect-besecure.md) | Public Preview | Not Available |
-| - [Blackberry CylancePROTECT](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
-| - [Broadcom Symantec DLP](../../sentinel/connect-broadcom-symantec-dlp.md) | Public Preview | Public Preview |
-| - [Check Point](../../sentinel/connect-checkpoint.md) | GA | GA |
-| - [Cisco ASA](../../sentinel/connect-cisco.md) | GA | GA |
-| - [Cisco Meraki](../../sentinel/connect-cisco-meraki.md) | Public Preview | Public Preview |
-| - [Cisco Umbrella](../../sentinel/connect-cisco-umbrella.md) | Public Preview | Public Preview |
-| - [Cisco UCS](../../sentinel/connect-cisco-ucs.md) | Public Preview | Public Preview |
-| - [Cisco Firepower EStreamer](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
-| - [Citrix Analytics WAF](../../sentinel/connect-citrix-waf.md) | GA | GA |
-| - [Common Event Format (CEF)](../../sentinel/connect-common-event-format.md) | GA | GA |
+| **Azure service connectors** | | |
+| - [Azure Activity Logs](../../sentinel/connect-azure-activity.md) | GA | GA |
+| - [Azure Active Directory](../../sentinel/connect-azure-active-directory.md) | GA | GA |
+| - [Azure ADIP](../../sentinel/connect-azure-ad-identity-protection.md) | GA | GA |
+| - [Azure DDoS Protection](../../sentinel/connect-azure-ddos-protection.md) | GA | GA |
+| - [Azure Defender](../../sentinel/connect-azure-security-center.md) | GA | GA |
+| - [Azure Defender for IoT](../../sentinel/connect-asc-iot.md) | GA | Not Available |
+| - [Azure Firewall ](../../sentinel/connect-azure-firewall.md) | GA | GA |
+| - [Azure Information Protection](../../sentinel/connect-azure-information-protection.md) | Public Preview | Not Available |
+| - [Azure Key Vault ](../../sentinel/connect-azure-key-vault.md) | Public Preview | Not Available |
+| - [Azure Kubernetes Services (AKS)](../../sentinel/connect-azure-kubernetes-service.md) | Public Preview | Not Available |
+| - [Azure SQL Databases](../../sentinel/connect-azure-sql-logs.md) | GA | GA |
+| - [Azure WAF](../../sentinel/connect-azure-waf.md) | GA | GA |
+| **Windows connectors** | | |
+| - [Windows Firewall](../../sentinel/connect-windows-firewall.md) | GA | GA |
+| - [Windows Security Events](../../sentinel/connect-windows-security-events.md) | GA | GA |
+| **External connectors** | | |
+| - [Agari Phishing Defense and Brand Protection](../../sentinel/connect-agari-phishing-defense.md) | Public Preview | Public Preview |
+| - [AI Analyst Darktrace](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
+| - [AI Vectra Detect](../../sentinel/connect-ai-vectra-detect.md) | Public Preview | Public Preview |
+| - [Akamai Security Events](../../sentinel/connect-akamai-security-events.md) | Public Preview | Public Preview |
+| - [Alcide kAudit](../../sentinel/connect-alcide-kaudit.md) | Public Preview | Not Available |
+| - [Alsid for Active Directory](../../sentinel/connect-alsid-active-directory.md) | Public Preview | Not Available |
+| - [Apache HTTP Server](../../sentinel/connect-apache-http-server.md) | Public Preview | Not Available |
+| - [Aruba ClearPass](../../sentinel/connect-aruba-clearpass.md) | Public Preview | Public Preview |
+| - [AWS](../../sentinel/connect-data-sources.md) | GA | GA |
+| - [Barracuda CloudGen Firewall](../../sentinel/connect-barracuda-cloudgen-firewall.md) | GA | GA |
+| - [Barracuda Web App Firewall](../../sentinel/connect-barracuda.md) | GA | GA |
+| - [BETTER Mobile Threat Defense MTD](../../sentinel/connect-better-mtd.md) | Public Preview | Not Available |
+| - [Beyond Security beSECURE](../../sentinel/connect-besecure.md) | Public Preview | Not Available |
+| - [Blackberry CylancePROTECT](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
+| - [Broadcom Symantec DLP](../../sentinel/connect-broadcom-symantec-dlp.md) | Public Preview | Public Preview |
+| - [Check Point](../../sentinel/connect-checkpoint.md) | GA | GA |
+| - [Cisco ASA](../../sentinel/connect-cisco.md) | GA | GA |
+| - [Cisco Meraki](../../sentinel/connect-cisco-meraki.md) | Public Preview | Public Preview |
+| - [Cisco Umbrella](../../sentinel/connect-cisco-umbrella.md) | Public Preview | Public Preview |
+| - [Cisco UCS](../../sentinel/connect-cisco-ucs.md) | Public Preview | Public Preview |
+| - [Cisco Firepower EStreamer](../../sentinel/connect-data-sources.md) | Public Preview | Public Preview |
+| - [Citrix Analytics WAF](../../sentinel/connect-citrix-waf.md) | GA | GA |
+| - [Common Event Format (CEF)](../../sentinel/connect-common-event-format.md) | GA | GA |
| - [CyberArk Enterprise Password Vault (EPV) Events](../../sentinel/connect-cyberark.md) | Public Preview | Public Preview |
| - [ESET Enterprise Inspector](../../sentinel/connect-data-sources.md) | Public Preview | Not Available |
| - [Eset Security Management Center](../../sentinel/connect-data-sources.md) | Public Preview | Not Available |
Office 365 GCC is paired with Azure Active Directory (Azure AD) in Azure. Office
> Make sure to pay attention to the Azure environment to understand where [interoperability is possible](#microsoft-365-integration). In the following table, interoperability that is *not* possible is marked with a dash (-) to indicate that support is not relevant. >
-| Connector | Azure | Azure Government |
-| | -- | - |
-| **[Dynamics365](../../sentinel/connect-dynamics-365.md)** | | |
-| - Office 365 GCC |Public Preview | -|
-| - Office 365 GCC High | -|Not Available |
-| - Office 365 DoD |- | Not Available|
-| **[Microsoft 365 Defender](../../sentinel/connect-microsoft-365-defender.md)** | | |
-| - Office 365 GCC | Public Preview| -|
-| - Office 365 GCC High |- |Not Available |
-| - Office 365 DoD |- | Not Available|
-| **[Microsoft Cloud App Security (MCAS)](../../sentinel/connect-cloud-app-security.md)** | | |
-| - Office 365 GCC | GA| -|
-| - Office 365 GCC High |-|GA |
-| - Office 365 DoD |- |GA |
-| **[Microsoft Cloud App Security (MCAS)](../../sentinel/connect-cloud-app-security.md)** <br>Shadow IT logs | | |
-| - Office 365 GCC | Public Preview| -|
-| - Office 365 GCC High |-|Public Preview |
-| - Office 365 DoD |- |Public Preview |
-| **[Microsoft Cloud App Security (MCAS)](../../sentinel/connect-cloud-app-security.md)** <br>Alerts | | |
-| - Office 365 GCC | Public Preview| -|
-| - Office 365 GCC High |-|Public Preview |
-| - Office 365 DoD |- |Public Preview |
-| **[Microsoft Defender for Endpoint](../../sentinel/connect-microsoft-defender-advanced-threat-protection.md)** | | |
-| - Office 365 GCC | GA|- |
-| - Office 365 GCC High |- |Not Available |
-| - Office 365 DoD |- | Not Available|
-| **[Microsoft Defender for Identity](../../sentinel/connect-azure-atp.md)** | | |
-| - Office 365 GCC |Public Preview | -|
-| - Office 365 GCC High |- | Not Available |
-| - Office 365 DoD |- |Not Available |
-| **[Microsoft Defender for Office 365](../../sentinel/connect-office-365-advanced-threat-protection.md)** | | |
-| - Office 365 GCC |Public Preview |- |
-| - Office 365 GCC High |- |Not Available |
-| - Office 365 DoD | -|Not Available |
-| **[Office 365](../../sentinel/connect-office-365.md)** | | |
-| - Office 365 GCC | GA|- |
-| - Office 365 GCC High |- |GA |
-| - Office 365 DoD |- |GA |
-| | |
-
+| Connector | Azure | Azure Government |
+|--|--|--|
+| **[Dynamics365](../../sentinel/connect-dynamics-365.md)** | | |
+| - Office 365 GCC | Public Preview | - |
+| - Office 365 GCC High | - | Not Available |
+| - Office 365 DoD | - | Not Available |
+| **[Microsoft 365 Defender](../../sentinel/connect-microsoft-365-defender.md)** | | |
+| - Office 365 GCC | Public Preview | - |
+| - Office 365 GCC High | - | Not Available |
+| - Office 365 DoD | - | Not Available |
+| **[Microsoft Cloud App Security (MCAS)](../../sentinel/connect-cloud-app-security.md)** | | |
+| - Office 365 GCC | GA | - |
+| - Office 365 GCC High | - | GA |
+| - Office 365 DoD | - | GA |
+| **[Microsoft Cloud App Security (MCAS)](../../sentinel/connect-cloud-app-security.md)** <br>Shadow IT logs | | |
+| - Office 365 GCC | Public Preview | - |
+| - Office 365 GCC High | - | Public Preview |
+| - Office 365 DoD | - | Public Preview |
+| **[Microsoft Cloud App Security (MCAS)](../../sentinel/connect-cloud-app-security.md)** <br>Alerts | | |
+| - Office 365 GCC | Public Preview | - |
+| - Office 365 GCC High | - | Public Preview |
+| - Office 365 DoD | - | Public Preview |
+| **[Microsoft Defender for Endpoint](../../sentinel/connect-microsoft-defender-advanced-threat-protection.md)** | | |
+| - Office 365 GCC | GA | - |
+| - Office 365 GCC High | - | Not Available |
+| - Office 365 DoD | - | Not Available |
+| **[Microsoft Defender for Identity](../../sentinel/connect-azure-atp.md)** | | |
+| - Office 365 GCC | Public Preview | - |
+| - Office 365 GCC High | - | Not Available |
+| - Office 365 DoD | - | Not Available |
+| **[Microsoft Defender for Office 365](../../sentinel/connect-office-365-advanced-threat-protection.md)** | | |
+| - Office 365 GCC | Public Preview | - |
+| - Office 365 GCC High | - | Not Available |
+| - Office 365 DoD | - | Not Available |
+| **[Office 365](../../sentinel/connect-office-365.md)** | | |
+| - Office 365 GCC | GA | - |
+| - Office 365 GCC High | - | GA |
+| - Office 365 DoD | - | GA |
+| | |
+
+## Azure Defender for IoT
+
+Azure Defender for IoT lets you accelerate IoT/OT innovation with comprehensive security across all your IoT/OT devices. For end-user organizations, Azure Defender for IoT offers agentless, network-layer security that is rapidly deployed, works with diverse industrial equipment, and interoperates with Azure Sentinel and other SOC tools. It can be deployed on-premises or in Azure-connected environments.
+
+For IoT device builders, the Azure Defender for IoT security agents allow you to build security directly into your new IoT devices and Azure IoT projects. The micro agent has flexible deployment options, including deployment as a binary package or source-code integration, and it is available for standard IoT operating systems such as Linux and Azure RTOS. For more information, see the [Azure Defender for IoT product documentation](../../defender-for-iot/index.yml).
+
+The following table displays the current Azure Defender for IoT feature availability in Azure, and Azure Government.
+
+### For organizations
+
+| Feature | Azure | Azure Government |
+|--|--|--|
+| [On-premises device discovery and inventory](../../defender-for-iot/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md) | GA | GA |
+| Cloud device discovery and inventory | Private Preview | Not Available |
+| [Vulnerability management](../../defender-for-iot/how-to-create-risk-assessment-reports.md) | GA | GA |
+| [Threat detection with IoT and OT behavioral analytics](../../defender-for-iot/how-to-work-with-alerts-on-your-sensor.md) | GA | GA |
+| [Automatic Threat Intelligence Updates](../../defender-for-iot/how-to-work-with-threat-intelligence-packages.md) | GA | GA |
+| **Unify IT and OT security with SIEM, SOAR, and XDR** | | |
+| - [Forward alert information](../../defender-for-iot/how-to-forward-alert-information-to-partners.md) | GA | GA |
+| - [Configure Sentinel with Azure Defender for IoT](../../defender-for-iot/how-to-configure-with-sentinel.md) | GA | Not Available |
+| - [SOC systems](../../defender-for-iot/integration-splunk.md) | GA | GA |
+| - [Ticketing system and CMDB (Service Now)](../../defender-for-iot/integration-servicenow.md) | GA | GA |
+| - [Sensor Provisioning](../../defender-for-iot/how-to-manage-sensors-on-the-cloud.md) | GA | GA |
+
+### For device builders
+
+| Feature | Azure | Azure Government |
+|--|--|--|
+| [Micro agent for Azure RTOS](../../defender-for-iot/iot-security-azure-rtos.md) | GA | GA |
+| - [Configure Sentinel with Azure Defender for IoT](../../defender-for-iot/how-to-configure-with-sentinel.md) | GA | Not Available |
+| **Standalone micro agent for Linux** | | |
+| - [Standalone micro agent overview](../../defender-for-iot/concept-standalone-micro-agent-overview.md) | Public Preview | Public Preview |
+| - [Standalone agent binary installation](../../defender-for-iot/quickstart-standalone-agent-binary-installation.md) | Public Preview | Public Preview |
## Next steps
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/steps-secure-identity.md
Using the assume breach mentality, you should reduce the impact of compromised u
It's important to understand the various [Azure AD application consent experiences](../../active-directory/develop/application-consent-experience.md), the [types of permissions and consent](../../active-directory/develop/v2-permissions-and-consent.md), and their implications on your organization's security posture. By default, all users in Azure AD can grant applications that leverage the Microsoft identity platform to access your organization's data. While allowing users to consent by themselves does allow users to easily acquire useful applications that integrate with Microsoft 365, Azure and other services, it can represent a risk if not used and monitored carefully.
-Microsoft recommends restricting user consent to help reduce your surface area and mitigate this risk. You may also use [app consent policies (preview)](../../active-directory/manage-apps/configure-user-consent.md) to restrict end-user consent to only verified publishers and only for permissions you select. If end-user consent is restricted, previous consent grants will still be honored but all future consent operations must be performed by an administrator. For restricted cases, admin consent can be requested by users through an integrated [admin consent request workflow](../../active-directory/manage-apps/configure-admin-consent-workflow.md) or through your own support processes. Before restricting end-user consent, use our [recommendations](../../active-directory/manage-apps/manage-consent-requests.md) to plan this change in your organization. For applications you wish to allow all users to access, consider [granting consent on behalf of all users](../../active-directory/develop/v2-admin-consent.md), making sure users who have not yet consented individually will be able to access the app. If you do not want these applications to be available to all users in all scenarios, use [application assignment](../../active-directory/manage-apps/assign-user-or-group-access-portal.md) and Conditional Access to restrict user access to [specific apps](../../active-directory/conditional-access/concept-conditional-access-cloud-apps.md).
+Microsoft recommends [restricting user consent](../../active-directory/manage-apps/configure-user-consent.md) to allow end-user consent only for apps from verified publishers and only for permissions you select. If end-user consent is restricted, previous consent grants will still be honored but all future consent operations must be performed by an administrator. For restricted cases, admin consent can be requested by users through an integrated [admin consent request workflow](../../active-directory/manage-apps/configure-admin-consent-workflow.md) or through your own support processes. Before restricting end-user consent, use our [recommendations](../../active-directory/manage-apps/manage-consent-requests.md) to plan this change in your organization. For applications you wish to allow all users to access, consider [granting consent on behalf of all users](../../active-directory/develop/v2-admin-consent.md), making sure users who have not yet consented individually will be able to access the app. If you do not want these applications to be available to all users in all scenarios, use [application assignment](../../active-directory/manage-apps/assign-user-or-group-access-portal.md) and Conditional Access to restrict user access to [specific apps](../../active-directory/conditional-access/concept-conditional-access-cloud-apps.md).
Make sure users can request admin approval for new applications to reduce user friction, minimize support volume, and prevent users from signing up for applications using non-Azure AD credentials. Once you regulate your consent operations, administrators should audit app and consented permissions on a regular basis.
sentinel Connect Azure Security Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-security-center.md
Title: Connect Azure Defender data to Azure Sentinel
+ Title: Connect Azure Defender alerts to Azure Sentinel
description: Learn how to connect Azure Defender alerts from Azure Security Center and stream them into Azure Sentinel.
ms.assetid: d28c2264-2dce-42e1-b096-b5a234ff858a
Previously updated : 09/07/2020 Last updated : 07/08/2021
-# Connect Azure Defender alert data from Azure Security Center
+# Connect Azure Defender alerts from Azure Security Center
-Use the Azure Defender alert connector to ingest Azure Defender alerts from [Azure Security Center](../security-center/security-center-introduction.md) and stream them into Azure Sentinel.
+## Background
+
+[Azure Defender](../security-center/azure-defender.md), the integrated cloud workload protection platform (CWPP) of [Azure Security Center](../security-center/security-center-introduction.md), is a security management tool that allows you to detect and quickly respond to threats across hybrid cloud workloads.
+
+This connector allows you to stream your Azure Defender security alerts from Azure Security Center into Azure Sentinel, so you can view, analyze, and respond to Defender alerts, and the incidents they generate, in a broader organizational threat context.
+
+As Azure Defender itself is enabled per subscription, the Azure Defender connector too is enabled or disabled separately for each subscription.
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+### Alert synchronization
+
+- When you connect Azure Defender to Azure Sentinel, the status of Azure Defender alerts that get ingested into Azure Sentinel is synchronized between the two services. So, for example, when an alert is closed in Azure Defender, that alert will display as closed in Azure Sentinel as well.
+
+- Changing the status of an alert in Azure Defender will *not* affect the status of any Azure Sentinel **incidents** that contain the Azure Sentinel alert, only that of the alert itself.
+
+### Bi-directional alert synchronization
+
+> [!IMPORTANT]
+>
+> - The **bi-directional alert synchronization** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+- Enabling **bi-directional sync** will automatically sync the status of original Azure Defender alerts with that of the Azure Sentinel incidents that contain those alerts. So, for example, when an Azure Sentinel incident containing an Azure Defender alert is closed, the corresponding original alert will be closed in Azure Defender automatically.
## Prerequisites

-- Your user must have the Security Reader role in the subscription of the logs you stream.
+- You must have read and write permissions on your Azure Sentinel workspace.
+
+- You must have the Security Reader role in the subscriptions of the logs you stream.
+
+- You will need to enable at least one **Azure Defender** plan within Azure Security Center for each subscription for which you want to enable the connector. To enable Azure Defender plans on a subscription, you must have the **Security Admin** role for that subscription. (A CLI sketch follows this list.)
-- You will need to enable Azure Defender within Azure Security Center. (Standard tier no longer exists, and is no longer a license requirement.)
+- To enable bi-directional sync, you must have the **Contributor** or **Security Admin** role on the relevant subscription.
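
The following is a minimal, hedged sketch of how an Azure Defender plan could be enabled from the command line instead of the portal. It assumes the `az security` commands are available in your Azure CLI version and uses the **VirtualMachines** plan purely as an example; substitute the plan and subscription that apply to you.

```azurecli-interactive
# Target the subscription whose alerts you want to stream into Azure Sentinel.
az account set --subscription "<subscription-id>"

# Enable the Azure Defender (Standard tier) plan for servers as an example;
# repeat for any other resource types you want covered.
az security pricing create --name VirtualMachines --tier Standard

# Confirm the plan is now set to the Standard (Azure Defender) tier.
az security pricing show --name VirtualMachines
```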
## Connect to Azure Defender

1. In Azure Sentinel, select **Data connectors** from the navigation menu.
-1. From the data connectors gallery, select **Azure Defender alerts from ASC** (may still be called Azure Security Center), and click the **Open connector page** button.
+1. From the data connectors gallery, select **Azure Defender**, and click **Open connector page** in the details pane.
+
+1. Under **Configuration**, you will see a list of the subscriptions in your tenant, and the status of their connection to Azure Defender. Select the **Status** toggle next to each subscription whose alerts you want to stream into Azure Sentinel. If you want to connect several subscriptions at once, you can do this by marking the check boxes next to the relevant subscriptions and then selecting the **Connect** button on the bar above the list.
+
+ > [!NOTE]
+ > - The check boxes and **Connect** toggles will be active only on the subscriptions for which you have the required permissions.
+ > - The **Connect** button will be active only if at least one subscription's check box has been marked.
+
+1. To enable bi-directional sync on a subscription, locate the subscription in the list, and choose **Enabled** from the drop-down list in the **Bi-directional sync (Preview)** column. To enable bi-directional sync on several subscriptions at once, mark their check boxes and select the **Enable bi-directional sync** button on the bar above the list.
+
+ > [!NOTE]
+ > - The check boxes and drop-down lists will be active only on the subscriptions for which you have the [required permissions](#prerequisites).
+ > - The **Enable bi-directional sync** button will be active only if at least one subscription's check box has been marked.
+
+1. In the **Azure Defender plans** column of the list, you can see whether Azure Defender plans are enabled on your subscription (a prerequisite for enabling the connector). The value for each subscription in this column is either blank (meaning no Defender plans are enabled), "All enabled," or "Some enabled." Subscriptions marked "Some enabled" also have an **Enable all** link that takes you to the Azure Defender configuration dashboard for that subscription, where you can choose which Defender plans to enable. The **Enable Azure Defender for all subscriptions** link button on the bar above the list takes you to the Azure Defender Getting Started page, where you can choose the subscriptions on which to enable Azure Defender.
+
+ :::image type="content" source="./media/connect-azure-security-center/azure-defender-config.png" alt-text="Screen shot of Azure Defender connector configuration":::
+
+1. You can select whether you want the alerts from Azure Defender to automatically generate incidents in Azure Sentinel. Under **Create incidents**, select **Enabled** to turn on the default analytics rule that automatically [creates incidents from alerts](create-incidents-from-alerts.md). You can then edit this rule under **Analytics**, in the **Active rules** tab.
+
+## Find and analyze your data
+
+> [!NOTE]
+> Alert synchronization *in both directions* can take a few minutes. Changes in the status of alerts might not be displayed immediately.
+
+- Azure Defender alerts are stored in the *SecurityAlert* table in your Log Analytics workspace.
-1. Under **Configuration**, click **Connect** next to each subscription whose alerts you want to stream into Azure Sentinel. The Connect button will be available only if you have the required permissions.
+- To query Azure Defender alerts in Log Analytics, copy the following into your query window as a starting point (a command-line variant is sketched after this list):
-1. You can select whether you want the alerts from Azure Defender to automatically generate incidents in Azure Sentinel. Under **Create incidents**, select **Enabled** to turn on the default analytics rule that automatically creates incidents from alerts. You can then edit this rule under **Analytics**, in the **Active rules** tab.
+ ```kusto
+ SecurityAlert
+ | where ProductName == "Azure Security Center"
+ ```
-1. To use the relevant schema in Log Analytics for the Azure Defender alerts, search for **SecurityAlert**.
+- See the **Next steps** tab in the connector page for additional useful sample queries, analytics rule templates, and recommended workbooks.
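
As a hedged sketch, the same starting-point query could also be run from a shell. This assumes the Azure CLI `log-analytics` extension is installed (`az extension add --name log-analytics`) and that you know the workspace (customer) ID of your Azure Sentinel workspace; adjust the KQL to match the filtering you need.

```azurecli-interactive
# Run the SecurityAlert starting-point query against your Sentinel workspace.
# <workspace-customer-id> is the Log Analytics workspace ID (a GUID), not its name.
az monitor log-analytics query \
  --workspace "<workspace-customer-id>" \
  --analytics-query 'SecurityAlert | where ProductName == "Azure Security Center" | take 10' \
  --timespan P1D
```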
## Next steps
-In this document, you learned how to connect Azure Defender to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+In this document, you learned how to connect Azure Defender to Azure Sentinel and synchronize alerts between them. To learn more about Azure Sentinel, see the following articles:
-- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md).
- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
+- Write your own rules to [detect threats](tutorial-detect-threats-custom.md).
sentinel Create Incidents From Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/create-incidents-from-alerts.md
# Automatically create incidents from Microsoft security alerts
-Alerts triggered in Microsoft security solutions that are connected to Azure Sentinel, such as Microsoft Cloud App Security and Microsoft Defender for Identity (formerly Azure ATP), do not
-automatically create incidents in Azure Sentinel. By default, when you connect a Microsoft solution to Azure Sentinel, any alert generated in that service will
-be stored as raw data in Azure Sentinel, in the Security Alert table in your Azure Sentinel workspace. You can then use that data like any other raw data you
-connect into Azure Sentinel.
+Alerts triggered in Microsoft security solutions that are connected to Azure Sentinel, such as Microsoft Cloud App Security and Microsoft Defender for Identity (formerly Azure ATP), do not automatically create incidents in Azure Sentinel. By default, when you connect a Microsoft solution to Azure Sentinel, any alert generated in that service will be stored as raw data in Azure Sentinel, in the Security Alert table in your Azure Sentinel workspace. You can then use that data like any other raw data you connect into Azure Sentinel.
-You can easily configure Azure Sentinel to automatically create incidents every time an alert is triggered in a connected Microsoft security solution, by following the
-instructions in this article.
+You can easily configure Azure Sentinel to automatically create incidents every time an alert is triggered in a connected Microsoft security solution, by following the instructions in this article.
## Prerequisites

You must [connect Microsoft security solutions](connect-data-sources.md#data-connection-methods) to enable incident creation from security service alerts.

## Using Microsoft Security incident creation analytics rules
Use the built-in rules available in Azure Sentinel to choose which connected Mic
1. You can modify the rule details, and choose to filter the alerts that will create incidents by alert severity or by text contained in the alertΓÇÖs name.
- For example, if you choose **Azure Defender** (may still be called *Azure Security Center*) in the **Microsoft security service** field and choose **High** in the **Filter by severity** field,
- only high severity Azure Defender alerts will automatically create incidents in Azure Sentinel.
+ For example, if you choose **Azure Defender** (may still be called *Azure Security Center*) in the **Microsoft security service** field and choose **High** in the **Filter by severity** field, only high severity Azure Defender alerts will automatically create incidents in Azure Sentinel.
![Create rule wizard](media/incidents-from-alerts/create-rule-wizard.png)
-1. You can also create a new **Microsoft security** rule that filters alerts from different Microsoft security services by clicking on **+Create** and
- selecting **Microsoft incident creation rule**.
+1. You can also create a new **Microsoft security** rule that filters alerts from different Microsoft security services by clicking on **+Create** and selecting **Microsoft incident creation rule**.
![Incident creation rule](media/incidents-from-alerts/incident-creation-rule.png)
- You can create more than one **Microsoft Security** analytics rule per **Microsoft security service** type. This does not create duplicate incidents, since each rule
-is used as a filter. Even if an alert matches more than one **Microsoft Security** analytics rule, it creates just one Azure Sentinel incident.
+ You can create more than one **Microsoft Security** analytics rule per **Microsoft security service** type. This does not create duplicate incidents, since each rule is used as a filter. Even if an alert matches more than one **Microsoft Security** analytics rule, it creates just one Azure Sentinel incident.
## Enable incident generation automatically during connection
- When you connect a Microsoft security solution, you can select whether you want the alerts from the security solution to automatically generate incidents in Azure Sentinel automatically.
+
+When you connect a Microsoft security solution, you can select whether you want the alerts from that solution to generate incidents in Azure Sentinel automatically.
1. Connect a Microsoft security solution data source.
is used as a filter. Even if an alert matches more than one **Microsoft Security
## Next steps

- To get started with Azure Sentinel, you need a subscription to Microsoft Azure. If you do not have a subscription, you can sign up for a [free trial](https://azure.microsoft.com/free/).
-- Learn how to [onboard your data to Azure Sentinel](quickstart-onboard.md), and [get visibility into your data, and potential threats](quickstart-get-visibility.md).
+- Learn how to [onboard your data to Azure Sentinel](quickstart-onboard.md), and [get visibility into your data and potential threats](quickstart-get-visibility.md).
sentinel Quickstart Onboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/quickstart-onboard.md
After you connect your data sources, choose from a gallery of expertly created w
| Workspace geography/region | Azure Sentinel-generated data geography/region |
| --- | --- |
- | United States<br>India<br>Brazil<br>Africa<br>Korea<br>United Arab Emirates | United States |
- | Europe<br>France<br>Switzerland | Europe |
+ | United States<br>India<br>Africa | United States |
+ | Europe<br>France | Europe |
| Australia | Australia |
| United Kingdom | United Kingdom |
| Canada | Canada |
| Japan | Japan |
| Southeast Asia (Singapore) | Southeast Asia (Singapore)* |
+ | Brazil | Brazil |
+ | Norway | Norway |
+ | South Africa | South Africa |
+ | Korea | Korea |
+ | Germany | Germany |
+ | United Arab Emirates | United Arab Emirates |
+ | Switzerland | Switzerland |
\* There is no paired region for Southeast Asia.
spring-cloud How To Access Data Plane Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-access-data-plane-azure-ad-rbac.md
After the Azure Spring Cloud Data Reader role is assigned, customers can access
>[!NOTE] > If you are using Azure China, please replace `*.azuremicroservices.io` with `*.microservices.azure.cn`, [learn more](/azure/china/resources-developer-guide#check-endpoints-in-azure).
-3. Access the composed endpoint with the access token. Put the access token in a header to provide authorization. Only the "GET" method is supported.
+3. Access the composed endpoint with the access token. Put the access token in a header to provide authorization: `--header 'Authorization: Bearer {TOKEN_FROM_PREVIOUS_STEP}`. Only the "GET" method is supported.
For example, access an endpoint like *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/health'* to see the health status of eureka.
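
The following is a minimal sketch of that call using curl. The service name and the token placeholder are assumptions to be replaced with your own values; the token is the one acquired in the previous step.

```bash
# GET the Eureka health endpoint, passing the access token in the Authorization header.
curl --request GET \
  --header "Authorization: Bearer {TOKEN_FROM_PREVIOUS_STEP}" \
  "https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/health"
```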
storage Data Lake Storage Supported Blob Storage Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-supported-blob-storage-features.md
The following table shows how each Blob storage feature is supported with Data L
|Snapshots|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|[Blob snapshots](snapshots-overview.md)|
|Static websites|Generally Available<div role="complementary" aria-labelledby="preview-form"></div>|Generally Available<div role="complementary" aria-labelledby="preview-form"></div>|[Static website hosting in Azure Storage](storage-blob-static-website.md)|
|Immutable storage|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|[Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md)|
-|Container soft delete|Preview|Preview|[Soft delete for containers (preview)](soft-delete-container-overview.md)|
+|Container soft delete|Preview|Preview|[Soft delete for containers](soft-delete-container-overview.md)|
|Azure Storage inventory|Preview|Preview|[Use Azure Storage inventory to manage blob data (preview)](blob-inventory.md)|
|Custom domains|Preview<div role="complementary" aria-labelledby="preview-form-2"><sup>2</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form-2"><sup>2</sup></div>|[Map a custom domain to an Azure Blob storage endpoint](storage-custom-domain-name.md)|
|Blob soft delete|Preview|Preview|[Soft delete for blobs](./soft-delete-blob-overview.md)|
storage Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-protection-overview.md
Previously updated : 04/09/2021 Last updated : 05/10/2021
The following table summarizes the options available in Azure Storage for common
|--|--|--|--|--|
| Prevent a storage account from being deleted or modified. | Azure Resource Manager lock<br />[Learn more...](../common/lock-account-resource.md) | Lock all of your storage accounts with an Azure Resource Manager lock to prevent deletion of the storage account. | Protects the storage account against deletion or configuration changes.<br /><br />Does not protect containers or blobs in the account from being deleted or overwritten. | Yes |
| Prevent a container and its blobs from being deleted or modified for an interval that you control. | Immutability policy on a container<br />[Learn more...](storage-blob-immutable-storage.md) | Set an immutability policy on a container to protect business-critical documents, for example, in order to meet legal or regulatory compliance requirements. | Protects a container and its blobs from all deletes and overwrites.<br /><br />When a legal hold or a locked time-based retention policy is in effect, the storage account is also protected from deletion. Containers for which no immutability policy has been set are not protected from deletion. | Yes, in preview |
-| Restore a deleted container within a specified interval. | Container soft delete (preview)<br />[Learn more...](soft-delete-container-overview.md) | Enable container soft delete for all storage accounts, with a minimum retention interval of 7 days.<br /><br />Enable blob versioning and blob soft delete together with container soft delete to protect individual blobs in a container.<br /><br />Store containers that require different retention periods in separate storage accounts. | A deleted container and its contents may be restored within the retention period.<br /><br />Only container-level operations (e.g., [Delete Container](/rest/api/storageservices/delete-container)) can be restored. Container soft delete does not enable you to restore an individual blob in the container if that blob is deleted. | Yes, in preview |
+| Restore a deleted container within a specified interval. | Container soft delete<br />[Learn more...](soft-delete-container-overview.md) | Enable container soft delete for all storage accounts, with a minimum retention interval of 7 days.<br /><br />Enable blob versioning and blob soft delete together with container soft delete to protect individual blobs in a container.<br /><br />Store containers that require different retention periods in separate storage accounts. | A deleted container and its contents may be restored within the retention period.<br /><br />Only container-level operations (e.g., [Delete Container](/rest/api/storageservices/delete-container)) can be restored. Container soft delete does not enable you to restore an individual blob in the container if that blob is deleted. | Yes, in preview |
| Automatically save the state of a blob in a previous version when it is overwritten. | Blob versioning<br />[Learn more...](versioning-overview.md) | Enable blob versioning, together with container soft delete and blob soft delete, for storage accounts where you need optimal protection for blob data.<br /><br />Store blob data that does not require versioning in a separate account to limit costs. | Every blob write operation creates a new version. The current version of a blob may be restored from a previous version if the current version is deleted or overwritten. | No |
| Restore a deleted blob or blob version within a specified interval. | Blob soft delete<br />[Learn more...](soft-delete-blob-overview.md) | Enable blob soft delete for all storage accounts, with a minimum retention interval of 7 days.<br /><br />Enable blob versioning and container soft delete together with blob soft delete for optimal protection of blob data.<br /><br />Store blobs that require different retention periods in separate storage accounts. | A deleted blob or blob version may be restored within the retention period. | No |
| Restore a set of block blobs to a previous point in time. | Point-in-time restore<br />[Learn more...](point-in-time-restore-overview.md) | To use point-in-time restore to revert to an earlier state, design your application to delete individual block blobs rather than deleting containers. | A set of block blobs may be reverted to their state at a specific point in the past.<br /><br />Only operations performed on block blobs are reverted. Any operations performed on containers, page blobs, or append blobs are not reverted. | No |
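
To make the blob versioning and blob soft delete rows above concrete, here is a hedged Azure CLI sketch that enables both on a storage account with a seven-day retention period. The account and resource group names are placeholders; the parameter names are assumed to match the `az storage account blob-service-properties update` command in a recent CLI version.

```azurecli-interactive
# Enable blob versioning and blob soft delete (7-day retention) on one account.
az storage account blob-service-properties update \
  --account-name <storage-account> \
  --resource-group <resource-group> \
  --enable-versioning true \
  --enable-delete-retention true \
  --delete-retention-days 7
```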
storage Point In Time Restore Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/point-in-time-restore-manage.md
You can use point-in-time restore to restore one or more sets of block blobs to
To learn more about point-in-time restore, see [Point-in-time restore for block blobs](point-in-time-restore-overview.md).

> [!CAUTION]
-> Point-in-time restore supports restoring operations on block blobs only. Operations on containers cannot be restored. If you delete a container from the storage account by calling the [Delete Container](/rest/api/storageservices/delete-container) operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers and blobs to protect against accidental deletion. For more information, see [Soft delete for containers (preview)](soft-delete-container-overview.md) and [Soft delete for blobs](soft-delete-blob-overview.md).
+> Point-in-time restore supports restoring operations on block blobs only. Operations on containers cannot be restored. If you delete a container from the storage account by calling the [Delete Container](/rest/api/storageservices/delete-container) operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers and blobs to protect against accidental deletion. For more information, see [Soft delete for containers](soft-delete-container-overview.md) and [Soft delete for blobs](soft-delete-blob-overview.md).
## Enable and configure point-in-time restore
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/point-in-time-restore-overview.md
To initiate a restore operation, a client must have write permissions to all con
Point-in-time restore for block blobs has the following limitations and known issues:

- Only block blobs in a standard general-purpose v2 storage account can be restored as part of a point-in-time restore operation. Append blobs, page blobs, and premium block blobs are not restored.
-- If you have deleted a container during the retention period, that container will not be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. To learn about protecting containers from deletion, see [Soft delete for containers (preview)](soft-delete-container-overview.md).
+- If you have deleted a container during the retention period, that container will not be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. To learn about protecting containers from deletion, see [Soft delete for containers](soft-delete-container-overview.md).
- If a blob has moved between the hot and cool tiers in the period between the present moment and the restore point, the blob is restored to its previous tier. Restoring block blobs in the archive tier is not supported. For example, if a blob in the hot tier was moved to the archive tier two days ago, and a restore operation restores to a point three days ago, the blob is not restored to the hot tier. To restore an archived blob, first move it out of the archive tier. For more information, see [Rehydrate blob data from the archive tier](storage-blob-rehydration.md). - If an immutability policy is configured, then a restore operation can be initiated, but any blobs that are protected by the immutability policy will not be modified. A restore operation in this case will not result in the restoration of a consistent state to the date and time given. - A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), is not part of a blob and so is not restored as part of a restore operation.
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/security-recommendations.md
Azure Security Center periodically analyzes the security state of your Azure res
| Use the Azure Resource Manager deployment model | Create new storage accounts using the Azure Resource Manager deployment model for important security enhancements, including superior Azure role-based access control (Azure RBAC) and auditing, Resource Manager-based deployment and governance, access to managed identities, access to Azure Key Vault for secrets, and Azure AD-based authentication and authorization for access to Azure Storage data and resources. If possible, migrate existing storage accounts that use the classic deployment model to use Azure Resource Manager. For more information about Azure Resource Manager, see [Azure Resource Manager overview](../../azure-resource-manager/management/overview.md). | - |
| Enable Azure Defender for all of your storage accounts | Azure Defender for Azure Storage provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit storage accounts. Security alerts are triggered in Azure Security Center when anomalies in activity occur and are also sent via email to subscription administrators, with details of suspicious activity and recommendations on how to investigate and remediate threats. For more information, see [Configure Azure Defender for Azure Storage](../common/azure-defender-storage-configure.md). | [Yes](../../security-center/security-center-remediate-recommendations.md) |
| Turn on soft delete for blobs | Soft delete for blobs enables you to recover blob data after it has been deleted. For more information on soft delete for blobs, see [Soft delete for Azure Storage blobs](./soft-delete-blob-overview.md). | - |
-| Turn on soft delete for containers | Soft delete for containers enables you to recover a container after it has been deleted. For more information on soft delete for containers, see [Soft delete for containers (preview)](./soft-delete-container-overview.md). | - |
+| Turn on soft delete for containers | Soft delete for containers enables you to recover a container after it has been deleted. For more information on soft delete for containers, see [Soft delete for containers](./soft-delete-container-overview.md). | - |
| Lock storage account to prevent accidental or malicious deletion or configuration changes | Apply an Azure Resource Manager lock to your storage account to protect the account from accidental or malicious deletion or configuration change. Locking a storage account does not prevent data within that account from being deleted. It only prevents the account itself from being deleted. For more information, see [Apply an Azure Resource Manager lock to a storage account](../common/lock-account-resource.md). | - |
| Store business-critical data in immutable blobs | Configure legal holds and time-based retention policies to store blob data in a WORM (Write Once, Read Many) state. Blobs stored immutably can be read, but cannot be modified or deleted for the duration of the retention interval. For more information, see [Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md). | - |
| Require secure transfer (HTTPS) to the storage account | When you require secure transfer for a storage account, all requests to the storage account must be made over HTTPS. Any requests made over HTTP are rejected. Microsoft recommends that you always require secure transfer for all of your storage accounts. For more information, see [Require secure transfer to ensure secure connections](../common/storage-require-secure-transfer.md). | - |
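
To illustrate the Resource Manager lock recommendation above, here is a hedged Azure CLI sketch that places a **CanNotDelete** lock on a storage account. The lock, account, and resource group names are placeholders; the lock prevents deletion of the account itself, not of the data inside it.

```azurecli-interactive
# Apply a delete lock to a storage account so the account cannot be deleted
# until the lock is removed.
az lock create \
  --name "lock-storage-account" \
  --lock-type CanNotDelete \
  --resource-group <resource-group> \
  --resource-name <storage-account> \
  --resource-type Microsoft.Storage/storageAccounts
```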
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-enable.md
az storage account blob-service-properties show --account-name <storage-account>
Blob soft delete can also protect blobs and directories in accounts that have the hierarchical namespace feature enabled on them.

> [!IMPORTANT]
-> Soft delete in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW, , and is available globally in all Azure regions.
+> Soft delete in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW, and is available globally in all Azure regions.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > >
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-overview.md
Blob soft delete protects an individual blob, snapshot, or version from accidental deletes or overwrites by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore a soft-deleted object to its state at the time it was deleted. After the retention period has expired, the object is permanently deleted.

> [!IMPORTANT]
-> Soft delete in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW, , and is available globally in all Azure regions.
+> Soft delete in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW, and is available globally in all Azure regions.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > >
storage Soft Delete Container Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-container-enable.md
Title: Enable and manage soft delete for containers (preview)
+ Title: Enable and manage soft delete for containers
-description: Enable container soft delete (preview) to more easily recover your data when it is erroneously modified or deleted.
+description: Enable container soft delete to more easily recover your data when it is erroneously modified or deleted.
Previously updated : 03/05/2021 Last updated : 07/06/2021
-# Enable and manage soft delete for containers (preview)
+# Enable and manage soft delete for containers
-Container soft delete (preview) protects your data from being accidentally or erroneously modified or deleted. When container soft delete is enabled for a storage account, a container and its contents may be recovered after it has been deleted, within a retention period that you specify.
+Container soft delete protects your data from being accidentally or erroneously modified or deleted. When container soft delete is enabled for a storage account, a container and its contents may be recovered after it has been deleted, within a retention period that you specify. For more details about container soft delete, see [Soft delete for containers](soft-delete-container-overview.md).
-If there is a possibility that your data may accidentally be modified or deleted by an application or another storage account user, Microsoft recommends turning on container soft delete. This article shows how to enable soft delete for containers. For more details about container soft delete, including how to register for the preview, see [Soft delete for containers (preview)](soft-delete-container-overview.md).
-
-For end-to-end data protection, Microsoft recommends that you also enable soft delete for blobs and Blob versioning. To learn how to also enable soft delete for blobs, see [Enable and manage soft delete for blobs](soft-delete-blob-enable.md). To learn how to enable blob versioning, see [Blob versioning](versioning-overview.md).
-
-> [!IMPORTANT]
->
-> Container soft delete is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+For end-to-end data protection, Microsoft recommends that you also enable soft delete for blobs and blob versioning. To learn how to also enable soft delete for blobs, see [Enable and manage soft delete for blobs](soft-delete-blob-enable.md). To learn how to enable blob versioning, see [Blob versioning](versioning-overview.md).
## Enable container soft delete
-You can enable or disable container soft delete for the storage account at any time by using either the Azure portal or an Azure Resource Manager template.
+You can enable or disable container soft delete for the storage account at any time by using the Azure portal, PowerShell, Azure CLI, or an Azure Resource Manager template. Microsoft recommends setting the retention period for container soft delete to a minimum of seven days.
# [Portal](#tab/azure-portal)

To enable container soft delete for your storage account by using Azure portal, follow these steps:

1. In the [Azure portal](https://portal.azure.com/), navigate to your storage account.
-1. Locate the **Data Protection** settings under **Blob service**.
-1. Set the **Container soft delete** property to *Enabled*.
-1. Under **Retention policies**, specify how long soft-deleted containers are retained by Azure Storage.
+1. Locate the **Data protection** settings under **Data management**.
+1. Select **Enable soft delete for containers**.
+1. Specify a retention period between 1 and 365 days.
1. Save your changes.
+ :::image type="content" source="media/soft-delete-container-enable/soft-delete-container-portal-configure.png" alt-text="Screenshot showing how to enable container soft delete in Azure portal":::
+
+# [PowerShell](#tab/powershell)
+
+To enable container soft delete with PowerShell, first install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 3.9.0 or later. Next, call the **Enable-AzStorageContainerDeleteRetentionPolicy** command and specify the number of days for the retention period. Remember to replace the values in angle brackets with your own values:
+
+```azurepowershell-interactive
+Enable-AzStorageContainerDeleteRetentionPolicy -ResourceGroupName <resource-group> `
+ -StorageAccountName <storage-account> `
+ -RetentionDays 7
+```
+
+To disable container soft delete, call the **Disable-AzStorageContainerDeleteRetentionPolicy** command.
+
+# [Azure CLI](#tab/azure-cli)
+
+To enable container soft delete with Azure CLI, first install Azure CLI, version 2.26.0 or later. Next, call the [az storage account blob-service-properties update](/cli/azure/storage/account/blob-service-properties#az_storage_account_blob_service_properties_update) command and specify the number of days for the retention period. Remember to replace the values in angle brackets with your own values:
+
+```azurecli-interactive
+az storage account blob-service-properties update \
+ --enable-container-delete-retention true \
+ --container-delete-retention-days 7 \
+ --account-name <storage-account> \
+ --resource-group <resource_group>
+```
+
+To disable container soft delete, specify `false` for the `--enable-container-delete-retention` parameter.
# [Template](#tab/template)
To enable container soft delete with an Azure Resource Manager template, create
} ```

1. Specify the retention period. The default value is 7.
1. Save the template.
1. Specify the resource group of the account, and then choose the **Review + create** button to deploy the template and enable container soft delete.

## View soft-deleted containers

When soft delete is enabled, you can view soft-deleted containers in the Azure portal. Soft-deleted containers are visible during the specified retention period. After the retention period expires, a soft-deleted container is permanently deleted and is no longer visible.
You can restore a soft-deleted container and its contents within the retention p
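
As a hedged sketch of the same workflow outside the portal, a recent Azure CLI can list soft-deleted containers and restore one of them. The account name, container name, and version value are placeholders, and the `--include-deleted` and `az storage container restore` options are assumed to be available in your CLI version.

```azurecli-interactive
# List containers, including soft-deleted ones, to find the deleted version ID.
az storage container list \
  --account-name <storage-account> \
  --include-deleted \
  --auth-mode login

# Restore a soft-deleted container using the version value returned above.
az storage container restore \
  --account-name <storage-account> \
  --name <container-name> \
  --deleted-version <version-id> \
  --auth-mode login
```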
## Next steps

-- [Soft delete for containers (preview)](soft-delete-container-overview.md)
+- [Soft delete for containers](soft-delete-container-overview.md)
- [Soft delete for blobs](soft-delete-blob-overview.md)
- [Blob versioning](versioning-overview.md)
storage Soft Delete Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-container-overview.md
Title: Soft delete for containers (preview)
+ Title: Soft delete for containers
-description: Soft delete for containers (preview) protects your data so that you can more easily recover your data when it is erroneously modified or deleted by an application or by another storage account user.
+description: Soft delete for containers protects your data so that you can more easily recover your data when it is erroneously modified or deleted by an application or by another storage account user.
Previously updated : 03/05/2021 Last updated : 07/06/2021
-# Soft delete for containers (preview)
+# Soft delete for containers
-Soft delete for containers (preview) protects your data from being accidentally or maliciously deleted. When container soft delete is enabled for a storage account, a deleted container and its contents are retained in Azure Storage for the period that you specify. During the retention period, you can restore previously deleted containers. Restoring a container restores any blobs within that container when it was deleted.
+Container soft delete protects your data from being accidentally deleted by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore a soft-deleted container and its contents to the container's state at the time it was deleted. After the retention period has expired, the container and its contents are permanently deleted.
-For end to end protection for your blob data, Microsoft recommends enabling the following data protection features:
+## Recommended data protection configuration
+
+Blob soft delete is part of a comprehensive data protection strategy for blob data. For optimal protection for your blob data, Microsoft recommends enabling all of the following data protection features:
- Container soft delete, to restore a container that has been deleted. To learn how to enable container soft delete, see [Enable and manage soft delete for containers](soft-delete-container-enable.md).
- Blob versioning, to automatically maintain previous versions of a blob. When blob versioning is enabled, you can restore an earlier version of a blob to recover your data if it is erroneously modified or deleted. To learn how to enable blob versioning, see [Enable and manage blob versioning](versioning-enable.md).
-- Blob soft delete, to restore a blob or version that has been deleted. To learn how to enable blob soft delete, see [Enable and manage soft delete for blobs](soft-delete-blob-enable.md).
+- Blob soft delete, to restore a blob, snapshot, or version that has been deleted. To learn how to enable blob soft delete, see [Enable and manage soft delete for blobs](soft-delete-blob-enable.md).
-> [!IMPORTANT]
-> Container soft delete is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
## How container soft delete works

When you enable container soft delete, you can specify a retention period for deleted containers that is between 1 and 365 days. The default retention period is 7 days. During the retention period, you can recover a deleted container by calling the **Restore Container** operation.
-When you restore a container, the container's blobs and any blob versions are also restored. However, you can only use container soft delete to restore blobs if the container itself was deleted. To a restore a deleted blob when its parent container has not been deleted, you must use blob soft delete or blob versioning.
+When you restore a container, the container's blobs and any blob versions and snapshots are also restored. However, you can only use container soft delete to restore blobs if the container itself was deleted. To restore a deleted blob when its parent container has not been deleted, you must use blob soft delete or blob versioning.
> [!WARNING]
> Container soft delete can restore only whole containers and their contents at the time of deletion. You cannot restore a deleted blob within a container by using container soft delete. Microsoft recommends also enabling blob soft delete and blob versioning to protect individual blobs in a container.
+>
+> When you restore a container, you must restore it to its original name. If the original name has been used to create a new container, then you will not be able to restore the soft-deleted container.
The following diagram shows how a deleted container can be restored when container soft delete is enabled:

:::image type="content" source="media/soft-delete-container-overview/container-soft-delete-diagram.png" alt-text="Diagram showing how a soft-deleted container may be restored":::
-When you restore a container, you can restore it to its original name if that name has not been reused. If the original container name has been used, then you can restore the container with a new name.
- After the retention period has expired, the container is permanently deleted from Azure Storage and cannot be recovered. The clock starts on the retention period at the point that the container is deleted. You can change the retention period at any time, but keep in mind that an updated retention period applies only to newly deleted containers. Previously deleted containers will be permanently deleted based on the retention period that was in effect at the time that the container was deleted. Disabling container soft delete does not result in permanent deletion of containers that were previously soft-deleted. Any soft-deleted containers will be permanently deleted at the expiration of the retention period that was in effect at the time that the container was deleted.
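
As an illustrative sketch only, you might list soft-deleted containers and then restore one with Azure CLI as shown below; the container name and version ID are placeholder values, and the commands assume a CLI version that supports container soft delete.

```azurecli
# List containers, including soft-deleted ones, to find the deleted version ID.
az storage container list \
    --account-name mystorageaccount \
    --include-deleted \
    --auth-mode login

# Restore a soft-deleted container under its original name.
az storage container restore \
    --account-name mystorageaccount \
    --name sample-container \
    --deleted-version 01D64EB9886F00C4 \
    --auth-mode login
```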
-> [!IMPORTANT]
-> Container soft delete does not protect against the deletion of a storage account. It protects only against the deletion of containers in that account. To protect a storage account from deletion, configure a lock on the storage account resource. For more information about locking a storage account, see [Apply an Azure Resource Manager lock to a storage account](../common/lock-account-resource.md).
-
-## About the preview
-
-Container soft delete is available in preview in all Azure regions.
-
-Version 2019-12-12 or higher of the Azure Storage REST API supports container soft delete.
-
-### Storage account support
-
Container soft delete is available for the following types of storage accounts:

- General-purpose v2 and v1 storage accounts
Container soft delete is available for the following types of storage accounts:
Storage accounts with a hierarchical namespace enabled for use with Azure Data Lake Storage Gen2 are also supported.
-### Register for the preview
-
-To enroll in the preview for container soft delete, use PowerShell or Azure CLI to submit a request to register the feature with your subscription. After your request is approved, you can enable container soft delete with any new or existing general-purpose v2, Blob storage, or premium block blob storage accounts.
-
-# [PowerShell](#tab/powershell)
-
-To register with PowerShell, call the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) command.
-
-```powershell
-# Register for container soft delete (preview)
-Register-AzProviderFeature -ProviderNamespace Microsoft.Storage `
- -FeatureName ContainerSoftDelete
-
-# Refresh the Azure Storage provider namespace
-Register-AzResourceProvider -ProviderNamespace Microsoft.Storage
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-To register with Azure CLI, call the [az feature register](/cli/azure/feature#az_feature_register) command.
-
-```azurecli
-az feature register --namespace Microsoft.Storage --name ContainerSoftDelete
-az provider register --namespace 'Microsoft.Storage'
-```
---
-### Check the status of your registration
-
-To check the status of your registration, use PowerShell or Azure CLI.
-
-# [PowerShell](#tab/powershell)
-
-To check the status of your registration with PowerShell, call the [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) command.
-
-```powershell
-Get-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName ContainerSoftDelete
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-To check the status of your registration with Azure CLI, call the [az feature](/cli/azure/feature#az_feature_show) command.
-
-```azurecli
-az feature show --namespace Microsoft.Storage --name ContainerSoftDelete
-```
+Version 2019-12-12 or higher of the Azure Storage REST API supports container soft delete.
-
+> [!IMPORTANT]
+> Container soft delete does not protect against the deletion of a storage account, but only against the deletion of containers in that account. To protect a storage account from deletion, configure a lock on the storage account resource. For more information about locking Azure Resource Manager resources, see [Lock resources to prevent unexpected changes](../../azure-resource-manager/management/lock-resources.md).
## Pricing and billing
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/versioning-overview.md
Previously updated : 04/08/2021 Last updated : 05/10/2021
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-create.md
The following table describes the fields on the **Advanced** tab.
| Security | Enable storage account key access (preview) | Optional | When enabled, this setting allows clients to authorize requests to the storage account using either the account access keys or an Azure Active Directory (Azure AD) account (default). Disabling this setting prevents authorization with the account access keys. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md). |
| Security | Minimum TLS version | Required | Select the minimum version of Transport Layer Security (TLS) for incoming requests to the storage account. The default value is TLS version 1.2. When set to the default value, incoming requests made using TLS 1.0 or TLS 1.1 are rejected. For more information, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account](transport-layer-security-configure-minimum-version.md). |
| Data Lake Storage Gen2 | Enable hierarchical namespace | Optional | To use this storage account for Azure Data Lake Storage Gen2 workloads, configure a hierarchical namespace. For more information, see [Introduction to Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md). |
-| Blob storage | Enable network file share (NFS) v3 (preview) | Optional | NFS v3 provides Linux file system compatibility at object storage scale enables Linux clients to mount a container in Blob storage from an Azure Virtual Machine (VM) or a computer on-premises. For more information, see [Network File System (NFS) 3.0 protocol support in Azure Blob storage (preview)](../blobs/network-file-system-protocol-support.md). |
+| Blob storage | Enable network file share (NFS) v3 | Optional | NFS v3 provides Linux file system compatibility at object storage scale and enables Linux clients to mount a container in Blob storage from an Azure Virtual Machine (VM) or a computer on-premises. For more information, see [Network File System (NFS) 3.0 protocol support in Azure Blob storage](../blobs/network-file-system-protocol-support.md). |
| Blob storage | Access tier | Required | Blob access tiers enable you to store blob data in the most cost-effective manner, based on usage. Select the hot tier (default) for frequently accessed data. Select the cool tier for infrequently accessed data. For more information, see [Access tiers for Azure Blob Storage - hot, cool, and archive](../blobs/storage-blob-storage-tiers.md). |
| Azure Files | Enable large file shares | Optional | Available only for standard file shares with the LRS or ZRS redundancies. |
| Tables and queues | Enable support for customer-managed keys | Optional | To enable support for customer-managed keys for tables and queues, you must select this setting at the time that you create the storage account. For more information, see [Create an account that supports customer-managed keys for tables and queues](account-encryption-key-create.md). |
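
As a rough Azure CLI counterpart to some of these Advanced-tab settings, the following sketch creates an account with a minimum TLS version, a hierarchical namespace, and the hot access tier; all names and values are illustrative only.

```azurecli
# Create a storage account with selected advanced options (illustrative values).
az storage account create \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --location eastus \
    --sku Standard_LRS \
    --kind StorageV2 \
    --min-tls-version TLS1_2 \
    --enable-hierarchical-namespace true \
    --access-tier Hot
```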
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-disaster-recovery-guidance.md
Previously updated : 06/09/2021 Last updated : 07/07/2021
The customer initiates the account failover to the secondary endpoint. The failo
Write access is restored for geo-redundant accounts once the DNS entry has been updated and requests are being directed to the new primary endpoint. Existing storage service endpoints for blobs, tables, queues, and files remain the same after the failover.

> [!IMPORTANT]
-> After the failover is complete, the storage account is configured to be locally redundant in the new primary endpoint. To resume replication to the new secondary, configure the account for geo-redundancy again.
+> After the failover is complete, the storage account is configured to be either locally redundant or zone-redundant at the new primary endpoint, depending on whether the original primary was configured for GRS/RA-GRS or GZRS/RA-GZRS. To resume replication to the new secondary, configure the account for geo-redundancy again.
>
> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [Important implications of account failover](storage-initiate-account-failover.md#important-implications-of-account-failover).
Write access is restored for geo-redundant accounts once the DNS entry has been
Because data is written asynchronously from the primary region to the secondary region, there is always a delay before a write to the primary region is copied to the secondary region. If the primary region becomes unavailable, the most recent writes may not yet have been copied to the secondary region.
-When you force a failover, all data in the primary region is lost as the secondary region becomes the new primary region and the storage account is configured to be locally redundant. All data already copied to the secondary is maintained when the failover happens. However, any data written to the primary that has not also been copied to the secondary is lost permanently.
+When you force a failover, all data in the primary region is lost as the secondary region becomes the new primary region. If the primary was configured for GRS or RA-GRS, then the new primary will be locally redundant (LRS) after failover. If the primary was configured for GZRS or RA-GZRS, then the new primary will be zone-redundant (ZRS) after failover.
+
+All data already copied to the secondary is maintained when the failover happens. However, any data written to the primary that has not also been copied to the secondary is lost permanently.
The **Last Sync Time** property indicates the most recent time that data from the primary region is guaranteed to have been written to the secondary region. All data written prior to the last sync time is available on the secondary, while data written after the last sync time may not have been written to the secondary and may be lost. Use this property in the event of an outage to estimate the amount of data loss you may incur by initiating an account failover.
For more information about checking the **Last Sync Time** property, see [Check
### Use caution when failing back to the original primary
-After you fail over from the primary to the secondary region, your storage account is configured to be locally redundant in the new primary region. You can then configure the account for geo-redundancy again. When the account is configured for geo-redundancy again after a failover, the new primary region immediately begins copying data to the new secondary region, which was the primary before the original failover. However, it may take some time before existing data in the primary is fully copied to the new secondary.
+After you fail over from the primary to the secondary region, your storage account is configured to be either locally redundant or zone-redundant in the new primary region, depending on whether the original configuration was GRS/RA-GRS or GZRS/RA-GZRS. You can then configure the account for geo-redundancy again. When the account is configured for geo-redundancy again after a failover, the new primary region immediately begins copying data to the new secondary region, which was the primary before the original failover. However, it may take some time before existing data in the primary is fully copied to the new secondary.
-After the storage account is reconfigured for geo-redundancy, it's possible to initiate another failover from the new primary back to the new secondary. In this case, the original primary region prior to the failover becomes the primary region again, and is configured to be locally redundant. All data in the post-failover primary region (the original secondary) is then lost. If most of the data in the storage account has not been copied to the new secondary before you fail back, you could suffer a major data loss.
+After the storage account is reconfigured for geo-redundancy, it's possible to initiate another failover from the new primary back to the new secondary. In this case, the original primary region prior to the failover becomes the primary region again, and is configured to be either locally redundant or zone-redundant, depending on whether the original primary configuration was GRS/RA-GRS or GZRS/RA-GZRS. All data in the post-failover primary region (the original secondary) is then lost. If most of the data in the storage account has not been copied to the new secondary before you fail back, you could suffer a major data loss.
-To avoid a major data loss, check the value of the **Last Sync Time** property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate expected data loss.
+To avoid a major data loss, check the value of the **Last Sync Time** property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate expected data loss.
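
A hedged Azure CLI sketch of checking the **Last Sync Time** property and then initiating the failover is shown below; the account and resource group names are placeholders.

```azurecli
# Check the last time data was confirmed replicated to the secondary region.
az storage account show \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --expand geoReplicationStats \
    --query geoReplicationStats.lastSyncTime \
    --output tsv

# Initiate the account failover only after evaluating the potential data loss.
az storage account failover \
    --name mystorageaccount \
    --resource-group myresourcegroup
```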
## Initiate an account failover
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/transport-layer-security-configure-minimum-version.md
Previously updated : 04/29/2021 Last updated : 07/07/2021
Communication between a client application and an Azure Storage account is encry
Azure Storage currently supports three versions of the TLS protocol: 1.0, 1.1, and 1.2. Azure Storage uses TLS 1.2 on public HTTPS endpoints, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
-By default, Azure Storage accounts permit clients to send and receive data with the oldest version of TLS, TLS 1.0, and above. To enforce stricter security measures, you can configure your storage account to require that clients send and receive data with a newer version of TLS. If a storage account requires a minimum version of TLS, then any requests made with an older version will fail.
+Azure Storage accounts permit clients to send and receive data with the oldest version of TLS, TLS 1.0, and above. To enforce stricter security measures, you can configure your storage account to require that clients send and receive data with a newer version of TLS. If a storage account requires a minimum version of TLS, then any requests made with an older version will fail.
This article describes how to use a DRAG (Detection-Remediation-Audit-Governance) framework to continuously manage secure TLS for your storage accounts.
When you are confident that traffic from clients using older versions of TLS is
To configure the minimum TLS version for a storage account, set the **MinimumTlsVersion** version for the account. This property is available for all storage accounts that are created with the Azure Resource Manager deployment model. For more information about the Azure Resource Manager deployment model, see [Storage account overview](storage-account-overview.md).
-The **MinimumTlsVersion** property is not set by default and does not return a value until you explicitly set it. If the property value is **null**, then the storage account will permit requests sent with TLS version 1.0 or greater.
+The default value of the **MinimumTlsVersion** property differs depending on how you create the storage account. When you create a storage account with the Azure portal, the minimum TLS version is set to 1.2 by default. When you create a storage account with PowerShell, Azure CLI, or an Azure Resource Manager template, the **MinimumTlsVersion** property is not set by default and does not return a value until you explicitly set it.
+
+When the **MinimumTlsVersion** property is not set, its value may be displayed as either **null** or an empty string, depending on the context. The storage account will permit requests sent with TLS version 1.0 or greater if the property is not set.
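
For reference, a minimal Azure CLI sketch of reading and then setting the property on an existing account (names are placeholders):

```azurecli
# Read the current MinimumTlsVersion value; returns nothing if it was never set.
az storage account show \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --query minimumTlsVersion \
    --output tsv

# Require TLS 1.2 or later for all requests to the account.
az storage account update \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --min-tls-version TLS1_2
```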
# [Portal](#tab/portal)
When you create a storage account with the Azure portal, the minimum TLS version
To configure the minimum TLS version for an existing storage account with the Azure portal, follow these steps:

1. Navigate to your storage account in the Azure portal.
-1. Under **Settings** select the **Configuration**.
+1. Under **Settings**, select **Configuration**.
1. Under **Minimum TLS version**, use the drop-down to select the minimum version of TLS required to access data in this storage account.

    :::image type="content" source="media/transport-layer-security-configure-minimum-version/configure-minimum-version-portal.png" alt-text="Screenshot showing how to configure minimum version of TLS in the Azure portal." lightbox="media/transport-layer-security-configure-minimum-version/configure-minimum-version-portal.png":::
stream-analytics Blob Storage Azure Data Lake Gen2 Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/blob-storage-azure-data-lake-gen2-output.md
Previously updated : 05/30/2021 Last updated : 07/7/2021 # Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics
For the maximum message size, see [Azure Storage limits](../azure-resource-manag
## Next steps
-* [Use Managed Identity (preview) to authenticate your Azure Stream Analytics job to Azure Blob Storage](blob-output-managed-identity.md)
+* [Use Managed Identity to authenticate your Azure Stream Analytics job to Azure Blob Storage](blob-output-managed-identity.md)
* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
stream-analytics Power Bi Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/power-bi-output.md
Previously updated : 4/7/2021 Last updated : 7/7/2021 # Power BI output from Azure Stream Analytics
For more info on output batch size, see [Power BI Rest API limits](/power-bi/dev
## Next steps
-* [Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI (preview)](powerbi-output-managed-identity.md)
+* [Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI](powerbi-output-managed-identity.md)
* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
synapse-analytics Tutorial Automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-automl.md
For this tutorial, you need a Spark table. The following notebook creates one:
To open the wizard:
-1. Right-click the Spark table that you created in the previous step. Then select **Machine Learning** > **Enrich with new model**.
-![Screenshot of the Spark table, with Machine Learning and Enrich with new model highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-00d.png)
+1. Right-click the Spark table that you created in the previous step. Then select **Machine Learning** > **Train a new model**.
+![Screenshot of the Spark table, with Machine Learning and Train a new model highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-00d.png)
1. Provide configuration details for creating an automated machine learning experiment run in Azure Machine Learning. This run trains multiple models. The best model from a successful run is registered in the Azure Machine Learning model registry.
synapse-analytics Tutorial Sql Pool Model Scoring Wizard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-sql-pool-model-scoring-wizard.md
Before you run all cells in the notebook, check that the compute instance is run
![Load data to dedicated SQL pool](media/tutorial-sql-pool-model-scoring-wizard/tutorial-sql-scoring-wizard-00b.png)
-1. Go to **Data** > **Workspace**. Open the SQL scoring wizard by right-clicking the dedicated SQL pool table. Select **Machine Learning** > **Enrich with existing model**.
+1. Go to **Data** > **Workspace**. Open the SQL scoring wizard by right-clicking the dedicated SQL pool table. Select **Machine Learning** > **Predict with a model**.
> [!NOTE]
> The machine learning option does not appear unless you have a linked service created for Azure Machine Learning. (See [Prerequisites](#prerequisites) at the beginning of this tutorial.)
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
You can also customize the workbook with Kusto queries and configure alerts.
| order by TimeGenerated asc
```
+## Write custom application logs
+
+You can use the Apache Log4j library to write custom logs.
+
+Example for Scala:
+
+```scala
+%%spark
+val logger = org.apache.log4j.LogManager.getLogger("com.contoso.LoggerExample")
+logger.info("info message")
+logger.warn("warn message")
+logger.error("error message")
+```
+
+Example for PySpark:
+
+```python
+%%pyspark
+logger = sc._jvm.org.apache.log4j.LogManager.getLogger("com.contoso.PythonLoggerExample")
+logger.info("info message")
+logger.warn("warn message")
+logger.error("error message")
+```
+
## Create and manage alerts using Azure Log Analytics

Azure Monitor alerts allow users to use a Log Analytics query to evaluate metrics and logs at a set frequency, and fire an alert based on the results.
Azure Synapse Analytics workspace with [managed virtual network](../security/syn
- Learn how to [Use serverless Apache Spark pool in Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
- Learn how to [Run a Spark application in notebook](./apache-spark-development-using-notebooks.md).
+ - Learn how to [Create Apache Spark job definition in Synapse Studio](./apache-spark-job-definitions.md).
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Delta Lake support is currently in public preview in serverless SQL pools. There
- Do not specify wildcards to describe the partition schema. Delta Lake query will automatically identify the Delta Lake partitions.
- Delta Lake tables created in the Apache Spark pools are not synchronized in serverless SQL pool. You cannot query Apache Spark pools Delta Lake tables using T-SQL language.
- External tables do not support partitioning. Use [partitioned views](create-use-views.md#delta-lake-partitioned-views) on Delta Lake folder to leverage the partition elimination. See known issues and workarounds below.
-- Serverless SQL pools do not support time travel queries. You can vote for this feature on [Azure feedback site](https://feedback.azure.com/forums/307516-azure-synapse-analytics/suggestions/43656111-add-time-travel-feature-in-delta-lake)
-- Serverless SQL pools do not support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Azure Synapse Analytics [to update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data) or [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
+- Serverless SQL pools do not support time travel queries. You can vote for this feature on [Azure feedback site](https://feedback.azure.com/forums/307516-azure-synapse-analytics/suggestions/43656111-add-time-travel-feature-in-delta-lake). Use Apache Spark pools in Azure Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
+- Serverless SQL pools do not support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Azure Synapse Analytics [to update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).
- Delta Lake support is not available in dedicated SQL pools. Make sure that you are using serverless pools to query Delta Lake files. You can propose ideas and enhancements on [Azure Synapse feedback site](https://feedback.azure.com/forums/307516-azure-synapse-analytics?category_id=171048).
Easiest way is to grant yourself 'Storage Blob Data Contributor' role on the sto
### Partitioning column returns NULL values
-If you are using views over the `OPENROWSET` function that read partitioned Delta Lake folder, you might get the value `NULL` instead of the actual column values for the partitioning columns. Due to the known issue, the `OPENROWSET` function with the `WITH` clause cannot read partitioning columns. The [partitioned views](create-use-views.md#delta-lake-partitioned-views) on Delta Lake should not have the `OPENROWSET` function with the `WITH` clause. You need to use the `OPENROWSET` function that doesn't have explicitly specified schema.
+If you are using views over the `OPENROWSET` function that read a partitioned Delta Lake folder, you might get the value `NULL` instead of the actual column values for the partitioning columns. A view that references the `Year` and `Month` partitioning columns is shown in the following example:
-**Workaround:** Remove the `WITH` clause form the `OPENROWSET` function that is used in the views.
+```sql
+create or alter view test as
+select top 10 *
+from openrowset(bulk 'https://storageaccount.blob.core.windows.net/path/to/delta/lake/folder',
+ format = 'delta')
+ with (ID int, Year int, Month int, Temperature float)
+ as rows
+```
+
+Due to the known issue, the `OPENROWSET` function with the `WITH` clause cannot read the values from the partitioning columns. The [partitioned views](create-use-views.md#delta-lake-partitioned-views) on Delta Lake should not have the `OPENROWSET` function with the `WITH` clause. You need to use the `OPENROWSET` function that doesn't have explicitly specified schema.
+
+**Workaround:** Remove the `WITH` clause from the `OPENROWSET` function that is used in the views, as shown in the following example:
+
+```sql
+create or alter view test as
+select top 10 *
+from openrowset(bulk 'https://storageaccount.blob.core.windows.net/path/to/delta/lake/folder',
+ format = 'delta')
+ --with (ID int, Year int, Month int, Temperature float)
+ as rows
+```
### Query failed because of a topology change or compute container failure
CREATE DATABASE mydb
COLLATE Latin1_General_100_BIN2_UTF8;
```
-The queries executed via master database are affected with this issue.
+The queries executed via the master database are affected by this issue. This issue does not apply to all queries that read partitioned data; only data sets that are partitioned by string columns are affected.
**Workaround:** Execute the queries on a custom database with `Latin1_General_100_BIN2_UTF8` database collation.
The queries executed via master database are affected with this issue.
You are trying to read Delta Lake files that contain some nested type columns without specifying a WITH clause (using automatic schema inference). Automatic schema inference doesn't work with nested columns in Delta Lake.
-**Workaround:** Use the `WITH` clause and explicitly assign the `VARCHAR` type to the nested columns.
+**Workaround:** Use the `WITH` clause and explicitly assign the `VARCHAR` type to the nested columns. Note that this will not work if your data set is partitioned, due to another known issue where the `WITH` clause returns `NULL` for partitioning columns. Partitioned data sets with complex type columns are currently not supported.
### Cannot find value of partitioning column in file
JSON text is not properly formatted. Unexpected character '{' is found at positi
Msg 16513, Level 16, State 0, Line 1
Error reading external metadata.
```
-
+First, make sure that your Delta Lake data set is not corrupted.
- Verify that you can read the content of the Delta Lake folder using Apache Spark pool in Synapse or Databricks cluster. This way you will ensure that the `_delta_log` file is not corrupted.
- Verify that you can read the content of data files by specifying `FORMAT='PARQUET'` and using recursive wildcard `/**` at the end of the URI path. If you can read all Parquet files, the issue is in the `_delta_log` transaction log folder.
-In this case, report a support ticket and provide a repro to Azure support:
+**Workaround:** This problem might happen if you are using a database collation with the `_UTF8` suffix. Try to run the query on the `master` database or any other database that has a non-UTF8 collation. If this workaround resolves your issue, use a database without a `_UTF8` collation.
+
+If the data set is valid and the workaround does not help, open a support ticket and provide a repro to Azure support:
- Do not make any changes like adding/removing the columns or optimizing the table because this might change the state of Delta Lake transaction log files.
- Copy the content of the `_delta_log` folder into a new empty folder. **DO NOT** copy the `.parquet` data files.
- Try to read the content that you copied in the new folder and verify that you are getting the same error.
- Now you can continue using the Delta Lake folder with the Spark pool. You will provide the copied data to Microsoft support if you are allowed to share it.
- Send the content of the copied `_delta_log` folder to Azure support.
-Microsoft team will investigate the content of the `delta_log` file and provide more info about the possible errors and workarounds.
+The Azure team will investigate the content of the `_delta_log` folder and provide more information about the possible errors and workarounds.
## Constraints
time-series-insights Tutorials Model Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/tutorials-model-sync.md
Time Series ID is a unique identifier used to identify assets in Time Series Ins
Contextualization of data (mostly spatial in nature) in Time Series Insights is achieved through asset hierarchies, which are also used for easy navigation of data through a tree view in Time Series Insights explorer. Time series types and hierarchies are defined using Time Series Model (TSM) in Time Series Insights. Types in TSM help to define variables, while hierarchy levels and instance field values are used to construct the tree view in the Time Series Insights explorer. For more information on TSM, refer to [online Time Series Insights documentation](./concepts-model-overview.md).
-In Azure Digital Twins, connection among assets are expressed using twin relationships. Twin relationships are simply a graph of connected assets. However in Time Series Insight, relationships between assets are hierarchical in nature. That is, assets share a parent-child kind od relationship and is represented using a tree structure. To translate relationship information from Azure Digital Twins into Time Series Insights hierarchies, we need to choose relevant hierarchical relationships from Azure Digital Twins. Azure Digital Twins uses an open standard, modeling language called Digital Twin Definition Language (DTDL). In DTDL models are described using a variant of JSON called JSON-LD. Refer to [DTDL documentation](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) for full details on the specification.
+In Azure Digital Twins, connections among assets are expressed using twin relationships. Twin relationships are simply a graph of connected assets. However, in Time Series Insights, relationships between assets are hierarchical in nature. That is, assets share a parent-child relationship and are represented using a tree structure. To translate relationship information from Azure Digital Twins into Time Series Insights hierarchies, we need to choose relevant hierarchical relationships from Azure Digital Twins. Azure Digital Twins uses an open-standard modeling language called Digital Twin Definition Language (DTDL). In DTDL, models are described using a variant of JSON called JSON-LD. Refer to [DTDL documentation](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) for full details on the specification.
[![Connection between assets](media/tutorials-model-sync/asset-connection.png)](media/tutorials-model-sync/asset-connection.png#lightbox)
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/image-builder-overview.md
The Azure VM Image Builder (Azure Image Builder) lets you start with a Windows o
Azure Image Builder supports the following features:

- Creation of baseline images that include your minimum security and corporate configurations, and allow departments to customize them further.
-- Integration of core applications, so VM can take on workloads after creation, or add configurations to support Windows Virtual Desktop images.
+- Integration of core applications, so the VM can take on workloads after creation, or add configurations to support Azure Virtual Desktop images.
- Patching of existing images: Image Builder will allow you to continually patch existing custom images.
- Connect image builder to your existing virtual networks, so you can connect to existing configuration servers (DSC, Chef, Puppet etc.), file shares, or any other routable servers/services.
- Integration with the Azure Shared Image Gallery allows you to distribute, version, and scale images globally, and gives you an image management system.
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/create-upload-centos.md
Preparing a CentOS 7 virtual machine for Azure is very similar to CentOS 6, howe
cat > /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg <<EOF
datasource_list: [ Azure ]
datasource:
- Azure:
- apply_network_config: False
+ Azure:
+ apply_network_config: False
EOF

if [[ -f /mnt/resource/swapfile ]]; then
Preparing a CentOS 7 virtual machine for Azure is very similar to CentOS 6, howe
## Next steps
-You're now ready to use your CentOS Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
+You're now ready to use your CentOS Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
virtual-machines Create Upload Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/create-upload-ubuntu.md
Title: Create and upload an Ubuntu Linux VHD in Azure description: Learn to create and upload an Azure virtual hard disk (VHD) that contains an Ubuntu Linux operating system.-+ Previously updated : 06/06/2020- Last updated : 07/07/2021+ # Prepare an Ubuntu virtual machine for Azure
Ubuntu now publishes official Azure VHDs for download at [https://cloud-images.ubuntu.com/](https://cloud-images.ubuntu.com/). If you need to build your own specialized Ubuntu image for Azure, rather than using the manual procedure below, it is recommended to start with these known working VHDs and customize as needed. The latest image releases can always be found at the following locations:
-* Ubuntu 16.04/Xenial: [ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk](https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.vmdk)
-* Ubuntu 18.04/Bionic: [bionic-server-cloudimg-amd64.vmdk](https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.vmdk)
+* Ubuntu 16.04/Xenial: [xenial-server-cloudimg-amd64-azure.vhd.zip](https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-azure.vhd.zip)
+* Ubuntu 18.04/Bionic: [bionic-server-cloudimg-amd64-azure.vhd.zip](https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64-azure.vhd.zip)
+* Ubuntu 20.04/Focal: [focal-server-cloudimg-amd64-azure.vhd.zip](https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64-azure.vhd.zip)
## Prerequisites

This article assumes that you have already installed an Ubuntu Linux operating system to a virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
This article assumes that you have already installed an Ubuntu Linux operating s
8. Remove cloud-init default configs and leftover netplan artifacts that may conflict with cloud-init provisioning on Azure:

```console
- # rm -f /etc/cloud/cloud.cfg.d/50-curtin-networking.cfg /etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg
+ # rm -f /etc/cloud/cloud.cfg.d/50-curtin-networking.cfg /etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg /etc/cloud/cloud.cfg.d/99-installer.cfg /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg
# rm -f /etc/cloud/ds-identify.cfg
# rm -f /etc/netplan/*.yaml
```
This article assumes that you have already installed an Ubuntu Linux operating s
## Next steps
-You're now ready to use your Ubuntu Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
+You're now ready to use your Ubuntu Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/redhat-create-upload-vhd.md
This section assumes that you have already obtained an ISO file from the Red Hat
```

If you want to mount, format, and create swap you can either:
- * Pass this in as a cloud-init config every time you create a VM
- * Use a cloud-init directive baked into the image that will do this every time the VM is created:
+ * Pass this in as a cloud-init config every time you create a VM through customdata. This is the recommended method.
+ * Use a cloud-init directive baked into the image that will do this every time the VM is created.
```console
+ echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
#cloud-config
# Generated by Azure cloud image build
This section assumes that you have already obtained an ISO file from the Red Hat
ResourceDisk.EnableSwap=n
```
- If you want mount, format and create swap you can either:
- * Pass this in as a cloud-init config every time you create a VM
- * Use a cloud-init directive baked into the image that will do this every time the VM is created:
+ * Pass this in as a cloud-init config every time you create a VM through customdata. This is the recommended method.
+ * Use a cloud-init directive baked into the image that will do this every time the VM is created.
```console
+ echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+ cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
#cloud-config
# Generated by Azure cloud image build
virtual-machines Nda100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nda100-v4-series.md
Nvidia NVLink Interconnect: Supported<br>
|||||||||||
| Standard_ND96asr_v4 | 96 | 900 | 6000 | 8 A100 40 GB GPUs (NVLink 3.0) | 40 | 32 | 80,000 / 800 | 24,000 Mbps | 8 |
+The ND A100 v4 series supports the following kernel versions:
+- CentOS 7.9 HPC: 3.10.0-1160.24.1.el7.x86_64 <br>
+- Ubuntu 18.04: 5.4.0-1043-azure <br>
+- Ubuntu 20.04: 5.4.0-1046-azure <br>
## Other sizes
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/trusted-launch.md
Azure offers trusted launch as a seamless way to improve the security of [genera
**Regions**:

- Central US
+- East US
- East US 2
+- North Central US
- South Central US
+- West US
+- West US 2
- North Europe
- West Europe
+- Japan East
+- South East Asia
**Pricing**: No additional cost to existing VM pricing.
virtual-machines Image Builder Gallery Update Image Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/image-builder-gallery-update-image-version.md
Title: Create a new image version from an existing image version using Azure Image Builder description: Create a new VM image version from an existing image version using Azure Image Builder in Windows.--++ Last updated 03/02/2021
Submit the image configuration to the VM Image Builder Service.
```azurecli-interactive
az resource create \
  --resource-group $sigResourceGroup \
+ --location $location \
  --properties @helloImageTemplateforSIGfromWinSIG.json \
  --is-full-object \
  --resource-type Microsoft.VirtualMachineImages/imageTemplates \
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/run-command.md
The following restrictions apply when you're using Run Command:
> [!NOTE]
> To function correctly, Run Command requires connectivity (port 443) to Azure public IP addresses. If the extension doesn't have access to these endpoints, the scripts might run successfully but not return the results. If you're blocking traffic on the virtual machine, you can use [service tags](../../virtual-network/network-security-groups-overview.md#service-tags) to allow traffic to Azure public IP addresses by using the `AzureCloud` tag.
+>
+> The Run Command feature doesn't work if the VM agent status is NOT READY. Check the agent status in the VM's properties in the Azure portal.
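
As an illustrative sketch, you can also check the agent state and invoke a command from Azure CLI; the VM and resource group names below are placeholders.

```azurecli
# Inspect the VM agent status; Run Command requires the agent to be Ready.
az vm get-instance-view \
    --name myVM \
    --resource-group myresourcegroup \
    --query "instanceView.vmAgent.statuses[].displayStatus" \
    --output tsv

# Invoke a simple Run Command once the agent reports Ready.
az vm run-command invoke \
    --name myVM \
    --resource-group myresourcegroup \
    --command-id RunPowerShellScript \
    --scripts "Get-Service -Name WinRM"
```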
## Available commands
virtual-network Manage Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/manage-public-ip-address-prefix.md
The following section details the parameters when creating a public IP prefix.
|IP version|Yes| IP version of the prefix (v4 or v6). |
|Prefix size|Yes| The size of the prefix you need. A range with 16 IP addresses (/28 for v4 or /124 for v6) is the default.
-Instead, you may use the CLI and PowerShell commands below to create a public IP address prefix.
+Alternatively, you may use the CLI and PowerShell commands below to create a public IP address prefix.
**Commands**
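
For example, a hedged Azure CLI sketch of creating a prefix (the name, location, and length are placeholder values):

```azurecli
# Create a public IP prefix with 16 IPv4 addresses (/28).
az network public-ip prefix create \
    --name myPublicIpPrefix \
    --resource-group myresourcegroup \
    --location eastus \
    --length 28 \
    --version IPv4
```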