Updates from: 03/16/2021 04:08:54
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Contentdefinitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/contentdefinitions.md
Previously updated : 10/26/2020 Last updated : 02/15/2021
The format of the value must contain the word `contract`: _urn:com:microsoft:aad
| Old DataUri value | New DataUri value |
| -- | -- |
-| `urn:com:microsoft:aad:b2c:elements:globalexception:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.0` |
-| `urn:com:microsoft:aad:b2c:elements:globalexception:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.0` |
-| `urn:com:microsoft:aad:b2c:elements:idpselection:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.0` |
+| `urn:com:microsoft:aad:b2c:elements:globalexception:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.1` |
+| `urn:com:microsoft:aad:b2c:elements:globalexception:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.1` |
+| `urn:com:microsoft:aad:b2c:elements:idpselection:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.1` |
+| `urn:com:microsoft:aad:b2c:elements:selfasserted:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2` |
+| `urn:com:microsoft:aad:b2c:elements:selfasserted:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2` |
+| `urn:com:microsoft:aad:b2c:elements:unifiedssd:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssd:2.1.2` |
+| `urn:com:microsoft:aad:b2c:elements:unifiedssp:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.2` |
+| `urn:com:microsoft:aad:b2c:elements:unifiedssp:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.2` |
| `urn:com:microsoft:aad:b2c:elements:multifactor:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:multifactor:1.2.0` |
| `urn:com:microsoft:aad:b2c:elements:multifactor:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:multifactor:1.2.0` |
-| `urn:com:microsoft:aad:b2c:elements:selfasserted:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0` |
-| `urn:com:microsoft:aad:b2c:elements:selfasserted:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0` |
-| `urn:com:microsoft:aad:b2c:elements:unifiedssd:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssd:1.2.0` |
-| `urn:com:microsoft:aad:b2c:elements:unifiedssp:1.0.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:1.2.0` |
-| `urn:com:microsoft:aad:b2c:elements:unifiedssp:1.1.0` | `urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:1.2.0` |
-The following example shows the content definition identifiers and the corresponding **DataUri** with page contract:
+The following example shows the content definition identifiers and the corresponding **DataUri** with the latest page version:
```xml
-<ContentDefinitions>
- <ContentDefinition Id="api.error">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.idpselections">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.idpselections.signup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.signuporsignin">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.selfasserted">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.selfasserted.profileupdate">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:1.2.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.phonefactor">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:multifactor:1.2.0</DataUri>
- </ContentDefinition>
-</ContentDefinitions>
+<!--
+<BuildingBlocks> -->
+ <ContentDefinitions>
+ <ContentDefinition Id="api.error">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:globalexception:1.2.1</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.1</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.idpselections.signup">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:providerselection:1.2.1</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.signuporsignin">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:unifiedssp:2.1.2</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.selfasserted.profileupdate">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountsignup">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.phonefactor">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:multifactor:1.2.2</DataUri>
+ </ContentDefinition>
+ </ContentDefinitions>
+<!--
+</BuildingBlocks> -->
```
### Metadata
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
+
+ Title: Enable Azure AD B2C custom domains
+
+description: Learn how to enable custom domains in your redirect URLs for Azure Active Directory B2C.
+Last updated : 03/15/2021
+zone_pivot_groups: b2c-policy-type
++
+# Enable custom domains for Azure Active Directory B2C
++
+This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than being redirected to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*.
+
+![Custom domain user experience](./media/custom-domain/custom-domain-user-experience.png)
+
+## Custom domain overview
+
+You can enable custom domains for Azure AD B2C by using [Azure Front Door](https://azure.microsoft.com/services/frontdoor/). Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. You can render Azure AD B2C content behind Azure Front Door, and then configure an option in Azure Front Door to deliver the content via a custom domain in your application's URL.
+
+The following diagram illustrates Azure Front Door integration:
+
+1. From an application, a user clicks the sign-in button, which takes them to the Azure AD B2C sign-in page. This page specifies a custom domain name.
+1. The web browser resolves the custom domain name to the Azure Front Door IP address. During DNS resolution, a canonical name (CNAME) record with a custom domain name points to your Front Door default front-end host (for example, `contoso.azurefd.net`).
+1. The traffic addressed to the custom domain (for example, `login.contoso.com`) is routed to the specified Front Door default front-end host (`contoso.azurefd.net`).
+1. Azure Front Door invokes Azure AD B2C content using the Azure AD B2C `<tenant-name>.b2clogin.com` default domain. The request to the Azure AD B2C endpoint includes a custom HTTP header that contains the original custom domain name.
+1. Azure AD B2C responds to the request by displaying the relevant content and the original custom domain.
+
+![Custom domain networking diagram](./media/custom-domain/custom-domain-network-flow.png)
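For illustration only, the CNAME mapping described in step 2 could be expressed as a DNS zone entry like the following (the host names are the example values above, not real endpoints):

```text
; Hypothetical zone entry: custom domain -> Front Door default front-end host
login.contoso.com.   IN   CNAME   contoso.azurefd.net.
```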
+
+> [!IMPORTANT]
+> The connection from the browser to Azure Front Door should always use IPv4 instead of IPv6.
+
+When using custom domains, consider the following:
+
+- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-service-limits) for Azure Front Door.
+- Azure Front Door is a separate Azure service, so additional charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).
+- Currently, the Azure Front Door [Web Application Firewall](../web-application-firewall/afds/afds-overview.md) feature is not supported.
+- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#block-access-to-the-default-domain-name)).
+- If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.
+
+## Prerequisites
+++
+## Add a custom domain name to your tenant
+
+Follow the guidance for how to [add and validate your custom domain in Azure AD](../active-directory/fundamentals/add-custom-domain.md). After the domain is verified, delete the DNS TXT record you created.
+
+> [!IMPORTANT]
+> For these steps, be sure to sign in to your **Azure AD B2C** tenant and select the **Azure Active Directory** service.
+
+Verify each subdomain you plan to use. Verifying just the top-level domain isn't sufficient. For example, to be able to sign-in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not just the top-level domain *contoso.com*.
+
+## Create a new Azure Front Door instance
+
+Follow the steps for [creating a Front Door for your application](../frontdoor/quickstart-create-front-door.md#create-a-front-door-for-your-application) using the default settings for the frontend host and routing rules.
+
+> [!IMPORTANT]
+> For these steps, after you sign in to the Azure portal in step 1, select **Directory + subscription** and choose the directory that contains the Azure subscription you'd like to use for Azure Front Door. This should *not* be the directory containing your Azure AD B2C tenant.
+
+In the step **Add a backend**, use the following settings:
+
+* For **Backend host type**, select **Custom host**.
+* For **Backend host name**, select the hostname for your Azure AD B2C endpoint, `<tenant-name>.b2clogin.com`. For example, `contoso.b2clogin.com`.
+* For **Backend host header**, select the same value you selected for **Backend host name**.
+
+![Add a backend](./media/custom-domain/add-a-backend.png)
+
+After you add the **backend** to the **backend pool**, disable the **Health probes**.
+
+![Add a backend pool](./media/custom-domain/add-a-backend-pool.png)
+
+## Set up your custom domain on Azure Front Door
+
+Follow the steps to [add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md). When creating the `CNAME` record for your custom domain, use the custom domain name you verified earlier in the [Add a custom domain name to your Azure AD](#add-a-custom-domain-name-to-your-tenant) step.
+
+After the custom domain name is verified, select **Custom domain name HTTPS**. Then under the **Certificate management type**, select [Front Door management](../frontdoor/front-door-custom-domain-https.md#option-1-default-use-a-certificate-managed-by-front-door), or [Use my own certificate](../frontdoor/front-door-custom-domain-https.md#option-2-use-your-own-certificate).
+
+The following screenshot shows how to add a custom domain and enable HTTPS using an Azure Front Door certificate.
+
+![Set up Azure Front Door custom domain](./media/custom-domain/azure-front-door-add-custom-domain.png)
+
+## Configure CORS
+
+If you [customize the Azure AD B2C user interface](customize-ui-with-html.md) with an HTML template, you need to [configure CORS](customize-ui-with-html.md?pivots=b2c-user-flow#3-configure-cors) with your custom domain.
+
+Configure Azure Blob storage for Cross-Origin Resource Sharing with the following steps:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your storage account.
+1. In the menu, select **CORS**.
+1. For **Allowed origins**, enter `https://your-domain-name`. Replace `your-domain-name` with your domain name. For example, `https://login.contoso.com`. Use all lowercase letters when entering your tenant name.
+1. For **Allowed Methods**, select both `GET` and `OPTIONS`.
+1. For **Allowed Headers**, enter an asterisk (*).
+1. For **Exposed Headers**, enter an asterisk (*).
+1. For **Max age**, enter 200.
+1. Select **Save**.
+
+## Configure your identity provider
+
+When a user chooses to sign in with a social identity provider, Azure AD B2C initiates an authorization request and takes the user to the selected identity provider to complete the sign-in process. The authorization request specifies the `redirect_uri` with the Azure AD B2C default domain name:
+
+```http
+https://<tenant-name>.b2clogin.com/<tenant-name>/oauth2/authresp
+```
+
+If you configured your policy to allow sign-in with an external identity provider, update the OAuth redirect URIs with the custom domain. Most identity providers allow you to register multiple redirect URIs. We recommend adding redirect URIs instead of replacing them so you can test your custom policy without affecting applications that use the Azure AD B2C default domain name.
+
+In the following redirect URI:
+
+```http
+https://<custom-domain-name>/<tenant-name>/oauth2/authresp
+```
+
+- Replace **<custom-domain-name>** with your custom domain name.
+- Replace **<tenant-name>** with the name of your tenant, or your tenant ID.
+
+The following example shows a valid OAuth redirect URI:
+
+```http
+https://login.contoso.com/contoso.onmicrosoft.com/oauth2/authresp
+```
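As a sketch, assembling such a redirect URI from its parts might look like the following. The helper name and the domain/tenant values are illustrative assumptions, not part of any Azure AD B2C API:

```javascript
// Illustrative helper (hypothetical): build an OAuth redirect URI
// from a custom domain and a tenant name or tenant ID.
function buildRedirectUri(customDomain, tenant) {
  return `https://${customDomain}/${tenant}/oauth2/authresp`;
}

console.log(buildRedirectUri("login.contoso.com", "contoso.onmicrosoft.com"));
// https://login.contoso.com/contoso.onmicrosoft.com/oauth2/authresp
```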
+
+If you choose to use the [tenant ID](#optional-use-tenant-id), a valid OAuth redirect URI would look like the following:
+
+```http
+https://login.contoso.com/11111111-1111-1111-1111-111111111111/oauth2/authresp
+```
+
+The [SAML identity provider](saml-identity-provider-technical-profile.md) metadata URL would look like the following:
+
+```http
+https://<custom-domain-name>/<tenant-name>/<your-policy>/samlp/metadata?idptp=<your-technical-profile>
+```
+
+## Test your custom domain
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Under **Policies**, select **User flows (policies)**.
+1. Select a user flow, and then select **Run user flow**.
+1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Click **Copy to clipboard**.
+
+ ![Copy the authorization request URI](./media/custom-domain/user-flow-run-now.png)
+
+1. In the **Run user flow endpoint** URL, replace the Azure AD B2C domain (<tenant-name>.b2clogin.com) with your custom domain.
+ For example, instead of:
+
+ ```http
+ https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login
+ ```
+
+ use:
+
+ ```http
+ https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login
+ ```
+1. Select **Run user flow**. Your Azure AD B2C policy should load.
+1. Sign in with both local and social accounts.
+1. Repeat the test with the rest of your policies.
+
+## Configure your application
+
+After you configure and test the custom domain, you can update your applications to load the URL that specifies your custom domain as the hostname instead of the Azure AD B2C domain.
+
+The custom domain integration applies to authentication endpoints that use Azure AD B2C policies (user flows or custom policies) to authenticate users. These endpoints may look like the following:
+
+- <code>https://\<custom-domain\>/\<tenant-name\>/<b>\<policy-name\></b>/v2.0/.well-known/openid-configuration</code>
+
+- <code>https://\<custom-domain\>/\<tenant-name\>/<b>\<policy-name\></b>/oauth2/v2.0/authorize</code>
+
+- <code>https://\<custom-domain\>/\<tenant-name\>/<b>\<policy-name\></b>/oauth2/v2.0/token</code>
+
+Replace:
+- **custom-domain** with your custom domain
+- **tenant-name** with your tenant name or tenant ID
+- **policy-name** with your policy name. [Learn more about Azure AD B2C policies](technical-overview.md#identity-experiences-user-flows-or-custom-policies).
++
+The [SAML service provider](connect-with-saml-service-providers.md) metadata may look like the following:
+
+```http
+https://custom-domain-name/tenant-name/policy-name/Samlp/metadata
+```
+
+### (Optional) Use tenant ID
+
+You can replace your B2C tenant name in the URL with your tenant ID GUID to remove all references to "b2c" in the URL. You can find your tenant ID GUID on the B2C **Overview** page in the Azure portal.
+For example, change `https://account.contosobank.co.uk/contosobank.onmicrosoft.com/`
+to
+`https://account.contosobank.co.uk/<tenant ID GUID>/`
+
+If you choose to use tenant ID instead of tenant name, be sure to update the identity provider **OAuth redirect URIs** accordingly. For more information, see [Configure your identity provider](#configure-your-identity-provider).
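The tenant-name-to-GUID substitution can be sketched as a plain string replacement. The function name and the GUID below are hypothetical example values:

```javascript
// Swap the tenant name path segment of a B2C URL for the tenant ID GUID.
function useTenantId(url, tenantName, tenantId) {
  return url.replace(`/${tenantName}/`, `/${tenantId}/`);
}

console.log(useTenantId(
  "https://account.contosobank.co.uk/contosobank.onmicrosoft.com/",
  "contosobank.onmicrosoft.com",
  "11111111-1111-1111-1111-111111111111"
));
// https://account.contosobank.co.uk/11111111-1111-1111-1111-111111111111/
```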
+
+### Token issuance
+
+The token issuer (`iss`) claim changes based on the custom domain being used. For example:
+
+```http
+https://<domain-name>/11111111-1111-1111-1111-111111111111/v2.0/
+```
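A relying party that validates tokens itself would compare the `iss` claim against the issuer it expects for the custom domain. This is an illustrative check only; the domain and tenant ID are the example values above, and real validation should be done by your token-validation library:

```javascript
// Illustrative check: does the iss claim match the expected custom-domain issuer?
function isExpectedIssuer(iss, domain, tenantId) {
  return iss === `https://${domain}/${tenantId}/v2.0/`;
}

console.log(isExpectedIssuer(
  "https://login.contoso.com/11111111-1111-1111-1111-111111111111/v2.0/",
  "login.contoso.com",
  "11111111-1111-1111-1111-111111111111"
)); // true
```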
++
+## Block access to the default domain name
+
+After you add the custom domain and configure your application, users will still be able to access the <tenant-name>.b2clogin.com domain. To prevent access, you can configure the policy to check the authorization request "host name" against an allowed list of domains. The host name is the domain name that appears in the URL. The host name is available through `{Context:HostName}` [claim resolvers](claim-resolver-overview.md). Then you can present a custom error message.
+
+1. Get the sample custom policy that checks the host name from [GitHub](https://github.com/azure-ad-b2c/samples/blob/master/policies/check-host-name).
+1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
+1. Upload the policy files in the following order: `B2C_1A_TrustFrameworkExtensions_HostName.xml` and then `B2C_1A_signup_signin_HostName.xml`.
++
+## Troubleshooting
+
+### Azure AD B2C returns a page not found error
+
+- **Symptom** - After you configure a custom domain, when you try to sign in with the custom domain, you get an HTTP 404 error message.
+- **Possible causes** - This issue could be related to the DNS configuration or the Azure Front Door backend configuration.
+- **Resolution**:
+ 1. Make sure the custom domain is [registered and successfully verified](#add-a-custom-domain-name-to-your-tenant) in your Azure AD B2C tenant.
+ 1. Make sure the [custom domain](../frontdoor/front-door-custom-domain.md) is configured properly. The `CNAME` record for your custom domain must point to your Azure Front Door default frontend host (for example, contoso.azurefd.net).
+ 1. Make sure the [Azure Front Door backend pool configuration](#set-up-your-custom-domain-on-azure-front-door) points to the tenant where you set up the custom domain name, and where your user flow or custom policies are stored.
+
+### Identity provider returns an error
+
+- **Symptom** - After you configure a custom domain, you're able to sign in with local accounts. But when you sign in with credentials from external [social or enterprise identity providers](add-identity-provider.md), the identity provider presents an error message.
+- **Possible causes** - When Azure AD B2C takes the user to sign in with a federated identity provider, it specifies the redirect URI. The redirect URI is the endpoint to where the identity provider returns the token. The redirect URI is the same domain your application uses with the authorization request. If the redirect URI is not yet registered in the identity provider, it may not trust the new redirect URI, which results in an error message.
+- **Resolution** - Follow the steps in [Configure your identity provider](#configure-your-identity-provider) to add the new redirect URI.
++
+## Frequently asked questions
+
+### Can I use Azure Front Door advanced configuration, such as *Web application firewall Rules*?
+
+While Azure Front Door advanced configuration settings are not officially supported, you can use them at your own risk.
+
+### When I use Run Now to try to run my policy, why can't I see the custom domain?
+
+Copy the URL, change the domain name manually, and then paste it into your browser.
+
+### Which IP address is presented to Azure AD B2C? The user's IP address, or the Azure Front Door IP address?
+
+Azure Front Door passes the user's original IP address. This is the IP address that you'll see in the audit reporting or your custom policy.
+
+### Can I use a third-party web application firewall (WAF) with B2C?
+
+Currently, Azure AD B2C supports a custom domain through the use of Azure Front Door only. Don't add another WAF in front of Azure Front Door.
++
+## Next steps
+
+Learn about [OAuth authorization requests](protocols-overview.md).
+
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-mailjet.md
Previously updated : 03/10/2021 Last updated : 03/15/2021
Add the following claims transformation to the `<ClaimsTransformations>` element
## Add DataUri content definition
-Below the claims transformations within `<BuildingBlocks>`, add the following [ContentDefinition](contentdefinitions.md) to reference the version 2.1.0 data URI:
+Below the claims transformations within `<BuildingBlocks>`, add the following [ContentDefinition](contentdefinitions.md) to reference the version 2.1.2 data URI:
```XML
<!--
<BuildingBlocks> -->
  <ContentDefinitions>
    <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
    </ContentDefinition>
    <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
    </ContentDefinition>
  </ContentDefinitions>
<!--
To localize the email, you must send localized strings to Mailjet, or your email
<BuildingBlocks> -->
  <ContentDefinitions>
    <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
      <LocalizedResourcesReferences MergeBehavior="Prepend">
        <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
        <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
      </LocalizedResourcesReferences>
    </ContentDefinition>
    <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
      <LocalizedResourcesReferences MergeBehavior="Prepend">
        <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
        <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-sendgrid.md
Previously updated : 03/10/2021 Last updated : 03/15/2021
Add the following claims transformation to the `<ClaimsTransformations>` element
## Add DataUri content definition
-Below the claims transformations within `<BuildingBlocks>`, add the following [ContentDefinition](contentdefinitions.md) to reference the version 2.1.0 data URI:
+Below the claims transformations within `<BuildingBlocks>`, add the following [ContentDefinition](contentdefinitions.md) to reference the version 2.1.2 data URI:
```xml
<!--
<BuildingBlocks> -->
  <ContentDefinitions>
    <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
    </ContentDefinition>
    <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
    </ContentDefinition>
  </ContentDefinitions>
<!--
To localize the email, you must send localized strings to SendGrid, or your emai
<BuildingBlocks> -->
  <ContentDefinitions>
    <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
      <LocalizedResourcesReferences MergeBehavior="Prepend">
        <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
        <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
      </LocalizedResourcesReferences>
    </ContentDefinition>
    <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.2</DataUri>
      <LocalizedResourcesReferences MergeBehavior="Prepend">
        <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
        <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
active-directory-b2c Embedded Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/embedded-login.md
+
+ Title: Embed Azure Active Directory B2C user interface into your app with a custom policy
+
+description: Learn how to embed Azure Active Directory B2C user interface into your app with a custom policy
+Last updated : 03/15/2021
+# Embedded sign-in experience
+
+For a simpler sign-in experience, you can avoid redirecting users to a separate sign-in page or generating a pop-up window. By using the inline frame element `<iframe>`, you can embed the Azure AD B2C sign-in user interface directly into your web application.
+
+## Web application embedded sign-in
+
+The inline frame element `<iframe>` is used to embed a document in an HTML5 web page. You can use the iframe element to embed the Azure AD B2C sign-in user interface directly into your web application, as shown in the following example:
+
+![Login with hovering DIV experience](media/embedded-login/login-hovering.png)
+
+When using iframe, consider the following:
+
+- Embedded sign-in supports local accounts only. Most social identity providers (for example, Google and Facebook) block their sign-in pages from being rendered in inline frames.
+- Because Azure AD B2C session cookies within an iframe are considered third-party cookies, certain browsers (for example, Safari or Chrome in incognito mode) either block or clear these cookies, resulting in an undesirable user experience. To prevent this issue, make sure your application domain name and your Azure AD B2C domain have the *same origin*. For example, an application hosted on https://app.contoso.com has the same origin as Azure AD B2C running on https://login.contoso.com.
+
+## Configure your policy
+
+To allow your Azure AD B2C user interface to be embedded in an iframe, a content security policy `Content-Security-Policy` and frame options `X-Frame-Options` must be included in the Azure AD B2C HTTP response headers. These headers allow the Azure AD B2C user interface to run under your application domain name.
+
+Add a **JourneyFraming** element inside the [RelyingParty](relyingparty.md) element. The **UserJourneyBehaviors** element must follow the **DefaultUserJourney**. Your **UserJourneyBehaviors** element should look like this example:
+
+```xml
+<!--
+<RelyingParty>
+ <DefaultUserJourney ReferenceId="SignUpOrSignIn" /> -->
+ <UserJourneyBehaviors>
+ <JourneyFraming Enabled="true" Sources="https://somesite.com https://anothersite.com" />
+ </UserJourneyBehaviors>
+<!--
+</RelyingParty> -->
+```
+
+The **Sources** attribute contains the URI of your web application. Add a space between URIs. Each URI must meet the following requirements:
+
+- The URI must be trusted and owned by your application.
+- The URI must use the https scheme.
+- The full URI of the web app must be specified. Wildcards are not supported.
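A quick pre-check of candidate **Sources** entries against these rules might look like this. It's a sketch only; Azure AD B2C performs its own validation when you upload the policy:

```javascript
// Check a JourneyFraming Sources entry: must parse as a URL,
// use the https scheme, and contain no wildcard.
function isValidFramingSource(uri) {
  try {
    const u = new URL(uri);
    return u.protocol === "https:" && !uri.includes("*");
  } catch {
    return false;
  }
}

console.log(isValidFramingSource("https://somesite.com"));    // true
console.log(isValidFramingSource("http://somesite.com"));     // false: not https
console.log(isValidFramingSource("https://*.somesite.com"));  // false: wildcard
```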
+
+In addition, we recommend that you also block your own domain name from being embedded in an iframe by setting the Content-Security-Policy and X-Frame-Options headers respectively on your application pages. This will mitigate security concerns around older browsers related to nested embedding of iframes.
+
+## Adjust policy user interface
+
+With Azure AD B2C [user interface customization](customize-ui.md), you have almost full control over the HTML and CSS content presented to users. Follow the steps for customizing an HTML page using content definitions. To fit the Azure AD B2C user interface into the iframe size, provide a clean HTML page without a background or extra spaces.
+
+The following CSS code hides the Azure AD B2C HTML elements and adjusts the size of the panel to fill the iframe.
+
+```css
+div.social, div.divider {
+ display: none;
+}
+
+div.api_container{
+ padding-top:0;
+}
+
+.panel {
+  width: 100% !important;
+}
+```
+
+In some cases, you might want to notify your application of which Azure AD B2C page is currently being presented. For example, when a user selects the sign-up option, you might want the application to respond by hiding the links for signing in with a social account or adjusting the iframe size.
+
+To notify your application of the current Azure AD B2C page, [enable your policy for JavaScript](javascript-samples.md), and then use HTML5 post messages. The following JavaScript code sends a post message to the app with `signUp`:
+
+```javascript
+window.parent.postMessage("signUp", '*');
+```
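On the application side, the parent page can listen for that message and react. The following is a minimal sketch; the action names are hypothetical, and in production you must verify `event.origin` against your Azure AD B2C domain before trusting the message:

```javascript
// Map a message posted by the embedded B2C page to a UI action.
// Kept as a pure function so the routing logic is easy to test.
function handleB2cMessage(data) {
  if (data === "signUp") return "hide-social-links";
  return "no-op";
}

// Browser wiring (runs only where `window` exists):
if (typeof window !== "undefined") {
  window.addEventListener("message", (event) => {
    // In production, check event.origin against your B2C custom domain first.
    handleB2cMessage(event.data);
  });
}
```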
+
+## Configure a web application
+
+When a user selects the sign-in button, the [web app](code-samples.md#web-apps-and-apis) generates an authorization request that takes the user to Azure AD B2C sign-in experience. After sign-in is complete, Azure AD B2C returns an ID token, or authorization code, to the configured redirect URI within your application.
+
+To support embedded login, the iframe **src** property points to the sign-in controller, such as `/account/SignUpSignIn`, which generates the authorization request and redirects the user to the Azure AD B2C policy.
+
+```html
+<iframe id="loginframe" frameborder="0" src="/account/SignUpSignIn"></iframe>
+```
+
+After the ID token is received and validated by the application, the authorization flow is complete and the application recognizes and trusts the user. Because the authorization flow happens inside the iframe, you need to reload the main page. After the page reloads, the sign-in button changes to "sign out" and the username is presented in the UI.
+
+The following example shows how the page served at the sign-in redirect URI can refresh the main page:
+
+```javascript
+window.top.location.reload();
+```
+
+### Add sign-in with social accounts to a web app
+
+Social identity providers block their sign-in pages from rendering in inline frames. You can use a separate policy for social accounts, or you can use a single policy for both sign-in and sign-up with local and social accounts together with the `domain_hint` query string parameter. The domain hint takes the user directly to the social identity provider's sign-in page.
+
+In your application, add the sign-in buttons for social accounts. When a user selects one of the social account buttons, the control needs to change the policy name or set the domain hint parameter.
+
+<!-- TBD: add a diagram -->
+
+The redirect URI can be the same redirect URI used by the iframe. You can skip the page reload.
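One way to wire this up is to build the sign-in URL with the domain hint appended. The helper below is a hypothetical sketch: the controller path matches the iframe example above, and the hint values (for example, `facebook.com`) depend on the identity providers you configured:

```javascript
// Hypothetical helper: append a domain_hint so Azure AD B2C sends the user
// straight to the social identity provider's sign-in page.
function buildSignInUrl(domainHint) {
  const base = "/account/SignUpSignIn";
  return domainHint ? `${base}?domain_hint=${encodeURIComponent(domainHint)}` : base;
}

// A social button opens this URL in the top window (not the iframe),
// because social providers block rendering in inline frames:
// window.top.location.href = buildSignInUrl("facebook.com");
```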
+
+## Configure a single-page application
+
+For a single-page application, you also need a second "sign-in" HTML page that loads into the iframe. This sign-in page hosts the authentication library code that generates the authorization code and returns the token.
+
+When the single-page application needs the access token, use JavaScript code to obtain the access token from the iframe and the object that contains it.
+
+> [!NOTE]
+> Running MSAL 2.0 in an iframe is not currently supported.
+
+The following code is an example that runs on the main page and calls an iframe's JavaScript code:
+
+```javascript
+function getToken() {
+  var token = document.getElementById("loginframe").contentWindow.getToken("adB2CSignInSignUp");
+
+  if (token === "LoginIsRequired") {
+    document.getElementById("tokenTextarea").value = "Please login!!!";
+  } else {
+    document.getElementById("tokenTextarea").value = token.access_token;
+  }
+}
+
+function logOut() {
+  document.getElementById("loginframe").contentWindow.policyLogout("adB2CSignInSignUp", "B2C_1A_SignUpOrSignIn");
+}
+```
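For the example above to work, the page inside the iframe must expose a matching `getToken` function. The following is a hypothetical sketch of that side; the cache shape is an assumption, and in a real page your authentication library would populate it after the redirect completes (recall that MSAL 2.0 is not currently supported in an iframe):

```javascript
// Hypothetical token cache, keyed by policy name. Your authentication
// library would populate this after the authorization flow completes.
const tokenCache = {};

// Called from the main page via the iframe's contentWindow.
function getToken(policyKey) {
  const entry = tokenCache[policyKey];
  // Return the sentinel string the hosting page checks for.
  if (!entry || entry.expiresOn <= Date.now()) {
    return "LoginIsRequired";
  }
  return entry; // e.g. { access_token: "...", expiresOn: ... }
}
```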
+
+## Next steps
+
+See the following related articles:
+
+- [User interface customization](customize-ui.md)
+- [RelyingParty](relyingparty.md) element reference
+- [Enable your policy for JavaScript](javascript-samples.md)
+- [Code samples](code-samples.md)
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-adfs.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To use AD FS as an identity provider in Azure AD B2C, you need to create an AD F
https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/your-policy/samlp/metadata?idptp=your-technical-profile ```
+When using a [custom domain](custom-domain.md), use the following format:
+
+```
+https://your-domain-name/your-tenant-name.onmicrosoft.com/your-policy/samlp/metadata?idptp=your-technical-profile
+```
+ Replace the following values: -- **your-tenant** with your tenant name, such as your-tenant.onmicrosoft.com.
+- **your-tenant-name** with your tenant name, such as your-tenant.onmicrosoft.com.
+- **your-domain-name** with your custom domain name, such as login.contoso.com.
- **your-policy** with your policy name. For example, B2C_1A_signup_signin_adfs. - **your-technical-profile** with the name of your SAML identity provider technical profile. For example, Contoso-SAML2.
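The substitutions above can be sketched as a small helper that composes the metadata URL in either the default or the custom-domain form; the function name is an assumption for illustration:

```javascript
// Hypothetical helper composing the SAML metadata URL for a B2C policy.
// Pass customDomain (for example "login.contoso.com") for the custom-domain form.
function samlMetadataUrl(tenantName, policy, technicalProfile, customDomain) {
  const host = customDomain || `${tenantName}.b2clogin.com`;
  return `https://${host}/${tenantName}.onmicrosoft.com/${policy}/samlp/metadata?idptp=${technicalProfile}`;
}

// Example: samlMetadataUrl("your-tenant-name", "B2C_1A_signup_signin_adfs", "Contoso-SAML2")
```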
active-directory-b2c Identity Provider Amazon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-amazon.md
Previously updated : 03/08/2021 Last updated : 03/15/2021 zone_pivot_groups: b2c-policy-type
zone_pivot_groups: b2c-policy-type
To enable sign-in for users with an Amazon account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [Amazon Developer Services and Technologies](https://developer.amazon.com). For more information, see [Register for Login with Amazon](https://developer.amazon.com/docs/login-with-amazon/register-web.html). If you don't already have an Amazon account, you can sign up at [https://www.amazon.com/](https://www.amazon.com/).
-> [!NOTE]
-> Use the following URLs in **step 8** below, replacing `your-tenant-name` with the name of your tenant. When entering your tenant name, use all lowercase letters, even if the tenant is defined with uppercase letters in Azure AD B2C.
-> - For **Allowed Origins**, enter `https://your-tenant-name.b2clogin.com`
-> - For **Allowed Return URLs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`
-
+1. Sign in to the [Amazon Developer Console](https://developer.amazon.com/dashboard) with your Amazon account credentials.
+1. If you have not already done so, select **Sign Up**, follow the developer registration steps, and then accept the policy.
+1. From the Dashboard, select **Login with Amazon**.
+1. Select **Create a New Security Profile**.
+1. Enter a **Security Profile Name**, **Security Profile Description**, and **Consent Privacy Notice URL**, for example `https://www.contoso.com/privacy`. The privacy notice URL is a page you manage that provides privacy information to users. Then select **Save**.
+1. In the **Login with Amazon Configurations** section, select the **Security Profile Name** you created, select the **Manage** icon, and then select **Web Settings**.
+1. In the **Web Settings** section, copy the value of **Client ID**. Select **Show Secret** to get the client secret, and then copy it. You need both values to configure an Amazon account as an identity provider in your tenant. **Client Secret** is an important security credential.
+1. In the **Web Settings** section, select **Edit**.
+ 1. In **Allowed Origins**, enter `https://your-tenant-name.b2clogin.com`. Replace `your-tenant-name` with the name of your tenant. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
+ 1. In **Allowed Return URLs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
+1. Select **Save**.
::: zone pivot="b2c-user-flow"
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-apple-id.md
Previously updated : 03/09/2021 Last updated : 03/15/2021
To enable sign-in for users with an Apple ID in Azure Active Directory B2C (Azur
1. From **Identifiers**, select the identifier you created. 1. Select **Sign In with Apple**, and then select **Configure**. 1. Select the **Primary App ID** you want to configure Sign in with Apple with.
- 1. In **Domains and Subdomains**, enter `your-tenant-name.b2clogin.com`. Replace your-tenant-name with the name of your tenant.
- 1. In **Return URLs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace your-tenant-name with the name of your tenant.
+ 1. In **Domains and Subdomains**, enter `your-tenant-name.b2clogin.com`. Replace `your-tenant-name` with the name of your tenant. If you use a [custom domain](custom-domain.md), enter `your-domain-name`.
+ 1. In **Return URLs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
1. Select **Next**, and then select **Done**. 1. When the pop-up window is closed, select **Continue**, and then select **Save**.
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To create an application.
For example, `https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp`.
+ If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-domain-name` with your custom domain, and `your-tenant-name` with the name of your tenant.
+ 1. Under Permissions, select the **Grant admin consent to openid and offline_access permissions** check box. 1. Select **Register**. 1. In the **Azure AD B2C - App registrations** page, select the application you created, for example *ContosoApp*.
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with an Azure AD account in Azure Active Directory B
For example, `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`.
+ If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-domain-name` with your custom domain, and `your-tenant-name` with the name of your tenant.
+ 1. Select **Register**. Record the **Application (client) ID** for use in a later step. 1. Select **Certificates & secrets**, and then select **New client secret**. 1. Enter a **Description** for the secret, select an expiration, and then select **Add**. Record the **Value** of the secret for use in a later step.
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with an Azure AD account from a specific Azure AD or
For example, `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`.
+ If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-domain-name` with your custom domain, and `your-tenant-name` with the name of your tenant.
+ 1. Select **Register**. Record the **Application (client) ID** for use in a later step. 1. Select **Certificates & secrets**, and then select **New client secret**. 1. Enter a **Description** for the secret, select an expiration, and then select **Add**. Record the **Value** of the secret for use in a later step.
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-facebook.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with a Facebook account in Azure Active Directory B2
1. Select **Show** and copy the value of **App Secret**. You use both of them to configure Facebook as an identity provider in your tenant. **App Secret** is an important security credential. 1. From the menu, select the **plus** sign next to **PRODUCTS**. Under the **Add Products to Your App**, select **Set up** under **Facebook Login**. 1. From the menu, select **Facebook Login**, select **Settings**.
-1. In **Valid OAuth redirect URIs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant. Select **Save Changes** at the bottom of the page.
+1. In **Valid OAuth redirect URIs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
+1. Select **Save Changes** at the bottom of the page.
1. To make your Facebook application available to Azure AD B2C, select the Status selector at the top right of the page and turn it **On** to make the Application public, and then select **Switch Mode**. At this point, the Status should change from **Development** to **Live**. ::: zone pivot="b2c-user-flow"
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-saml-options.md
Previously updated : 03/04/2021 Last updated : 03/15/2021
active-directory-b2c Identity Provider Generic Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-saml.md
The following example shows a URL address for the SAML metadata of an Azure AD B
https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-policy>/samlp/metadata?idptp=<your-technical-profile> ```
+When using a [custom domain](custom-domain.md), use the following format:
+
+```
+https://your-domain-name/<your-tenant-name>.onmicrosoft.com/<your-policy>/samlp/metadata?idptp=<your-technical-profile>
+```
+ Replace the following values: -- **your-tenant** with your tenant name, such as your-tenant.onmicrosoft.com.
+- **your-tenant-name** with your tenant name, such as your-tenant.onmicrosoft.com.
+- **your-domain-name** with your custom domain name, such as login.contoso.com.
- **your-policy** with your policy name. For example, B2C_1A_signup_signin_adfs. - **your-technical-profile** with the name of your SAML identity provider technical profile. For example, Contoso-SAML2.
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-github.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in with a GitHub account in Azure Active Directory B2C (Azure AD
1. Sign in to the [GitHub Developer](https://github.com/settings/developers) with your GitHub credentials. 1. Select **OAuth Apps** and then select **New OAuth App**. 1. Enter an **Application name** and your **Homepage URL**.
-1. Enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` in **Authorization callback URL**. Replace `your-tenant-name` with the name of your Azure AD B2C tenant. Use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
+1. For the **Authorization callback URL**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-domain-name` with your custom domain, and `your-tenant-name` with the name of your tenant. Use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
1. Click **Register application**. 1. Copy the values of **Client ID** and **Client Secret**. You need both to add the identity provider to your tenant.
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-google.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with a Google account in Azure Active Directory B2C
Enter a **Name** for your application. Enter *b2clogin.com* in the **Authorized domains** section and select **Save**. 1. Select **Credentials** in the left menu, and then select **Create credentials** > **Oauth client ID**. 1. Under **Application type**, select **Web application**.
-1. Enter a **Name** for your application, enter `https://your-tenant-name.b2clogin.com` in **Authorized JavaScript origins**, and `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` in **Authorized redirect URIs**. Replace `your-tenant-name` with the name of your tenant. Use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
+ 1. Enter a **Name** for your application.
+ 1. For the **Authorized JavaScript origins**, enter `https://your-tenant-name.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
+ 1. For the **Authorized redirect URIs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-domain-name` with your custom domain, and `your-tenant-name` with the name of your tenant. Use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
1. Click **Create**. 1. Copy the values of **Client ID** and **Client secret**. You will need both of them to configure Google as an identity provider in your tenant. **Client secret** is an important security credential.
active-directory-b2c Identity Provider Id Me https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-id-me.md
Previously updated : 03/08/2021 Last updated : 03/15/2021 zone_pivot_groups: b2c-policy-type
To enable sign-in for users with an ID.me account in Azure Active Directory B2C
1. Select **View My Applications**, and select **Continue**. 1. Select **Create new** 1. Enter a **Name**, and **Display Name**.
- 1. In the **Redirect URI**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant.
+ 1. In the **Redirect URI**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
1. Click **Continue**. 1. Copy the values of **Client ID** and **Client Secret**. You need both to add the identity provider to your tenant.
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-linkedin.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with a LinkedIn account in Azure Active Directory B2
1. Enter **App name**, **LinkedIn Page**, **Privacy policy URL**, and **App logo**. 1. Agree to the LinkedIn **API Terms of Use** and click **Create app**. 1. Select the **Auth** tab. Under **Authentication Keys**, copy the values for **Client ID** and **Client Secret**. You'll need both of them to configure LinkedIn as an identity provider in your tenant. **Client Secret** is an important security credential.
-1. Select the edit pencil next to **Authorized redirect URLs for your app**, and then select **Add redirect URL**. Enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`, replacing `your-tenant-name` with the name of your tenant. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C. Select **Update**.
-2. By default, your LinkedIn app isn't approved for scopes related to sign in. To request a review, select the **Products** tab, and then select **Sign In with LinkedIn**. When the review is complete, the required scopes will be added to your application.
+1. Select the edit pencil next to **Authorized redirect URLs for your app**, and then select **Add redirect URL**. Enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C. Select **Update**.
+1. By default, your LinkedIn app isn't approved for scopes related to sign in. To request a review, select the **Products** tab, and then select **Sign In with LinkedIn**. When the review is complete, the required scopes will be added to your application.
> [!NOTE] > You can view the scopes that are currently allowed for your app on the **Auth** tab in the **OAuth 2.0 scopes** section.
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-microsoft-account.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with a Microsoft account in Azure Active Directory B
1. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**. For more information on the different account type selections, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
-1. Under **Redirect URI (optional)**, select **Web** and enter `https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/oauth2/authresp` in the text box. Replace `<tenant-name>` with your Azure AD B2C tenant name.
+1. Under **Redirect URI (optional)**, select **Web** and enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
1. Select **Register** 1. Record the **Application (client) ID** shown on the application Overview page. You need the client ID when you configure the identity provider in the next section. 1. Select **Certificates & secrets**
active-directory-b2c Identity Provider Qq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-qq.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with a QQ account in Azure Active Directory B2C (Azu
1. Go to [https://connect.qq.com/https://docsupdatetracker.net/index.html](https://connect.qq.com/https://docsupdatetracker.net/index.html). 1. Select **应用管理** (app management). 1. Select **创建应用** (create app) and enter the required information.
-1. Enter `https://your-tenant-name.b2clogin.com/your-tenant-name}.onmicrosoft.com/oauth2/authresp` in **授权回调域** (callback URL). For example, if your `tenant_name` is contoso, set the URL to be `https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp`.
+1. For the **授权回调域** (callback URL), enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
1. Select **创建应用** (create app). 1. On the confirmation page, select **应用管理** (app management) to return to the app management page. 1. Select **查看** (view) next to the app you created.
active-directory-b2c Identity Provider Salesforce Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-salesforce-saml.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
This article shows you how to enable sign-in for users from a Salesforce organiz
https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase ```
+ When using a [custom domain](custom-domain.md), use the following format:
+
+ ```
+ https://your-domain-name/your-tenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase
+ ```
+ 6. In the **ACS URL** field, enter the following URL. Make sure that you replace the value for `your-tenant` with the name of your Azure AD B2C tenant. ``` https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase/samlp/sso/assertionconsumer ```+
+ When using a [custom domain](custom-domain.md), use the following format:
+
+ ```
+ https://your-domain-name/your-tenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase/samlp/sso/assertionconsumer
+ ```
+ 7. Scroll to the bottom of the list, and then click **Save**. ### Get the metadata URL
active-directory-b2c Identity Provider Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-salesforce.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with a Salesforce account in Azure Active Directory
1. **API Name** 1. **Contact Email** - The contact email for Salesforce 1. Under **API (Enable OAuth Settings)**, select **Enable OAuth Settings**
- 1. In **Callback URL**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
+ 1. For the **Callback URL**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
1. In the **Selected OAuth Scopes**, select **Access your basic information (id, profile, email, address, phone)**, and **Allow access to your unique identifier (openid)**. 1. Select **Require Secret for Web Server Flow**. 1. Select **Configure ID Token**
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-twitter.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
1. Under **Authentication settings**, select **Edit** 1. Select **Enable 3-legged OAuth** checkbox. 1. Select **Request email address from users** checkbox.
- 1. For the **Callback URLs**, enter `https://your-tenant.b2clogin.com/your-tenant.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Replace `your-tenant` with the name of your tenant name and `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`. Use all lowercase letters when entering your tenant name and user flow id even if they are defined with uppercase letters in Azure AD B2C.
- 1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`.
+ 1. For the **Callback URLs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID even if they are defined with uppercase letters in Azure AD B2C. Replace:
+ - `your-tenant-name` with the name of your tenant.
+ - `your-domain-name` with your custom domain.
+ - `your-user-flow-Id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`.
+
+ 1. For the **Website URL**, enter `https://your-tenant-name.b2clogin.com`. Replace `your-tenant-name` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The terms of service URL is a page you maintain to provide terms and conditions for your application. 1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The privacy policy URL is a page you maintain to provide privacy information for your application. 1. Select **Save**.
active-directory-b2c Identity Provider Wechat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-wechat.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with a WeChat account in Azure Active Directory B2C
1. Sign in to [https://open.weixin.qq.com/](https://open.weixin.qq.com/) with your WeChat credentials. 1. Select **管理中心** (management center). 1. Follow the steps to register a new application.
-1. Enter `https://your-tenant_name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` in **授权回调域** (callback URL). For example, if your tenant name is contoso, set the URL to be `https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp`.
+1. For the **授权回调域** (callback URL), enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
1. Copy the **APP ID** and **APP KEY**. You need both of them to configure the identity provider to your tenant. ::: zone pivot="b2c-user-flow"
active-directory-b2c Identity Provider Weibo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-weibo.md
Previously updated : 03/08/2021 Last updated : 03/15/2021
To enable sign-in for users with a Weibo account in Azure Active Directory B2C (
1. Select **保存以上信息** (save). 1. Select **高级信息** (advanced information). 1. Select **编辑** (edit) next to the field for OAuth2.0 **授权设置** (redirect URL).
-1. Enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp` for OAuth2.0 **授权设置** (redirect URL). For example, if your tenant name is contoso, set the URL to be `https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/authresp`.
+1. For the OAuth2.0 **授权设置** (redirect URL), enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
1. Select **提交** (submit). ::: zone pivot="b2c-user-flow"
active-directory-b2c Multiple Token Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/multiple-token-endpoints.md
Title: Migrate OWIN-based web APIs to b2clogin.com
+ Title: Migrate OWIN-based web APIs to b2clogin.com or a custom domain
description: Learn how to enable a .NET web API to support tokens issued by multiple token issuers while you migrate your applications to b2clogin.com.
Previously updated : 07/31/2019 Last updated : 03/15/2021
-# Migrate an OWIN-based web API to b2clogin.com
+# Migrate an OWIN-based web API to b2clogin.com or a custom domain
-This article describes a technique for enabling support for multiple token issuers in web APIs that implement the [Open Web Interface for .NET (OWIN)](http://owin.org/). Supporting multiple token endpoints is useful when you're migrating Azure Active Directory B2C (Azure AD B2C) APIs and their applications from *login.microsoftonline.com* to *b2clogin.com*.
+This article describes a technique for enabling support for multiple token issuers in web APIs that implement the [Open Web Interface for .NET (OWIN)](http://owin.org/). Supporting multiple token endpoints is useful when you're migrating Azure Active Directory B2C (Azure AD B2C) APIs and their applications from one domain to another. For example, from *login.microsoftonline.com* to *b2clogin.com*, or to a [custom domain](custom-domain.md).
-By adding support in your API for accepting tokens issued by both b2clogin.com and login.microsoftonline.com, you can migrate your web applications in a staged manner before removing support for login.microsoftonline.com-issued tokens from the API.
+By adding support in your API for accepting tokens issued by b2clogin.com, login.microsoftonline.com, or a custom domain, you can migrate your web applications in a staged manner before removing support for login.microsoftonline.com-issued tokens from the API.
The following sections present an example of how to enable multiple issuers in a web API that uses the [Microsoft OWIN][katana] middleware components (Katana). Although the code examples are specific to the Microsoft OWIN middleware, the general technique should be applicable to other OWIN libraries.
-> [!NOTE]
-> This article is intended for Azure AD B2C customers with currently deployed APIs and applications that reference `login.microsoftonline.com` and who want to migrate to the recommended `b2clogin.com` endpoint. If you're setting up a new application, use [b2clogin.com](b2clogin.md) as directed.
- ## Prerequisites You need the following Azure AD B2C resources in place before continuing with the steps in this article:
In this section, you update the code to specify that both token issuer endpoints
AuthenticationType = Startup.DefaultPolicy, ValidIssuers = new List<string> { "https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/v2.0/",
- "https://{your-b2c-tenant}.b2clogin.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/v2.0/"
+ "https://{your-b2c-tenant}.b2clogin.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/v2.0/"//,
+ //"https://your-custom-domain/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/v2.0/"
} }; ```
After (replace `{your-b2c-tenant}` with the name of your B2C tenant):
When the endpoint strings are constructed during execution of the web app, the b2clogin.com-based endpoints are used when it requests tokens.
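The staged acceptance configured above amounts to an allow-list check on the token's `iss` claim; a language-neutral Python sketch of the idea (the OWIN middleware performs this for you via `ValidIssuers` — the function and placeholder names here are illustrative):

```python
def is_trusted_issuer(iss, tenant, tenant_id):
    # Accept tokens from the old endpoint, b2clogin.com, or a custom
    # domain while applications are migrated in stages.
    valid_issuers = {
        f"https://login.microsoftonline.com/{tenant_id}/v2.0/",
        f"https://{tenant}.b2clogin.com/{tenant_id}/v2.0/",
        # f"https://your-custom-domain/{tenant_id}/v2.0/",
    }
    return iss in valid_issuers
```

Once migration is complete, the allow-list shrinks back to a single entry and login.microsoftonline.com-issued tokens are rejected.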
+When using a custom domain:
+
+```xml
+<!-- Custom domain -->
+<add key="ida:AadInstance" value="https://custom-domain/{0}/{1}" />
+```
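The `ida:AadInstance` value is a format string with two placeholders. Assuming the sample projects compose the authority with `string.Format(AadInstance, Tenant, Policy)` (a detail of the quickstart projects, not verified here), the equivalent sketch:

```python
# "{0}" = tenant, "{1}" = policy, mirroring the Web.config format string.
AAD_INSTANCE = "https://contoso.b2clogin.com/{0}/{1}"  # or https://your-custom-domain/{0}/{1}

def authority(tenant, policy):
    # Builds the per-policy authority URL used when requesting tokens.
    return AAD_INSTANCE.format(tenant, policy)
```

Switching domains then only requires changing the configured instance value, not the code that builds the authority.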
+ ## Next steps This article presented a method of configuring a web API implementing the Microsoft OWIN middleware (Katana) to accept tokens from multiple issuer endpoints. As you might notice, there are several other strings in the *Web.Config* files of both the TaskService and TaskWebApp projects that would need to be changed if you want to build and run these projects against your own tenant. You're welcome to modify the projects appropriately if you want to see them in action; however, a full walk-through of doing so is outside the scope of this article.
For more information about the different types of security tokens emitted by Azu
<!-- LINKS - Internal --> [katana]: /aspnet/aspnet/overview/owin-and-katana/ [validissuers]: /dotnet/api/microsoft.identitymodel.tokens.tokenvalidationparameters.validissuers
-[tokenvalidationparameters]: /dotnet/api/microsoft.identitymodel.tokens.tokenvalidationparameters
+[tokenvalidationparameters]: /dotnet/api/microsoft.identitymodel.tokens.tokenvalidationparameters
active-directory-b2c Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/openid-connect.md
Previously updated : 03/10/2021 Last updated : 03/15/2021
OpenID Connect is an authentication protocol, built on top of OAuth 2.0, that can be used to securely sign users in to web applications. By using the Azure Active Directory B2C (Azure AD B2C) implementation of OpenID Connect, you can outsource sign-up, sign-in, and other identity management experiences in your web applications to Azure Active Directory (Azure AD). This guide shows you how to do so in a language-independent manner. It describes how to send and receive HTTP messages without using any of our open-source libraries.
+> [!NOTE]
+> Most of the open-source authentication libraries acquire and validate the JWT tokens for your application. We recommend exploring those options, rather than implementing your own code. For more information, see [Overview of the Microsoft Authentication Library (MSAL)](https://docs.microsoft.com/azure/active-directory/develop/msal-overview), and [Microsoft Identity Web authentication library](https://docs.microsoft.com/azure/active-directory/develop/microsoft-identity-web).
+ [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html) extends the OAuth 2.0 *authorization* protocol for use as an *authentication* protocol. This authentication protocol allows you to perform single sign-on. It introduces the concept of an *ID token*, which allows the client to verify the identity of the user and obtain basic profile information about the user. Because it extends OAuth 2.0, it also enables applications to securely acquire *access tokens*. You can use access tokens to access resources that are secured by an [authorization server](protocols-overview.md). OpenID Connect is recommended if you're building a web application that's hosted on a server and accessed through a browser. For more information about tokens, see the [Overview of tokens in Azure Active Directory B2C](tokens-overview.md).
error=access_denied
## Validate the ID token
-Just receiving an ID token is not enough to authenticate the user. Validate the ID token's signature and verify the claims in the token per your application's requirements. Azure AD B2C uses [JSON Web Tokens (JWTs)](https://self-issued.info/docs/draft-ietf-oauth-json-web-token.html) and public key cryptography to sign tokens and verify that they are valid. There are many open-source libraries that are available for validating JWTs, depending on your language of preference. We recommend exploring those options rather than implementing your own validation logic.
+Just receiving an ID token is not enough to authenticate the user. Validate the ID token's signature and verify the claims in the token per your application's requirements. Azure AD B2C uses [JSON Web Tokens (JWTs)](https://self-issued.info/docs/draft-ietf-oauth-json-web-token.html) and public key cryptography to sign tokens and verify that they are valid.
+
+> [!NOTE]
+> Most of the open-source authentication libraries validate the JWT tokens for your application. We recommend exploring those options, rather than implementing your own validation logic. For more information, see [Overview of the Microsoft Authentication Library (MSAL)](https://docs.microsoft.com/azure/active-directory/develop/msal-overview), and [Microsoft Identity Web authentication library](https://docs.microsoft.com/azure/active-directory/develop/microsoft-identity-web).
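Beyond the signature, the required validations reduce to a handful of claim comparisons; a stdlib-only sketch of the payload checks (signature verification itself still needs a JWT library plus the signing keys from the metadata endpoint; function names here are illustrative):

```python
import base64
import json

def decode_payload(jwt_segment):
    # JWT segments are base64url-encoded without padding; restore it.
    jwt_segment += "=" * (-len(jwt_segment) % 4)
    return json.loads(base64.urlsafe_b64decode(jwt_segment))

def claims_ok(payload, expected_issuer, expected_audience, now):
    # Issuer, audience, and validity-window checks; real code should
    # also verify the nonce and any policy claims it relies on.
    return (payload.get("iss") == expected_issuer
            and payload.get("aud") == expected_audience
            and payload.get("nbf", 0) <= now < payload.get("exp", 0))
```

A token failing any of these checks must be rejected before a session is started.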
Azure AD B2C has an OpenID Connect metadata endpoint, which allows an application to get information about Azure AD B2C at runtime. This information includes endpoints, token contents, and token signing keys. There is a JSON metadata document for each user flow in your B2C tenant. For example, the metadata document for the `b2c_1_sign_in` user flow in `fabrikamb2c.onmicrosoft.com` is located at:
There are also several more validations that you should perform. The validations
- Ensuring that the user has proper authorization/privileges. - Ensuring that a certain strength of authentication has occurred, such as Azure AD Multi-Factor Authentication.
-After you validate the ID token, you can begin a session with the user. You can use the claims in the ID token to obtain information about the user in your application. Uses for this information include display, records, and authorization.
+After the ID token is validated, you can begin a session with the user. You can use the claims in the ID token to obtain information about the user in your application. Uses for this information include display, records, and authorization.
## Get a token
active-directory-b2c Partner Saviynt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-saviynt.md
The following architecture diagram shows the implementation.
1. To create a Saviynt account, contact [Saviynt](https://saviynt.com/contact-us/)
-2. Create delegated administration policies and assign users as [delegated administrators](../active-directory/roles/concept-delegation.md) with various roles.
+2. Create delegated administration policies and assign users as delegated administrators with various roles.
## Configure Azure AD B2C with Saviynt
active-directory-b2c Relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/relyingparty.md
Previously updated : 03/04/2021 Last updated : 03/15/2021
The **UserJourneyBehaviors** element contains the following elements:
| JourneyInsights | 0:1 | The Azure Application Insights instrumentation key to be used. | | ContentDefinitionParameters | 0:1 | The list of key value pairs to be appended to the content definition load URI. | |ScriptExecution| 0:1| The supported [JavaScript](javascript-and-page-layout.md) execution modes. Possible values: `Allow` or `Disallow` (default).
+| JourneyFraming | 0:1| Allows the user interface of this policy to be loaded in an iframe. |
### SingleSignOn
-The **SingleSignOn** element contains in the following attribute:
+The **SingleSignOn** element contains the following attributes:
| Attribute | Required | Description | | | -- | -- |
The **JourneyInsights** element contains the following attributes:
| | -- | -- | | TelemetryEngine | Yes | The value must be `ApplicationInsights`. | | InstrumentationKey | Yes | The string that contains the instrumentation key for the application insights element. |
-| DeveloperMode | Yes | Possible values: `true` or `false`. If `true`, Application Insights expedites the telemetry through the processing pipeline. This setting is good for development, but constrained at high volumes. The detailed activity logs are designed only to aid in development of custom policies. Do not use development mode in production. Logs collect all claims sent to and from the identity providers during development. If used in production, the developer assumes responsibility for PII (Privately Identifiable Information) collected in the App Insights log that they own. These detailed logs are only collected when this value is set to `true`.|
+| DeveloperMode | Yes | Possible values: `true` or `false`. If `true`, Application Insights expedites the telemetry through the processing pipeline. This setting is good for development, but constrained at high volumes. The detailed activity logs are designed only to aid in development of custom policies. Do not use development mode in production. Logs collect all claims sent to and from the identity providers during development. If used in production, the developer assumes responsibility for personal data collected in the App Insights log that they own. These detailed logs are only collected when this value is set to `true`.|
| ClientEnabled | Yes | Possible values: `true` or `false`. If `true`, sends the Application Insights client-side script for tracking page view and client-side errors. | | ServerEnabled | Yes | Possible values: `true` or `false`. If `true`, sends the existing UserJourneyRecorder JSON as a custom event to Application Insights. | | TelemetryVersion | Yes | The value must be `1.0.0`. |
The **ContentDefinitionParameter** element contains the following attribute:
For more information, see [Configure the UI with dynamic content by using custom policies](customize-ui-with-html.md#configure-dynamic-custom-page-content-uri)
+### JourneyFraming
+
+The **JourneyFraming** element contains the following attributes:
+
+| Attribute | Required | Description |
+| | -- | -- |
+| Enabled | Yes | Enables this policy to be loaded within an iframe. Possible values: `false` (default), or `true`. |
+| Sources | Yes | Contains the domains that will host the iframe. For more information, see [Loading Azure B2C in an iframe](embedded-login.md). |
+ ## TechnicalProfile The **TechnicalProfile** element contains the following attribute:
active-directory-b2c Saml Service Provider Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-service-provider-options.md
Previously updated : 03/04/2021 Last updated : 03/15/2021
Example:
You can manage the session between Azure AD B2C and the SAML relying party application using the `UseTechnicalProfileForSessionManagement` element and the [SamlSSOSessionProvider](custom-policy-reference-sso.md#samlssosessionprovider).
+## Force users to re-authenticate
+
+To force users to re-authenticate, the application can include the `ForceAuthn` attribute in the SAML authentication request. The `ForceAuthn` attribute is a Boolean value. When set to `true`, the user's session will be invalidated at Azure AD B2C, and the user is forced to re-authenticate. The following SAML authentication request demonstrates how to set the `ForceAuthn` attribute to `true`.
+
+```xml
+<samlp:AuthnRequest
+ Destination="https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_SAML2_signup_signin/samlp/sso/login"
+ ForceAuthn="true" ...>
+ ...
+</samlp:AuthnRequest>
+```
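A sketch of producing such a request with the standard library (only `Destination` and `ForceAuthn` are shown; a real AuthnRequest also carries `ID`, `Version`, `IssueInstant`, and an `Issuer` element — the function name is illustrative):

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"

def build_authn_request(destination, force_authn):
    # ForceAuthn is serialized as the literal strings "true"/"false".
    ET.register_namespace("samlp", SAMLP)
    request = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "Destination": destination,
        "ForceAuthn": "true" if force_authn else "false",
    })
    return ET.tostring(request, encoding="unicode")
```

Setting `force_authn=True` produces an element equivalent to the XML example above.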
+ ## Debug the SAML protocol To help configure and debug the integration with your service provider, you can use a browser extension for the SAML protocol, for example, [SAML DevTools extension](https://chrome.google.com/webstore/detail/saml-devtools-extension/jndllhgbinhiiddokbeoeepbppdnhhio) for Chrome, [SAML-tracer](https://addons.mozilla.org/es/firefox/addon/saml-tracer/) for FireFox, or [Edge or IE Developer tools](https://techcommunity.microsoft.com/t5/microsoft-sharepoint-blog/gathering-a-saml-token-using-edge-or-ie-developer-tools/ba-p/320957).
active-directory-b2c Trustframeworkpolicy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/trustframeworkpolicy.md
Previously updated : 01/31/2020 Last updated : 03/15/2021
A custom policy is represented as one or more XML-formatted files, which refer t
xmlns:xsd="https://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06" PolicySchemaVersion="0.3.0.0"
- TenantId="mytenant.onmicrosoft.com"
+ TenantId="yourtenant.onmicrosoft.com"
PolicyId="B2C_1A_TrustFrameworkBase"
- PublicPolicyUri="http://mytenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase">
+ PublicPolicyUri="http://yourtenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase">
... ```
The following example shows how to specify the **TrustFrameworkPolicy** element:
xmlns:xsd="https://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06" PolicySchemaVersion="0.3.0.0"
- TenantId="mytenant.onmicrosoft.com"
+ TenantId="yourtenant.onmicrosoft.com"
PolicyId="B2C_1A_TrustFrameworkBase"
- PublicPolicyUri="http://mytenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase">
+ PublicPolicyUri="http://yourtenant.onmicrosoft.com/B2C_1A_TrustFrameworkBase">
``` The **TrustFrameworkPolicy** element contains the following elements:
The following example shows how to specify a base policy. This **B2C_1A_TrustFra
xmlns:xsd="https://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06" PolicySchemaVersion="0.3.0.0"
- TenantId="mytenant.onmicrosoft.com"
+ TenantId="yourtenant.onmicrosoft.com"
PolicyId="B2C_1A_TrustFrameworkExtensions"
- PublicPolicyUri="http://mytenant.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions">
+ PublicPolicyUri="http://yourtenant.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions">
<BasePolicy> <TenantId>yourtenant.onmicrosoft.com</TenantId>
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-methods.md
Previously updated : 02/22/2021 Last updated : 03/15/2021
The following table outlines the security considerations for the available authe
| Windows Hello for Business | High | High | High | | Microsoft Authenticator app | High | High | High | | FIDO2 security key | High | High | High |
-| OATH hardware tokens | Medium | Medium | High |
+| OATH hardware tokens (preview) | Medium | Medium | High |
| OATH software tokens | Medium | Medium | High | | SMS | Medium | High | Medium | | Voice | Medium | Medium | Medium |
The following table outlines when an authentication method can be used during a
| Windows Hello for Business | Yes | MFA | | Microsoft Authenticator app | Yes | MFA and SSPR | | FIDO2 security key | Yes | MFA |
-| OATH hardware tokens | No | MFA |
+| OATH hardware tokens (preview) | No | MFA |
| OATH software tokens | No | MFA | | SMS | Yes | MFA and SSPR | | Voice call | No | MFA and SSPR |
To learn more about how each authentication method works, see the following sepa
* [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) * [Microsoft Authenticator app](concept-authentication-authenticator-app.md) * [FIDO2 security key](concept-authentication-passwordless.md#fido2-security-keys)
-* [OATH hardware tokens](concept-authentication-oath-tokens.md#oath-hardware-tokens)
+* [OATH hardware tokens (preview)](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview)
* [OATH software tokens](concept-authentication-oath-tokens.md#oath-software-tokens) * [SMS sign-in](howto-authentication-sms-signin.md) and [verification](concept-authentication-phone-options.md#mobile-phone-verification) * [Voice call verification](concept-authentication-phone-options.md)
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Previously updated : 02/22/2021 Last updated : 03/15/2021
# Customer intent: As an identity administrator, I want to understand how to use OATH tokens in Azure AD to improve and secure user sign-in events.
-# Authentication methods in Azure Active Directory - OATH tokens
+# Authentication methods in Azure Active Directory - OATH tokens
OATH TOTP (Time-based One Time Password) is an open standard that specifies how one-time password (OTP) codes are generated. OATH TOTP can be implemented using either software or hardware to generate the codes. Azure AD doesn't support OATH HOTP, a different code generation standard.
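The code generation the standard specifies is HOTP (RFC 4226) driven by a time counter (RFC 6238); a minimal SHA-1 sketch of the published algorithm, for illustration only (this is the open standard, not Azure AD's implementation):

```python
import hashlib
import hmac
import struct

def hotp(key, counter, digits=6):
    # HMAC-SHA1 over the 8-byte big-endian counter, then dynamic
    # truncation to a 31-bit integer (RFC 4226, section 5.3).
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, unix_time, step=30, digits=6):
    # TOTP is HOTP applied to the number of time steps since the epoch.
    return hotp(key, unix_time // step, digits)
```

With the RFC 6238 test key `b"12345678901234567890"` and Unix time 59, this yields `287082` (the last six digits of the RFC's eight-digit test vector).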
The Authenticator app automatically generates codes when set up to do push notif
Some OATH TOTP hardware tokens are programmable, meaning they don't come with a secret key or seed pre-programmed. These programmable hardware tokens can be set up using the secret key or seed obtained from the software token setup flow. Customers can purchase these tokens from the vendor of their choice and use the secret key or seed in their vendor's setup process.
-## OATH hardware tokens
+## OATH hardware tokens (Preview)
Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice.
OATH TOTP hardware tokens typically come with a secret key, or seed, pre-program
Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow.
+OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ![Uploading OATH tokens to the MFA OATH tokens blade](media/concept-authentication-methods/mfa-server-oath-tokens-azure-ad.png) Once tokens are acquired they must be uploaded in a comma-separated values (CSV) file format including the UPN, serial number, secret key, time interval, manufacturer, and model, as shown in the following example:
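A file in that shape can be produced with Python's `csv` module; a sketch (the header spellings below are assumptions derived from the field list above — take the exact header text from the documented sample file):

```python
import csv
import io

# Assumed column names, in the order the field list above gives them.
FIELDS = ["upn", "serial number", "secret key", "time interval", "manufacturer", "model"]

def tokens_to_csv(tokens):
    # tokens: iterable of dicts keyed by FIELDS.
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(tokens)
    return buffer.getvalue()
```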
active-directory Howto Authentication Sms Signin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-sms-signin.md
Previously updated : 01/21/2021 Last updated : 03/15/2021
To complete this article, you need the following resources and privileges:
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant]. * You need *global administrator* privileges in your Azure AD tenant to enable SMS-based authentication. * Each user that's enabled in the text message authentication method policy must be licensed, even if they don't use it. Each enabled user must have one of the following Azure AD, EMS, Microsoft 365 licenses:
- * [Azure AD Premium P1 or P2][azuread-licensing]
* [Microsoft 365 (M365) F1 or F3][m365-firstline-workers-licensing] * [Enterprise Mobility + Security (EMS) E3 or E5][ems-licensing] or [Microsoft 365 (M365) E3 or E5][m365-licensing]
+ * [Office 365 F3][o365-f3]
## Limitations
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 02/22/2021 Last updated : 03/15/2021
OATH TOTP hardware tokens typically come with a secret key, or seed, pre-program
Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow.
+OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ![Uploading OATH tokens to the MFA OATH tokens blade](media/concept-authentication-methods/mfa-server-oath-tokens-azure-ad.png) Once tokens are acquired they must be uploaded in a comma-separated values (CSV) file format including the UPN, serial number, secret key, time interval, manufacturer, and model as shown in the following example:
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
In addition to the Microsoft apps, administrators can add any Azure AD registere
## User actions
-User actions are tasks that can be performed by a user. The only currently supported action is **Register security information**, which allows Conditional Access policy to enforce when users who are enabled for combined registration attempt to register their security information. More information can be found in the article, [Combined security information registration](../authentication/concept-registration-mfa-sspr-combined.md).
+User actions are tasks that can be performed by a user. Currently, Conditional Access supports two user actions:
+- **Register security information**: This user action allows Conditional Access policy to enforce when users who are enabled for combined registration attempt to register their security information. More information can be found in the article, [Combined security information registration](../authentication/concept-registration-mfa-sspr-combined.md).
+
+- **Register or join devices (preview)**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD. There are two key considerations with this user action:
+ - `Require multi-factor authentication` is the only access control available with this user action and all others are disabled. This restriction prevents conflicts with access controls that are either dependent on Azure AD device registration or not applicable to Azure AD device registration.
+ - When a Conditional Access policy is enabled with this user action, you must set **Azure Active Directory** > **Devices** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication` to **No**. Otherwise, Conditional Access policy with this user action is not properly enforced. More information regarding this device setting can be found in [Configure device settings](../device-management-azure-portal.md#configure-device-settings). This user action provides flexibility to require multi-factor authentication for registering or joining devices for specific users and groups or conditions instead of having a tenant-wide policy in Device settings.
+
## Next steps - [Conditional Access: Conditions](concept-conditional-access-conditions.md)
active-directory Msal Android Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-android-shared-devices.md
# Shared device mode for Android devices
->[!IMPORTANT]
-> This feature [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
Firstline Workers such as retail associates, flight crew members, and field service workers often use a shared mobile device to do their work. That becomes problematic when they start sharing passwords or PINs to access customer and business data on the shared device. Shared device mode allows you to configure an Android device so that it can be easily shared by multiple employees. Employees can sign in and access customer information quickly. When they are finished with their shift or task, they can sign out of the device and it will be immediately ready for the next employee to use.
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-accounts-permissions.md
In addition to these three accounts used to run Azure AD Connect, you will also
- **AD DS Enterprise Administrator account**: Optionally used to create the "AD DS Connector account" above. -- **Azure AD Global Administrator account**: used to create the Azure AD Connector account and configure Azure AD. You can view global administrator accounts in the azure portal. See [View Roles](../../active-directory/roles/manage-roles-portal.md#view-all-roles).
+- **Azure AD Global Administrator account**: used to create the Azure AD Connector account and configure Azure AD. You can view global administrator accounts in the Azure portal. See [List Azure AD role assignments](../../active-directory/roles/view-assignments.md).
- **SQL SA account (optional)**: used to create the ADSync database when using the full version of SQL Server. This SQL Server may be local or remote to the Azure AD Connect installation. This account may be the same account as the Enterprise Administrator. Provisioning the database can now be performed out of band by the SQL administrator and then installed by the Azure AD Connect administrator with database owner rights. For information on this see [Install Azure AD Connect using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md)
active-directory Application Proxy Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-network-topology.md
When you sign up for an Azure AD tenant, the region of your tenant is determined
For example, if your Azure AD tenant's country or region is the United Kingdom, all your Application Proxy connectors at **default** will be assigned to use service instances in European data centers. When your users access published applications, their traffic goes through the Application Proxy cloud service instances in this location.
-If you have connectors installed in regions different from your default region, it may be beneficial to change which region your connector group is optimized for to improve performance accessing these applications. Once a region is specified for a connector group it will connected to Application Proxy cloud services in the designated region.
+If you have connectors installed in regions different from your default region, it may be beneficial to change which region your connector group is optimized for to improve performance accessing these applications. Once a region is specified for a connector group it will connect to Application Proxy cloud services in the designated region.
In order to optimize the traffic flow and reduce latency to a connector group assign the connector group to the closest region. To assign a region:
The connector can be placed in the Azure datacenter. Since the connector still h
**Scenario:** The app is in an organization's network in Europe, default tenant region is US, with most users in the Europe.
-**Recommendation:** Place the connector near the app. Update the connector group so it is optimized to use Europe Application Proxy service instances. For steps see, [Optimize connector groups to use closest Application Proxy cloud service](application-proxy-network-topology#Optimize connector-groups-to-use-closest-Application-Proxy-cloud-service).
+**Recommendation:** Place the connector near the app. Update the connector group so it is optimized to use Europe Application Proxy service instances. For steps see, [Optimize connector groups to use closest Application Proxy cloud service](application-proxy-network-topology.md#optimize-connector-groups-to-use-closest-application-proxy-cloud-service-preview).
Because Europe users are accessing an Application Proxy instance that happens to be in the same region, hop 1 is not expensive. Hop 3 is optimized. Consider using ExpressRoute to optimize hop 2.
Because Europe users are accessing an Application Proxy instance that happens to
**Scenario:** The app is in an organization's network in Europe, default tenant region is US, with most users in the US.
-**Recommendation:** Place the connector near the app. Update the connector group so it is optimized to use Europe Application Proxy service instances. For steps see, [Optimize connector groups to use closest Application Proxy cloud service](/application-proxy-network-topology#Optimize connector-groups-to-use-closest-Application-Proxy-cloud-service). Hop 1 can be more expensive since all US users must access the Application Proxy instance in Europe.
+**Recommendation:** Place the connector near the app. Update the connector group so it is optimized to use Europe Application Proxy service instances. For steps see, [Optimize connector groups to use closest Application Proxy cloud service](application-proxy-network-topology.md#optimize-connector-groups-to-use-closest-application-proxy-cloud-service-preview). Hop 1 can be more expensive since all US users must access the Application Proxy instance in Europe.
You can also consider using one other variant in this situation. If most users in the organization are in the US, then chances are that your network extends to the US as well. Place the connector in the US, continue to use the default US region for your connector groups, and use the dedicated internal corporate network line to the application in Europe. This way hops 2 and 3 are optimized.
active-directory Concept Delegation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/concept-delegation.md
- Title: Understand admin role delegation - Azure Active Directory | Microsoft Docs
-description: Delegation models, examples, and role security in Azure Active Directory
-------- Previously updated : 11/05/2020---
-#As an Azure AD administrator, I want to know how to organize my approach to delegating roles
---
-# Delegate administration in Azure Active Directory
-
-With organizational growth comes complexity. One common response is to reduce some of the workload of access management with Azure Active Directory (Azure AD) admin roles. You can assign users the least privilege they need to access their apps and perform their tasks. Even if you don't assign the Global Administrator role to every application owner, you're placing application management responsibilities on the existing Global Administrators. There are many reasons for an organization to move toward more decentralized administration. This article can help you plan for delegation in your organization.
-
-<!--What about reporting? Who has which role and how do I audit?-->
-
-## Centralized versus delegated permissions
-
-As an organization grows, it can be difficult to keep track of which users have specific admin roles. If an employee has administrator rights they shouldn't, your organization can be more susceptible to security breaches. Generally, how many administrators you support and how granular their permissions are depends on the size and complexity of your deployment.
-
-* In small or proof-of-concept deployments, one or a few administrators do everything; there's no delegation. In this case, create each administrator with the Global Administrator role.
-* In larger deployments with more machines, applications, and desktops, more delegation is needed. Several administrators might have more specific functional responsibilities (roles). For example, some might be Privileged Identity Administrators, and others might be Application Administrators. Additionally, an administrator might manage only certain groups of objects such as devices.
-* Even larger deployments might require even more granular permissions, plus possibly administrators with unconventional or hybrid roles.
-
-In the Azure AD portal, you can [view all the members of any role](manage-roles-portal.md), which can help you quickly check your deployment and delegate permissions.
-
-If you're interested in delegating access to Azure resources instead of administrative access in Azure AD, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-## Delegation planning
-
-Developing a delegation model that fits your needs takes work. It's an iterative design process, and we suggest you follow these steps:
-
-* Define the roles you need
-* Delegate app administration
-* Grant the ability to register applications
-* Delegate app ownership
-* Develop a security plan
-* Establish emergency accounts
-* Secure your administrator roles
-* Make privileged elevation temporary
-
-## Define roles
-
-Determine the Active Directory tasks that are carried out by administrators and how they map to roles. You can [view detailed role descriptions](manage-roles-portal.md) in the Azure portal.
-
-Each task should be evaluated for frequency, importance, and difficulty. These criteria are vital aspects of task definition because they govern whether a permission should be delegated:
-
-* Tasks that you do routinely, have limited risk, and are trivial to complete are excellent candidates for delegation.
-* Tasks that you do rarely but have great impact across the organization and require high skill levels should be considered very carefully before delegating. Instead, you can [temporarily elevate an account to the required role](../privileged-identity-management/pim-configure.md) or reassign the task.
-
-## Delegate app administration
-
-The proliferation of apps within your organization can strain your delegation model. If that model places the burden of application access management on the Global Administrator, the overhead is likely to grow over time. If you have granted people the Global Administrator role for things like configuring enterprise applications, you can now offload them to the following less-privileged roles. Doing so helps to improve your security posture and reduces the potential for unfortunate mistakes. The most-privileged application administrator roles are:
-
-* The **Application Administrator** role, which grants the ability to manage all applications in the directory, including registrations, single sign-on settings, user and group assignments and licensing, Application Proxy settings, and consent. It doesn't grant the ability to manage Conditional Access.
-* The **Cloud Application Administrator** role, which grants all the abilities of the Application Administrator, except it doesn't grant access to Application Proxy settings (because it has no on-premises permission).
-
-## Delegate app registration
-
-By default, all users can create application registrations. To selectively grant the ability to create application registrations:
-
-* Set **Users can register applications** to No in **User settings**
-* Assign the user to the Application Developer role
-
-To selectively grant the ability to consent to allow an application to access data:
-
-* Set **Users can consent to applications accessing company data on their behalf** to No in **User settings**
-* Assign the user to the Application Developer role
-
-When an Application Developer creates a new application registration, they are automatically added as the first owner.
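As an illustrative sketch (not part of the original article), the same grant could be made with the Azure AD PowerShell module the article uses elsewhere. The user principal name below is a placeholder, and the role is activated from its template if it hasn't been used in the tenant yet:

```powershell
# Sketch: assign the Application Developer role with the AzureAD module.
# The UPN is a placeholder; run Connect-AzureAD with an account that can assign roles.
Connect-AzureAD

# A directory role only appears in Get-AzureADDirectoryRole after it has been
# activated once; fall back to enabling it from its role template.
$role = Get-AzureADDirectoryRole |
    Where-Object { $_.DisplayName -eq "Application Developer" }
if (-not $role) {
    $template = Get-AzureADDirectoryRoleTemplate |
        Where-Object { $_.DisplayName -eq "Application Developer" }
    $role = Enable-AzureADDirectoryRole -RoleTemplateId $template.ObjectId
}

# Assign the role to a user (placeholder UPN)
$user = Get-AzureADUser -ObjectId "developer@contoso.com"
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId
```

The user then becomes the first owner of any app registration they create, as described above.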
-
-## Delegate app ownership
-
-For even finer-grained app access delegation, you can assign ownership to individual enterprise applications. This complements the existing support for assigning application registration owners. Ownership is assigned on a per-enterprise application basis in the Enterprise Applications blade. The benefit is owners can manage only the enterprise applications they own. For example, you can assign an owner for the Salesforce application, and that owner can manage access to and configuration for Salesforce, and no other applications. An enterprise application can have many owners, and a user can be the owner for many enterprise applications. There are two app owner roles:
-
-* The **Enterprise Application Owner** role grants the ability to manage the enterprise applications that the user owns, including single sign-on settings, user and group assignments, and adding additional owners. It doesn't grant the ability to manage Application Proxy settings or Conditional Access.
-* The **Application Registration Owner** role grants the ability to manage application registrations for apps that the user owns, including the application manifest and adding additional owners.
-
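As a hedged sketch of the ownership assignment described above (using the AzureAD PowerShell module; the app display name and owner UPN are placeholders, not values from the article):

```powershell
# Sketch: add an owner to an enterprise application.
# "Salesforce" and the UPN are placeholder values.
Connect-AzureAD

# Enterprise applications are service principals; find the app by display name.
$sp = Get-AzureADServicePrincipal -Filter "displayName eq 'Salesforce'"
$owner = Get-AzureADUser -ObjectId "owner@contoso.com"

# Add the user as an owner of the enterprise application.
Add-AzureADServicePrincipalOwner -ObjectId $sp.ObjectId -RefObjectId $owner.ObjectId

# Ownership of the corresponding app registration is managed separately, e.g.:
# Add-AzureADApplicationOwner -ObjectId <appObjectId> -RefObjectId <userObjectId>
```

This keeps the owner's reach scoped to the one application, matching the per-enterprise-application model described above.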
-## Develop a security plan
-
-Azure AD provides an extensive guide to planning and executing a security plan on your Azure AD admin roles, [Securing privileged access for hybrid and cloud deployments](security-planning.md).
-
-## Establish emergency accounts
-
-To maintain access to your identity management store when issues arise, prepare emergency access accounts according to [Create emergency-access administrative accounts](security-emergency-access.md).
-
-## Secure your administrator roles
-
-Attackers who get control of privileged accounts can do tremendous damage, so protect these accounts first, using the [baseline access policy](https://cloudblogs.microsoft.com/enterprisemobility/2018/06/22/baseline-security-policy-for-azure-ad-admin-accounts-in-public-preview/) that is available by default to all Azure AD organizations (in public preview). The policy enforces multi-factor authentication on privileged Azure AD accounts. The following Azure AD roles are covered by the Azure AD baseline policy:
-
-* Global administrator
-* SharePoint administrator
-* Exchange administrator
-* Conditional Access administrator
-* Security administrator
-
-## Elevate privilege temporarily
-
-For most day-to-day activities, not all users need global administrator rights, and not all of them should be permanently assigned to the Global Administrator role. When users need the permissions of a Global Administrator, they should activate the role assignment in Azure AD [Privileged Identity Management](../privileged-identity-management/pim-configure.md) on either their own account or an alternate administrative account.
-
-## Next steps
-
-For a reference to the Azure AD role descriptions, see [Assign admin roles in Azure AD](permissions-reference.md)
active-directory Concept Understand Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/concept-understand-roles.md
Service-specific roles | Azure DevOps Administrator<br>Azure Information Protect
- [Overview of Azure AD role-based access control](custom-overview.md) - Create role assignments using [the Azure portal, Azure AD PowerShell, and Graph API](custom-create.md)-- [View the assignments for a role](custom-view-assignments.md)
+- [List role assignments](view-assignments.md)
active-directory Custom Available Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-available-permissions.md
Grants the same permissions as microsoft.directory/applications/permissions/upda
## Next steps - Create custom roles using [the Azure portal, Azure AD PowerShell, and Graph API](custom-create.md)-- [View the assignments for a custom role](custom-view-assignments.md)
+- [List role assignments](view-assignments.md)
active-directory Custom Enterprise App Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-enterprise-app-permissions.md
microsoft.directory/provisioningLogs/allProperties/read | Read all properties of
## Next steps - [Create custom roles using the Azure portal, Azure AD PowerShell, and Graph API](custom-create.md)-- [View the assignments for a custom role](custom-view-assignments.md)
+- [List role assignments](view-assignments.md)
active-directory Custom Enterprise Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-enterprise-apps.md
Title: Custom role permissions for enterprise app access assignments - Azure Active Directory | Microsoft Docs
+ Title: Create custom roles to manage enterprise apps in Azure Active Directory
description: Create and assign custom Azure AD roles for enterprise apps access in Azure Active Directory
-# Assign custom roles to manage enterprise apps in Azure Active Directory
+# Create custom roles to manage enterprise apps in Azure Active Directory
This article explains how to create a custom role with permissions to manage enterprise app assignments for users and groups in Azure Active Directory (Azure AD). For the elements of roles assignments and the meaning of terms such as subtype, permission, and property set, see the [custom roles overview](custom-overview.md).
active-directory Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-overview.md
A role assignment is an Azure AD resource that attaches a *role definition* to a
- Role definition - Resource scope
-You can [create role assignments](custom-create.md) using the Azure portal, Azure AD PowerShell, or Graph API. You can also [view the assignments for a custom role](custom-view-assignments.md#view-the-assignments-of-a-role).
+You can [create role assignments](custom-create.md) using the Azure portal, Azure AD PowerShell, or Graph API. You can also [list the role assignments](view-assignments.md).
The following diagram shows an example of a role assignment. In this example, Chris Green has been assigned the App registration administrator custom role at the scope of the Contoso Widget Builder app registration. The assignment grants Chris the permissions of the App registration administrator role for only this specific app registration.
Using built-in roles in Azure AD is free, while custom roles requires an Azure A
- [Understand Azure AD roles](concept-understand-roles.md) - Create custom role assignments using [the Azure portal, Azure AD PowerShell, and Graph API](custom-create.md)-- [View the assignments for a custom role](custom-view-assignments.md)
+- [List role assignments](view-assignments.md)
active-directory Custom View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-view-assignments.md
- Title: View custom role assignments in the Azure AD portal | Microsoft Docs
-description: You can now see and manage members of an Azure AD administrator role in the Azure AD admin center.
------- Previously updated : 11/04/2020-----
-# View custom role assignments in Azure Active Directory
-
-This article describes how to view custom roles you have assigned in Azure Active Directory (Azure AD). Roles in Azure AD can be assigned at an organization-wide scope or with a single-application scope.
-
-- Role assignments at the organization-wide scope are added to and can be seen in the list of single application role assignments.
-- Role assignments at the single application scope aren't added to and can't be seen in the list of organization-wide scoped assignments.
-
-## View role assignments in the Azure portal
-
-This procedure describes viewing assignments of a role with organization-wide scope.
-
-1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with Privileged role administrator or Global administrator permissions in the Azure AD organization.
-1. Select **Azure Active Directory**, select **Roles and administrators**, and then select a role to open it and view its properties.
-1. Select **Assignments** to view the assignments for the role.
-
- ![View role assignments and permissions when you open a role from the list](./media/custom-view-assignments/role-assignments.png)
-
-## View role assignments using Azure AD PowerShell
-
-This section describes viewing assignments of a role with organization-wide scope. This article uses the [Azure Active Directory PowerShell Version 2](/powershell/module/azuread/#directory_roles) module. To view single-application scope assignments using PowerShell, you can use the cmdlets in [Assign custom roles with PowerShell](./custom-assign-powershell.md).
-
-### Prepare PowerShell
-
-First, you must [download the Azure AD preview PowerShell module](https://www.powershellgallery.com/packages/AzureAD/).
-
-To install the Azure AD PowerShell module, use the following commands:
-
-``` PowerShell
-Install-Module -Name AzureADPreview
-Import-Module -Name AzureADPreview
-```
-
-To verify that the module is ready to use, use the following command:
-
-``` PowerShell
-Get-Module -Name AzureADPreview
- ModuleType Version   Name           ExportedCommands
- ---------- -------   ----           ----------------
- Binary     2.0.0.115 AzureADPreview {Add-AzureADAdministrati...}
-```
-
-### View the assignments of a role
-
-Example of viewing the assignments of a role.
-
-``` PowerShell
-# Fetch list of all directory roles with object ID
-Get-AzureADDirectoryRole
-
-# Fetch a specific directory role by ID
-$role = Get-AzureADDirectoryRole -ObjectId "5b3fe201-fa8b-4144-b6f1-875829ff7543"
-
-# Fetch role membership for a role
-Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId | Get-AzureADUser
-```
-
-## View role assignments using Microsoft Graph API
-
-This section describes viewing assignments of a role with organization-wide scope. To view single-application scope assignments using Graph API, you can use the operations in [Assign custom roles with Graph API](./custom-assign-graph.md).
-
-HTTP request to get a role assignment for a given role definition.
-
-GET
-
-``` HTTP
-https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=roleDefinitionId eq '<object-id-or-template-id-of-role-definition>'
-```
-
-Response
-
-``` HTTP
-HTTP/1.1 200 OK
-{
- "id":"CtRxNqwabEKgwaOCHr2CGJIiSDKQoTVJrLE9etXyrY0-1",
- "principalId":"ab2e1023-bddc-4038-9ac1-ad4843e7e539",
- "roleDefinitionId":"3671d40a-1aac-426c-a0c1-a3821ebd8218",
- "resourceScopes":["/"]
-}
-```
-
-## View assignments of single-application scope
-
-This section describes viewing assignments of a role with single-application scope. This feature is currently in public preview.
-
-1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with Privileged role administrator or Global administrator permissions in the Azure AD organization.
-1. Select **App registrations**, and then select the app registration to view its properties. You might have to select **All applications** to see the complete list of app registrations in your Azure AD organization.
-
- ![Create or edit app registrations from the App registrations page](./media/custom-view-assignments/appreg-all-apps.png)
-
-1. In the app registration, select **Roles and administrators**, and then select a role to view its properties.
-
- ![View app registration role assignments from the App registrations page](./media/custom-view-assignments/appreg-assignments.png)
-
-1. Select **Assignments** to view the assignments for the role. Opening the assignments view from within the app registration shows you the assignments that are scoped to this Azure AD resource.
-
- ![View app registration role assignments from the properties of an app registration](./media/custom-view-assignments/appreg-assignments-2.png)
-
-## Next steps
-
-* Feel free to share with us on the [Azure AD administrative roles forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032).
-* For more about roles and Administrator role assignment, see [Assign administrator roles](permissions-reference.md).
-* For default user permissions, see a [comparison of default guest and member user permissions](../fundamentals/users-default-permissions.md).
active-directory Delegate App Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/delegate-app-roles.md
This article describes how to use permissions granted by custom roles in Azure A
- [Assigning a built-in administrative role](#assign-built-in-application-admin-roles) that grants access to manage configuration in Azure AD for all applications. This is the recommended way to grant IT experts access to manage broad application configuration permissions without granting access to manage other parts of Azure AD not related to application configuration. - [Creating a custom role](#create-and-assign-a-custom-role-preview) defining very specific permissions and assigning it to someone either to the scope of a single application as a limited owner, or at the directory scope (all applications) as a limited administrator.
-It's important to consider granting access using one of the above methods for two reasons. First, delegating the ability to perform administrative tasks reduces global administrator overhead. Second, using limited permissions improves your security posture and reduces the potential for unauthorized access. Delegation issues and general guidelines are discussed in [Delegate administration in Azure Active Directory](concept-delegation.md).
+It's important to consider granting access using one of the above methods for two reasons. First, delegating the ability to perform administrative tasks reduces global administrator overhead. Second, using limited permissions improves your security posture and reduces the potential for unauthorized access. For guidelines about role security planning, see [Securing privileged access for hybrid and cloud deployments in Azure AD](security-planning.md).
## Restrict who can create applications
active-directory M365 Workload Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/m365-workload-docs.md
# Roles for Microsoft 365 services in Azure Active Directory
-All products in Microsoft 365 can be managed with administrative roles in Azure Active Directory (Azure AD). Some products also provide additional roles that are specific to that product. For information on the roles supported by each product, see the table below. General discussions of delegation issues can be found in [Role delegation planning in Azure Active Directory](concept-delegation.md).
+All products in Microsoft 365 can be managed with administrative roles in Azure Active Directory (Azure AD). Some products also provide additional roles that are specific to that product. For information on the roles supported by each product, see the table below. For guidelines about role security planning, see [Securing privileged access for hybrid and cloud deployments in Azure AD](security-planning.md).
## Where to find content
active-directory Manage Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/manage-roles-portal.md
Title: View and assign administrator role permissions - Azure AD | Microsoft Docs
-description: You can now see and manage members of an Azure AD administrator role in the portal. For those who frequently manage role assignments.
+ Title: Assign Azure AD roles to users - Azure Active Directory
+description: Learn how to grant access to users in Azure Active Directory by assigning Azure AD roles.
Previously updated : 11/05/2020 Last updated : 03/07/2021
-# View and assign administrator roles in Azure Active Directory
+# Assign Azure AD roles to users
-You can now see and manage all the members of the administrator roles in the Azure Active Directory portal. If you frequently manage role assignments, you will probably prefer this experience. And if you ever wondered "What the heck do these roles really do?", you can see a detailed list of permissions for each of the Azure AD administrator roles.
+You can now see and manage all the members of the administrator roles in the Azure AD admin center. If you frequently manage role assignments, you will probably prefer this experience. This article describes how to assign Azure AD roles using the Azure AD admin center.
-## View all roles
+## Assign a role
+
+1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with Global Administrator or Privileged Role Administrator permissions.
-1. Sign in to the [Azure portal](https://portal.azure.com) and select **Azure Active Directory**.
+1. Select **Azure Active Directory**.
1. Select **Roles and administrators** to see the list of all available roles.
-1. Select the ellipsis on the right of each row to see the permissions for the role. Select a role to view the users assigned to the role. If you see something different from the following picture, read the Note in [View assignments for privileged roles](#view-assignments-for-privileged-roles) to verify whether you're in Privileged Identity Management (PIM).
+ ![Screenshot of the Roles and administrators page](./media/manage-roles-portal/roles-and-administrators.png)
+
+1. Select a role to see its assignments.
+
+   To help you find the role you need, Azure AD can show you subsets of the roles based on role categories. Use the **Type** filter to show only the roles in the selected type.
+
+1. Select **Add assignments** and then select the users you want to assign to this role.
- ![list of roles in Azure AD portal](./media/manage-roles-portal/view-roles-in-azure-active-directory.png)
+ If you see something different from the following picture, read the Note in [Privileged Identity Management (PIM)](#privileged-identity-management-pim) to verify whether you are using PIM.
-## View my roles
+ ![list of permissions for an admin role](./media/manage-roles-portal/add-assignments.png)
-It's easy to view your own permissions as well. Select **Your Role** on the **Roles and administrators** page to see the roles that are currently assigned to you.
+1. Select **Add** to assign the role.
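The portal steps above could also be scripted. As a sketch with the AzureAD PowerShell module (the role name and UPN below are illustrative placeholders, not values from this article):

```powershell
# Sketch: assign an Azure AD role to a user with the AzureAD module.
# "User Administrator" and the UPN are placeholders.
Connect-AzureAD

# Look up an activated directory role and the target user.
$role = Get-AzureADDirectoryRole |
    Where-Object { $_.DisplayName -eq "User Administrator" }
$user = Get-AzureADUser -ObjectId "alice@contoso.com"

# Create the role assignment.
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId

# Verify the assignment by listing the role's members.
Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId |
    Select-Object DisplayName, UserPrincipalName
```

Note that assignments made this way are permanent; eligible (just-in-time) assignments are managed through Privileged Identity Management instead.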
-## View assignments for privileged roles
+## Privileged Identity Management (PIM)
-You can select **Manage in PIM** for additional management capabilities. Privileged Role Administrators can change "Permanent" (always active in the role) assignments to "Eligible" (in the role only when elevated). If you don't have Privileged Identity Management, you can still select **Manage in PIM** to sign up for a trial. Privileged Identity Management requires an [Azure AD Premium P2 license plan](../privileged-identity-management/subscription-requirements.md).
+You can select **Manage in PIM** for additional management capabilities using [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md). Privileged Role Administrators can change "Permanent" (always active in the role) assignments to "Eligible" (in the role only when elevated). If you don't have Privileged Identity Management, you can still select **Manage in PIM** to sign up for a trial. Privileged Identity Management requires an [Azure AD Premium P2 license plan](../privileged-identity-management/subscription-requirements.md).
-![list of members of an admin role](./media/manage-roles-portal/member-list.png)
+![Screenshot that shows the "User administrator - Assignments" page with the "Manage in PIM" action selected](./media/manage-roles-portal/member-list-pim.png)
If you are a Global Administrator or a Privileged Role Administrator, you can easily add or remove members, filter the list, or select a member to see their active assigned roles.
> > ![Azure AD roles managed in PIM for users who already use PIM and have a Premium P2 license](./media/manage-roles-portal/pim-manages-roles-for-p2.png)
-## View a user's role permissions
-
-When you're viewing a role's members, select **Description** to see the complete list of permissions granted by the role assignment. The page includes links to relevant documentation to help guide you through managing directory roles.
-
-![Screenshot that shows the "Global administrator - Description" page.](./media/manage-roles-portal/role-description.png)
-
-## Download role assignments
-
-To download all assignments for a specific role, on the **Roles and administrators** page, select a role, and then select **Download role assignments**. A CSV file that lists assignments at all scopes for that role is downloaded.
-
-![download all assignments for a role](./media/manage-roles-portal/download-role-assignments.png)
-
-## Assign a role
-
-1. Sign in to the [Azure portal](https://portal.azure.com) with Global Administrator or Privileged Role Administrator permissions and select **Azure Active Directory**.
-
-1. Select **Roles and administrators** to see the list of all available roles.
-
-1. Select a role to see its assignments.
-
- ![Screenshot that shows the "User administrator - Assignments" page with the "Manage in PIM" action selected.](./media/manage-roles-portal/member-list.png)
-
-1. Select **Add assignments** and select the roles you want to assign. You can select **Manage in PIM** for additional management capabilities. If you see something different from the following picture, read the Note in [View assignments for privileged roles](#view-assignments-for-privileged-roles) to verify whether you're in PIM.
-
- ![list of permissions for an admin role](./media/manage-roles-portal/directory-role-select-role.png)
- ## Next steps * Feel free to share with us on the [Azure AD administrative roles forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032).
-* For more about roles and Administrator role assignment, see [Assign administrator roles](permissions-reference.md).
+* For more about roles, see [Azure AD built-in roles](permissions-reference.md).
* For default user permissions, see a [comparison of default guest and member user permissions](../fundamentals/users-default-permissions.md).
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Previously updated : 02/17/2021 Last updated : 03/13/2021
# Azure AD built-in roles
-Using Azure Active Directory (Azure AD), you can designate limited administrators to manage identity tasks in less-privileged roles. Administrators can be assigned for such purposes as adding or changing users, assigning administrative roles, resetting user passwords, managing user licenses, and managing domain names. The [default user permissions](../fundamentals/users-default-permissions.md) can be changed only in user settings in Azure AD.
+In Azure Active Directory (Azure AD), if another administrator or non-administrator needs to manage Azure AD resources, you assign them an Azure AD role that provides the permissions they need. For example, you can assign roles to allow adding or changing users, resetting user passwords, managing user licenses, or managing domain names.
+
+This article lists the Azure AD built-in roles you can assign to allow management of Azure AD resources. For information about how to assign roles, see [Assign Azure AD roles to users](manage-roles-portal.md).
## Limit use of Global Administrator
Users who are assigned to the Global Administrator role can read and modify ever
As a best practice, we recommend that you assign this role to fewer than five people in your organization. If you have more than five admins assigned to the Global Administrator role in your organization, here are some ways to reduce its use.
-### Find the role you need
-
-If it's frustrating for you to find the role you need out of a list of many roles, Azure AD can show you subsets of the roles based on role categories. Check out our new **Type** filter for [Azure AD Roles and administrators](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RolesAndAdministrators) to show you only the roles in the selected type.
-
-### A role exists now that didn't exist when you assigned the Global Administrator role
-
-It's possible that a role or roles were added to Azure AD that provide more granular permissions that were not an option when you elevated some users to Global Administrator. Over time, we are rolling out additional roles that accomplish tasks that only the Global Administrator role could do before. You can see these reflected in the following [All roles](#all-roles).
-
-## Assign or remove administrator roles
-
-To learn how to assign administrative roles to a user in Azure Active Directory, see [View and assign administrator roles in Azure Active Directory](manage-roles-portal.md).
-
-> [!Note]
-> If you have an Azure AD Premium P2 license and you're already a Privileged Identity Management (PIM) user, all role management tasks are performed in Privileged Identity Management and not in Azure AD.
->
-> ![Azure AD roles managed in PIM for users who already use PIM and have a Premium P2 license](./media/permissions-reference/pim-manages-roles-for-p2.png)
- ## All roles > [!div class="mx-tableFixed"]
To learn how to assign administrative roles to a user in Azure Active Directory,
> | [Insights Business Leader](#insights-business-leader) | Can view and share dashboards and insights via the M365 Insights app. | 31e939ad-9672-4796-9c2e-873181342d2d | > | [Intune Administrator](#intune-administrator) | Can manage all aspects of the Intune product. | 3a2c62db-5318-420d-8d74-23affee5d9d5 | > | [Kaizala Administrator](#kaizala-administrator) | Can manage settings for Microsoft Kaizala. | 74ef975b-6605-40af-a5d2-b9539d836353 |
+> | [Knowledge Administrator](#knowledge-administrator) | Can configure knowledge, learning, and other intelligent features. | b5a8dcf3-09d5-43a9-a639-8e29ef291470 |
> | [License Administrator](#license-administrator) | Can manage product licenses on users and groups. | 4d6ac14f-3453-41d0-bef9-a3e0c569773a | > | [Message Center Privacy Reader](#message-center-privacy-reader) | Can read security messages and updates in Office 365 Message Center only. | ac16e43d-7b2d-40e0-ac05-243ff356ab5b | > | [Message Center Reader](#message-center-reader) | Can read messages and updates for their organization in Office 365 Message Center only. | 790c1fb9-7f7d-4f88-86a1-ef1f95c05c1b |
This role also grants the ability to consent for delegated permissions and appli
> | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs | > | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema | > | microsoft.directory/servicePrincipals/managePasswordSingleSignOnCredentials | Read password single sign-on credentials on service principals |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.microsoft-application-admin | Grant consent for application permissions and delegated permissions on behalf of any user or all users, except for application permissions for Microsoft Graph and Azure AD Graph |
+> | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.microsoft-application-admin | Grant consent for application permissions and delegated permissions on behalf of any user or all users, except for application permissions for Microsoft Graph and Azure AD Graph |
> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | > | microsoft.directory/servicePrincipals/audience/update | Update audience properties on service principals | > | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals |
Users with this role have global permissions within Microsoft Exchange Online, w
> | microsoft.directory/groups.unified/owners/update | Update owners of Microsoft 365 groups with the exclusion of role-assignable groups | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
-> | microsoft.office365.exchange/allEntities/allTasks | Manage all aspects of Exchange Online |
+> | microsoft.office365.exchange/allEntities/basic/allTasks | Manage all aspects of Exchange Online |
> | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
Users with this role have access to all administrative features in Azure Active
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/accessReviews/allProperties/allTasks | Create and delete access reviews, and read and update all properties of access reviews in Azure AD |
> | microsoft.directory/administrativeUnits/allProperties/allTasks | Create and manage administrative units (including members) | > | microsoft.directory/applications/allProperties/allTasks | Create and delete applications, and read and update all properties | > | microsoft.directory/applications/synchronization/standard/read | Read provisioning settings associated with the application object |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties | > | microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management | > | microsoft.directory/groups/allProperties/allTasks | Create and delete groups, and read and update all properties |
-> | microsoft.directory/groupsAssignableToRoles/allProperties/update | Update groups with isAssignableToRole property set to true |
-> | microsoft.directory/groupsAssignableToRoles/create | Create groups with isAssignableToRole property set to true |
-> | microsoft.directory/groupsAssignableToRoles/delete | Delete groups with isAssignableToRole property set to true |
+> | microsoft.directory/groupsAssignableToRoles/create | Create role-assignable groups |
+> | microsoft.directory/groupsAssignableToRoles/delete | Delete role-assignable groups |
+> | microsoft.directory/groupsAssignableToRoles/restore | Restore role-assignable groups |
+> | microsoft.directory/groupsAssignableToRoles/allProperties/update | Update role-assignable groups |
> | microsoft.directory/groupSettings/allProperties/allTasks | Create and delete group settings, and read and update all properties | > | microsoft.directory/groupSettingTemplates/allProperties/allTasks | Create and delete group setting templates, and read and update all properties | > | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/serviceAction/getAvailableExtentionProperties | Can perform the Getavailableextentionproperties service action | > | microsoft.directory/servicePrincipals/allProperties/allTasks | Create and delete service principals, and read and update all properties | > | microsoft.directory/servicePrincipals/managePermissionGrantsForAll.microsoft-company-admin | Grant consent for any permission to any application |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service pricipal direct access to a group's data |
+> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.directory/servicePrincipals/synchronization/standard/read | Read provisioning settings associated with your service principal | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.directory/subscribedSkus/allProperties/allTasks | Buy and manage subscriptions and delete subscriptions |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/permissionGrantPolicies/delete | Delete permission grant policies | > | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies | > | microsoft.directory/permissionGrantPolicies/basic/update | Update basic properties of permission grant policies |
+> | microsoft.directory/servicePrincipalCreationPolicies/create | Create service principal creation policies |
+> | microsoft.directory/servicePrincipalCreationPolicies/delete | Delete service principal creation policies |
+> | microsoft.directory/servicePrincipalCreationPolicies/standard/read | Read standard properties of service principal creation policies |
+> | microsoft.directory/servicePrincipalCreationPolicies/basic/update | Update basic properties of service principal creation policies |
> | microsoft.azure.advancedThreatProtection/allEntities/allTasks | Manage all aspects of Azure Advanced Threat Protection | > | microsoft.azure.informationProtection/allEntities/allTasks | Manage all aspects of Azure Information Protection | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
Users with this role have access to all administrative features in Azure Active
> | microsoft.intune/allEntities/allTasks | Manage all aspects of Microsoft Intune | > | microsoft.office365.complianceManager/allEntities/allTasks | Manage all aspects of Office 365 Compliance Manager | > | microsoft.office365.desktopAnalytics/allEntities/allTasks | Manage all aspects of Desktop Analytics |
-> | microsoft.office365.exchange/allEntities/allTasks | Manage all aspects of Exchange Online |
+> | microsoft.office365.exchange/allEntities/basic/allTasks | Manage all aspects of Exchange Online |
> | microsoft.office365.lockbox/allEntities/allTasks | Manage all aspects of Customer Lockbox | > | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages | > | microsoft.office365.messageCenter/securityMessages/read | Read security messages in Message Center in the Microsoft 365 admin center |
-> | microsoft.office365.protectionCenter/allEntities/allProperties/allTasks | Manage all aspects of Office 365 Protection Center |
+> | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
+> | microsoft.office365.protectionCenter/allEntities/allProperties/allTasks | Manage all aspects of the Security and Compliance centers |
> | microsoft.office365.search/content/manage | Create and delete content, and read and update all properties in Microsoft Search | > | microsoft.office365.securityComplianceCenter/allEntities/allTasks | Create and delete all resources, and read and update standard properties in the Microsoft 365 Security and Compliance Center | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.directory/users/strongAuthentication/read | Read the strong authentication property for users | > | microsoft.commerce.billing/allEntities/read | Read all resources of Office 365 billing |
-> | microsoft.office365.exchange/allEntities/read | Read all resources of Exchange Online |
+> | microsoft.office365.exchange/allEntities/standard/read | Read all resources of Exchange Online |
> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages | > | microsoft.office365.messageCenter/securityMessages/read | Read security messages in Message Center in the Microsoft 365 admin center | > | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
-> | microsoft.office365.protectionCenter/allEntities/allProperties/read | Read all aspects of Office 365 Protection Center |
+> | microsoft.office365.protectionCenter/allEntities/allProperties/read | Read all properties in the Security and Compliance centers |
> | microsoft.office365.securityComplianceCenter/allEntities/read | Read standard properties in Microsoft 365 Security and Compliance Center | > | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users in this role can create/manage groups and its settings like naming and exp
> | microsoft.directory/groups/owners/update | Update owners of groups, excluding role-assignable groups | > | microsoft.directory/groups/settings/update | Update settings of groups | > | microsoft.directory/groups/visibility/update | Update the visibility property of groups |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service pricipal direct access to a group's data |
+> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
This role can create and manage all security groups. However, Intune Admin does
> | microsoft.directory/groups.security/basic/update | Update basic properties on Security groups with the exclusion of role-assignable groups | > | microsoft.directory/groups.security/classification/update | Update classification property of the Security groups with the exclusion of role-assignable groups | > | microsoft.directory/groups.security/dynamicMembershipRule/update | Update dynamicMembershipRule property of the Security groups with the exclusion of role-assignable groups |
-> | microsoft.directory/groups.security/groupType/update | Update group type property of the Security groups with the exclusion of role-assignable groups |
> | microsoft.directory/groups.security/members/update | Update members of Security groups with the exclusion of role-assignable groups | > | microsoft.directory/groups.security/owners/update | Update owners of Security groups with the exclusion of role-assignable groups | > | microsoft.directory/groups.security/visibility/update | Update visibility property of the Security groups with the exclusion of role-assignable groups |
Users with this role have global permissions to manage settings within Microsoft
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Knowledge Administrator
+
+Users in this role have full access to all knowledge, learning, and intelligent features settings in the Microsoft 365 admin center. They have a general understanding of the suite of products and licensing details, and are responsible for controlling access. Knowledge Administrators can create and manage content such as topics, acronyms, and learning resources. Additionally, these users can create content centers, monitor service health, and create service requests.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.directory/groups.security/create | Create Security groups with the exclusion of role-assignable groups |
+> | microsoft.directory/groups.security/createAsOwner | Create Security groups with the exclusion of role-assignable groups and creator is added as the first owner |
+> | microsoft.directory/groups.security/delete | Delete Security groups with the exclusion of role-assignable groups |
+> | microsoft.directory/groups.security/basic/update | Update basic properties on Security groups with the exclusion of role-assignable groups |
+> | microsoft.directory/groups.security/members/update | Update members of Security groups with the exclusion of role-assignable groups |
+> | microsoft.directory/groups.security/owners/update | Update owners of Security groups with the exclusion of role-assignable groups |
+> | microsoft.office365.knowledge/contentUnderstanding/allProperties/allTasks | Read and update all properties of content understanding in Microsoft 365 admin center |
+> | microsoft.office365.knowledge/knowledgeNetwork/allProperties/allTasks | Read and update all properties of knowledge network in Microsoft 365 admin center |
+> | microsoft.office365.protectionCenter/sensitivityLabels/allProperties/read | Read sensitivity labels in the Security and Compliance centers |
+> | microsoft.office365.sharePoint/allEntities/allTasks | Create and delete all resources, and read and update standard properties in SharePoint |
+> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+ ## License Administrator Users in this role can add, remove, and update license assignments on users, groups (using group-based licensing), and manage the usage location on users. The role does not grant the ability to purchase or manage subscriptions, create or manage groups, or create or manage users beyond the usage location. This role has no access to view, create, or manage support tickets.
Users with this role can manage role assignments in Azure Active Directory, as w
> | microsoft.directory/appRoleAssignments/allProperties/allTasks | Create and delete appRoleAssignments, and read and update all properties | > | microsoft.directory/authorizationPolicy/allProperties/allTasks | Manage all aspects of authorization policies | > | microsoft.directory/directoryRoles/allProperties/allTasks | Create and delete directory roles, and read and update all properties |
-> | microsoft.directory/groupsAssignableToRoles/allProperties/update | Update groups with isAssignableToRole property set to true |
-> | microsoft.directory/groupsAssignableToRoles/create | Create groups with isAssignableToRole property set to true |
-> | microsoft.directory/groupsAssignableToRoles/delete | Delete groups with isAssignableToRole property set to true |
+> | microsoft.directory/groupsAssignableToRoles/create | Create role-assignable groups |
+> | microsoft.directory/groupsAssignableToRoles/delete | Delete role-assignable groups |
+> | microsoft.directory/groupsAssignableToRoles/restore | Restore role-assignable groups |
+> | microsoft.directory/groupsAssignableToRoles/allProperties/update | Update role-assignable groups |
> | microsoft.directory/oAuth2PermissionGrants/allProperties/allTasks | Create and delete OAuth 2.0 permission grants, and read and update all properties | > | microsoft.directory/privilegedIdentityManagement/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Privileged Identity Management | > | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete role assignments, and read and update all role assignment properties |
Users with this role can view usage reporting data and the reports dashboard in
> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
+> | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Windows Defender ATP and EDR | Assign roles<br>Manage machine groups<br>Configur
> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
-> | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources in Office 365 Protection Center |
-> | microsoft.office365.protectionCenter/allEntities/basic/update | Update basic properties of all resources in Office 365 Protection Center |
+> | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources in the Security and Compliance centers |
+> | microsoft.office365.protectionCenter/allEntities/basic/update | Update basic properties of all resources in the Security and Compliance centers |
> | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/allTasks | Create and manage attack payloads in Attack Simulator | > | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training | > | microsoft.office365.protectionCenter/attackSimulator/simulation/allProperties/allTasks | Create and manage attack simulation templates in Attack Simulator |
Windows Defender ATP and EDR | All permissions of the Security Reader role<br>Vi
> | microsoft.directory/cloudAppSecurity/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Microsoft Cloud App Security | > | microsoft.directory/identityProtection/allProperties/allTasks | Create and delete all resources, and read and update standard properties in Azure AD Identity Protection | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
+> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs |
> | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.advancedThreatProtection/allEntities/allTasks | Manage all aspects of Azure Advanced Threat Protection | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
Windows Defender ATP and EDR | View and investigate alerts. When you turn on rol
> | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
-> | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources in Office 365 Protection Center |
+> | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources in the Security and Compliance centers |
> | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/read | Read all properties of attack payloads in Attack Simulator | > | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training | > | microsoft.office365.protectionCenter/attackSimulator/simulation/allProperties/read | Read all properties of attack simulation templates in Attack Simulator |
Users with this role can open support requests with Microsoft for Azure and Micr
> | | | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
+> | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> | microsoft.directory/groups.unified/basic/update | Update basic properties on Microsoft 365 groups with the exclusion of role-assignable groups | > | microsoft.directory/groups.unified/members/update | Update members of Microsoft 365 groups with the exclusion of role-assignable groups | > | microsoft.directory/groups.unified/owners/update | Update owners of Microsoft 365 groups with the exclusion of role-assignable groups |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service pricipal direct access to a group's data |
+> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant a service principal direct access to a group's data |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
Users with this role can access tenant level aggregated data and associated insi
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
> | microsoft.office365.usageReports/allEntities/standard/read | Read tenant-level aggregated Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
## Next steps
-* To learn more about how to assign a user as an administrator of an Azure subscription, see [Assign a user as an administrator of an Azure subscription](../../role-based-access-control/role-assignments-portal-subscription-admin.md)
-* To learn more about how resource access is controlled in Microsoft Azure, see [Understand the different roles](../../role-based-access-control/rbac-and-directory-admin-roles.md)
-* For details on the relationship between subscriptions and an Azure AD tenant, or for instructions to associate or add a subscription, see [Associate or add an Azure subscription to your Azure Active Directory tenant](../fundamentals/active-directory-how-subscriptions-associated-directory.md)
+- [Assign Azure AD roles to groups](groups-assign-role.md)
+- [Understand the different roles](../../role-based-access-control/rbac-and-directory-admin-roles.md)
+- [Assign a user as an administrator of an Azure subscription](../../role-based-access-control/role-assignments-portal-subscription-admin.md)
active-directory Role Definitions List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/role-definitions-list.md
+
+ Title: List Azure AD role definitions - Azure AD
+description: Learn how to list Azure AD built-in and custom roles.
+Last updated : 03/07/2021
+# List Azure AD role definitions
+
+A role definition is a collection of permissions that can be performed, such as read, write, and delete. It's typically just called a role. Azure Active Directory has over 60 built-in roles, and you can also create your own custom roles. If you've ever wondered "What do these roles really do?", you can see a detailed list of permissions for each of the roles.
+
+This article describes how to list the Azure AD built-in and custom roles along with their permissions.
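In Microsoft Graph, each role definition is surfaced as an object with a display name, an `isBuiltIn` flag, and a set of allowed resource actions. As a rough illustration of that shape (the role names and actions below are made-up sample data, not real Graph output), here's a sketch in Python of separating built-in from custom roles:

```python
# Hypothetical sample of role-definition objects, shaped loosely like
# Microsoft Graph unifiedRoleDefinition resources. Not real tenant data.
role_definitions = [
    {
        "displayName": "Global Administrator",
        "isBuiltIn": True,
        "rolePermissions": [
            {"allowedResourceActions": [
                "microsoft.directory/applications/allProperties/allTasks"]}
        ],
    },
    {
        "displayName": "Helpdesk Ticket Triager",   # made-up custom role
        "isBuiltIn": False,
        "rolePermissions": [
            {"allowedResourceActions": [
                "microsoft.directory/users/standard/read"]}
        ],
    },
]

# Custom roles are simply those where isBuiltIn is false.
custom_roles = [r["displayName"] for r in role_definitions if not r["isBuiltIn"]]
print(custom_roles)
```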
+
+## List all roles
+
+1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) and select **Azure Active Directory**.
+
+1. Select **Roles and administrators** to see the list of all available roles.
+
+ ![list of roles in Azure AD portal](./media/role-definitions-list/view-roles-in-azure-active-directory.png)
+
+1. On the right, select the ellipsis and then **Description** to see the complete list of permissions for a role.
+
+ The page includes links to relevant documentation to help guide you through managing roles.
+
+ ![Screenshot that shows the "Global administrator - Description" page.](./media/role-definitions-list/role-description.png)
+
+## Next steps
+
+* Feel free to share feedback with us on the [Azure AD administrative roles forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032).
+* For more about roles and Administrator role assignment, see [Assign administrator roles](permissions-reference.md).
+* For default user permissions, see a [comparison of default guest and member user permissions](../fundamentals/users-default-permissions.md).
active-directory View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/view-assignments.md
Title: View custom role assignments in the Azure Active Directory portal | Microsoft Docs
+ Title: List Azure AD role assignments
description: You can now see and manage members of an Azure Active Directory administrator role in the Azure Active Directory admin center.
-# View custom role assignments using Azure Active Directory
+# List Azure AD role assignments
-This article describes how to view custom roles you have assigned in Azure Active Directory (Azure AD). In Azure Active Directory (Azure AD), roles can be assigned at an organization-wide scope or with a single-application scope.
+This article describes how to list roles you have assigned in Azure Active Directory (Azure AD). In Azure AD, roles can be assigned at an organization-wide scope or with a single-application scope.
- Role assignments at the organization-wide scope are added to and can be seen in the list of single application role assignments. - Role assignments at the single application scope aren't added to and can't be seen in the list of organization-wide scoped assignments.
-## View role assignments in the Azure portal
+## List role assignments in the Azure portal
-This procedure describes viewing assignments of a role with organization-wide scope.
+This procedure describes how to list role assignments with organization-wide scope.
1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with Privileged role administrator or Global administrator permissions in the Azure AD organization. 1. Select **Azure Active Directory**, select **Roles and administrators**, and then select a role to open it and view its properties.
-1. Select **Assignments** to view the assignments for the role.
+1. Select **Assignments** to list the role assignments.
- ![View role assignments and permissions when you open a role from the list](./media/view-assignments/role-assignments.png)
+ ![List role assignments and permissions when you open a role from the list](./media/view-assignments/role-assignments.png)
-## View role assignments using Azure AD PowerShell
+## List my role assignments
+
+It's easy to list your own permissions as well. Select **Your Role** on the **Roles and administrators** page to see the roles that are currently assigned to you.
+
+## Download role assignments
+
+To download all assignments for a specific role, on the **Roles and administrators** page, select a role, and then select **Download role assignments**. A CSV file that lists assignments at all scopes for that role is downloaded.
+
+![download all assignments for a role](./media/view-assignments/download-role-assignments.png)
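If you want to process the downloaded file programmatically, it is a plain CSV. A minimal sketch of parsing it with Python's standard `csv` module, assuming hypothetical column names (the actual export's columns may differ):

```python
import csv
import io

# Hypothetical sample of a downloaded role-assignments CSV.
# The real export's column names and scope format may differ.
csv_text = """displayName,userPrincipalName,scope
Alice Adams,alice@contoso.com,/
App Admin Group,,/administrativeUnits/engineering
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# Assignments with scope "/" apply organization-wide.
directory_wide = [r["displayName"] for r in rows if r["scope"] == "/"]
print(directory_wide)
```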
+
+## List role assignments using Azure AD PowerShell
This section describes viewing assignments of a role with organization-wide scope. This article uses the [Azure Active Directory PowerShell Version 2](/powershell/module/azuread/#directory_roles) module. To view single-application scope assignments using PowerShell, you can use the cmdlets in [Assign custom roles with PowerShell](custom-assign-powershell.md).
Get-Module -Name AzureADPreview
Binary 2.0.0.115 AzureADPreview {Add-AzureADAdministrati...} ```
-### View the assignments of a role
+### List role assignments
-Example of viewing the assignments of a role.
+Example of listing the role assignments.
``` PowerShell
# Fetch list of all directory roles with object ID
Get-AzureADDirectoryRole

# Fetch a specific directory role by ID
$role = Get-AzureADDirectoryRole -ObjectId "5b3fe201-fa8b-4144-b6f1-875829ff7543"

# Fetch membership for a role
Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId | Get-AzureADUser
```
-## View role assignments using Microsoft Graph API
+## List role assignments using Microsoft Graph API
-This section describes viewing assignments of a role with organization-wide scope. To view single-application scope assignments using Graph API, you can use the operations in [Assign custom roles with Graph API](custom-assign-graph.md).
+This section describes how to list role assignments with organization-wide scope. To list single-application scope role assignments using Graph API, you can use the operations in [Assign custom roles with Graph API](custom-assign-graph.md).
HTTP request to get a role assignment for a given role definition; a successful call returns `HTTP/1.1 200 OK` with the role assignment in the response body.
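As a hedged sketch of such a request (the `$filter` value is a placeholder, and the beta `roleManagement` endpoint is assumed from Microsoft Graph's role management API rather than quoted from this article):

```http
GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=roleDefinitionId eq '<role-definition-id>'
```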
-## View assignments of single-application scope
+## List role assignments with single-application scope
-This section describes viewing assignments of a role with single-application scope. This feature is currently in public preview.
+This section describes how to list role assignments with single-application scope. This feature is currently in public preview.
1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com) with Privileged role administrator or Global administrator permissions in the Azure AD organization.
1. Select **App registrations**, and then select the app registration to view its properties. You might have to select **All applications** to see the complete list of app registrations in your Azure AD organization.
1. In the app registration, select **Roles and administrators**, and then select a role to view its properties.
- ![View app registration role assignments from the App registrations page](./media/view-assignments/app-reg-assignments.png)
+ ![List app registration role assignments from the App registrations page](./media/view-assignments/app-reg-assignments.png)
-1. Select **Assignments** to view the assignments for the role. Opening the assignments view from within the app registration shows you the assignments that are scoped to this Azure AD resource.
+1. Select **Assignments** to list the role assignments. Opening the assignments page from within the app registration shows you the role assignments that are scoped to this Azure AD resource.
- ![View app registration role assignments from the properties of an app registration](./media/view-assignments/app-reg-assignments-2.png)
+ ![List app registration role assignments from the properties of an app registration](./media/view-assignments/app-reg-assignments-2.png)
## Next steps
active-directory User Help Auth App Sign In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-sign-in.md
Previously updated : 06/03/2020 Last updated : 03/12/2021
Open the Microsoft Authenticator app, go to your work or school account, and turn on phone sign-in.
- **If you've already been using the app for two-factor verification**, you can tap the account tile to see a full screen view of the account. Then tap **Enable phone sign-in** to turn on phone sign-in.
- **If you can't find your work or school account** on the **Accounts** screen of the app, it means that you haven't added it to the app yet. Add your work or school account by following the steps in the [Add your work or school account help](user-help-auth-app-add-work-school-account.md).
+> [!NOTE]
+> Microsoft doesn't support a combination of device registration and certificate-based authentication in Authenticator on iOS. Instead, the user must register the device manually through Authenticator settings before signing in.
+ After you turn on phone sign-in, you can sign in using only the Microsoft Authenticator app. Here's how:

1. Sign in to your work or school account.
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/intro-kubernetes.md
Title: Introduction to Azure Kubernetes Service
description: Learn the features and benefits of Azure Kubernetes Service to deploy and manage container-based applications in Azure. Previously updated : 02/09/2021 Last updated : 02/24/2021
-# Azure Kubernetes Service (AKS)
+# Azure Kubernetes Service
-Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading much of the complexity and operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks for you, like health monitoring and maintenance.
+Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Since Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. Thus, AKS is free; you only pay for the agent nodes within your clusters, not for the masters.
-Since the Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. Thus, as a managed Kubernetes service, AKS is free; you only pay for the agent nodes within your clusters, not for the masters.
+You can create an AKS cluster using:
+* [The Azure CLI](kubernetes-walkthrough.md)
+* [The Azure portal](kubernetes-walkthrough-portal.md)
+* [Azure PowerShell](kubernetes-walkthrough-powershell.md)
+* Using template-driven deployment options, like [Azure Resource Manager templates](kubernetes-walkthrough-rm-template.md) and Terraform
-You can create an AKS cluster using the Azure portal, the Azure CLI, Azure PowerShell, or using template-driven deployment options, such as Resource Manager templates and Terraform. When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you.
-Additional features such as advanced networking, Azure Active Directory integration, and monitoring can also be configured during the deployment process. Windows Server containers are supported in AKS.
+When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Advanced networking, Azure Active Directory (Azure AD) integration, monitoring, and other features can be configured during the deployment process.
For more information on Kubernetes basics, see [Kubernetes core concepts for AKS][concepts-clusters-workloads].
-To get started, complete the AKS Quickstart [in the Azure portal][aks-portal] or [with the Azure CLI][aks-cli].
- [!INCLUDE [azure-lighthouse-supported-service](../../includes/azure-lighthouse-supported-service.md)]
+> [!NOTE]
+> AKS also supports Windows Server containers.
## Access, security, and monitoring
-For improved security and management, AKS lets you integrate with Azure Active Directory (Azure AD) and:
+For improved security and management, AKS lets you integrate with Azure AD to:
* Use Kubernetes role-based access control (Kubernetes RBAC).
* Monitor the health of your cluster and resources.

### Identity and security management
-To limit access to cluster resources, AKS supports [Kubernetes RBAC][kubernetes-rbac]. Kubernetes RBAC lets you control access and permissions to Kubernetes resources and namespaces.
+#### Kubernetes RBAC
+
+To limit access to cluster resources, AKS supports [Kubernetes RBAC][kubernetes-rbac]. Kubernetes RBAC controls access and permissions to Kubernetes resources and namespaces.
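As a minimal sketch of what Kubernetes RBAC looks like in practice (the role name and namespace below are hypothetical, not from this article), a Role granting read-only access to pods in one namespace:

```yaml
# Hypothetical example: read-only access to pods in the "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```

A RoleBinding would then attach this Role to a user or an Azure AD group.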
-You can also configure an AKS cluster to integrate with Azure AD. With Azure AD integration, you can configure Kubernetes access based on existing identity and group membership. Your existing Azure AD users and groups can be provided with an integrated sign-on experience and access to AKS resources.
+#### Azure AD
+
+You can configure an AKS cluster to integrate with Azure AD. With Azure AD integration, you can set up Kubernetes access based on existing identity and group membership. Your existing Azure AD users and groups can be provided with an integrated sign-on experience and access to AKS resources.
For more information on identity, see [Access and identity options for AKS][concepts-identity].
To secure your AKS clusters, see [Integrate Azure Active Directory with AKS][aks
### Integrated logging and monitoring
-Azure Monitor for Container Health collects memory and processor performance metrics from containers, nodes, and controllers within your AKS cluster and deployed applications. You can review both the container logs and [the Kubernetes master logs][aks-master-logs]. This monitoring data is stored in an Azure Log Analytics workspace and is available through the Azure portal, Azure CLI, or a REST endpoint.
+Azure Monitor for Container Health collects memory and processor performance metrics from containers, nodes, and controllers within your AKS cluster and deployed applications. You can review both container logs and [the Kubernetes master logs][aks-master-logs], which are:
+* Stored in an Azure Log Analytics workspace.
+* Available through the Azure portal, Azure CLI, or a REST endpoint.
For more information, see [Monitor Azure Kubernetes Service container health][container-health].
AKS nodes run on Azure virtual machines (VMs). With AKS nodes, you can connect storage to nodes and pods, upgrade cluster components, and use GPUs. AKS supports Kubernetes clusters that run multiple node pools to support mixed operating systems and Windows Server containers.
-For more information regarding Kubernetes cluster, node, and node pool capabilities, see [Kubernetes core concepts for AKS][concepts-clusters-workloads].
+For more information about Kubernetes cluster, node, and node pool capabilities, see [Kubernetes core concepts for AKS][concepts-clusters-workloads].
### Cluster node and pod scaling
-As demand for resources change, the number of cluster nodes or pods that run your services can automatically scale up or down. You can use both the horizontal pod autoscaler or the cluster autoscaler. This approach to scaling lets the AKS cluster automatically adjust to demands and only run the resources needed.
-As demand for resources change, the number of cluster nodes or pods that run your services can automatically scale up or down. You can use both the horizontal pod autoscaler or the cluster autoscaler. This approach to scaling lets the AKS cluster automatically adjust to demands and only run the resources needed.
+As demand for resources changes, the number of cluster nodes or pods that run your services automatically scales up or down. You can use both the horizontal pod autoscaler and the cluster autoscaler to adjust to demand and run only the necessary resources.
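As a hedged illustration of the horizontal pod autoscaler (the deployment name and thresholds are assumptions, not from this article), a manifest that scales a deployment between 1 and 5 replicas at 50% average CPU:

```yaml
# Illustrative only: target name and limits are assumptions
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: azure-vote-front
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azure-vote-front
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
```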
For more information, see [Scale an Azure Kubernetes Service (AKS) cluster][aks-scale].

### Cluster node upgrades
-AKS offers multiple Kubernetes versions. As new versions become available in AKS, your cluster can be upgraded using the Azure portal or Azure CLI. During the upgrade process, nodes are carefully cordoned and drained to minimize disruption to running applications.
+AKS offers multiple Kubernetes versions. As new versions become available in AKS, you can upgrade your cluster using the Azure portal or Azure CLI. During the upgrade process, nodes are carefully cordoned and drained to minimize disruption to running applications.
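The CLI upgrade flow can be sketched roughly as follows (the cluster and resource group names are placeholders, and the version shown is only an example; check available versions first):

```azurecli
# Check which Kubernetes versions are available for the cluster
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade to a specific supported version
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <new-version>
```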
To learn more about lifecycle versions, see [Supported Kubernetes versions in AKS][aks-supported versions]. For steps on how to upgrade, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
For more information, see [Confidential computing nodes on AKS][conf-com-node].
### Storage volume support
-To support application workloads, you can mount storage volumes for persistent data. You can use both static and dynamic volumes. Depending on the number of connected pods expected to share the storage volumes, you can use storage backed by either Azure Disks for single pod access, or Azure Files for multiple concurrent pod access.
+To support application workloads, you can mount static or dynamic storage volumes for persistent data. Depending on the number of connected pods expected to share the storage volumes, you can use storage backed by either:
+* Azure Disks for single pod access, or
+* Azure Files for multiple, concurrent pod access.
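As a sketch under assumed names (the claim name and size are hypothetical; `managed-premium` is one of the storage classes AKS provisions by default), a persistent volume claim that dynamically provisions an Azure Disk for single-pod access:

```yaml
# Hypothetical claim; ReadWriteOnce maps to Azure Disks (single pod access)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
```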
For more information, see [Storage options for applications in AKS][concepts-storage].
Get started with dynamic persistent volumes using [Azure Disks][azure-disk] or [
## Virtual networks and ingress
-An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network, and can directly communicate with other pods in the cluster and other nodes in the virtual network. Pods can also connect to other services in a peered virtual network and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.
+An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network, and can directly communicate with:
+* Other pods in the cluster
+* Other nodes in the virtual network.
+
+Pods can also connect to other services in a peered virtual network and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.
For more information, see the [Network concepts for applications in AKS][aks-networking].

### Ingress with HTTP application routing
-The HTTP application routing add-on makes it easy to access applications deployed to your AKS cluster. When enabled, the HTTP application routing solution configures an ingress controller in your AKS cluster.
+The HTTP application routing add-on helps you easily access applications deployed to your AKS cluster. When enabled, the HTTP application routing solution configures an ingress controller in your AKS cluster.
As applications are deployed, publicly accessible DNS names are autoconfigured. The HTTP application routing sets up a DNS zone and integrates it with the AKS cluster. You can then deploy Kubernetes ingress resources as normal.
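As an illustrative sketch (the host, service name, and DNS zone are placeholders; the `addon-http-application-routing` ingress class annotation is how the add-on's controller is selected), an ingress resource routed through the add-on:

```yaml
# Placeholder host and backend; the annotation selects the add-on's controller
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: sample.<CLUSTER_SPECIFIC_DNS_ZONE>
    http:
      paths:
      - backend:
          serviceName: sample-service
          servicePort: 80
```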
To get started with ingress traffic, see [HTTP application routing][aks-http-rou
## Development tooling integration
-Kubernetes has a rich ecosystem of development and management tools that work seamlessly with AKS. These tools include Helm and the Kubernetes extension for Visual Studio Code. These tools work seamlessly with AKS.
+Kubernetes has a rich ecosystem of development and management tools that work seamlessly with AKS. These tools include Helm and the Kubernetes extension for Visual Studio Code.
+
+Azure provides several tools that help streamline Kubernetes, such as Azure Dev Spaces and DevOps Starter.
+
+### Azure Dev Spaces
+
+Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams. With minimal configuration, you can run and debug containers directly in AKS. To get started, see [Azure Dev Spaces][azure-dev-spaces].
-Additionally, Azure provides several tools that help streamline Kubernetes, such as DevOps Starter.
+### DevOps Starter
DevOps Starter provides a simple solution for bringing existing code and Git repositories into Azure. DevOps Starter automatically:
* Creates Azure resources (such as AKS);
AKS is compliant with SOC, ISO, PCI DSS, and HIPAA. For more information, see [O
Learn more about deploying and managing AKS with the Azure CLI Quickstart.

> [!div class="nextstepaction"]
-> [AKS quickstart][aks-cli]
+> [Deploy an AKS Cluster using Azure CLI][aks-cli]
<!-- LINKS - external -->
[aks-engine]: https://github.com/Azure/aks-engine
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure portal. Previously updated : 01/13/2021 Last updated : 03/15/2021
# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you deploy an AKS cluster using the Azure portal. A multi-container application that includes a web front end and a Redis instance is run in the cluster. You then see how to monitor the health of the cluster and pods that run your application.
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+* Deploy an AKS cluster using the Azure portal.
+* Run a multi-container application with a web front-end and a Redis instance in the cluster.
+* Monitor the health of the cluster and pods that run your application.
![Image of browsing to Azure Vote sample application](media/container-service-kubernetes-walkthrough/azure-voting-application.png)
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## Sign in to Azure
+## Prerequisites
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).

## Create an AKS cluster
-To create an AKS cluster, complete the following steps:
- 1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. Select **Containers** > **Kubernetes Service**.
+2. Select **Containers** > **Kubernetes Service**.
3. On the **Basics** page, configure the following options:
- - **Project details**: Select an Azure **Subscription**, then select or create an Azure **Resource group**, such as *myResourceGroup*.
- - **Cluster details**: Enter a **Kubernetes cluster name**, such as *myAKSCluster*. Select a **Region** and **Kubernetes version** for the AKS cluster.
- - **Primary node pool**: Select a VM **Node size** for the AKS nodes. The VM size *can't* be changed once an AKS cluster has been deployed.
- - Select the number of nodes to deploy into the cluster. For this quickstart, set **Node count** to *1*. Node count *can* be adjusted after the cluster has been deployed.
+ - **Project details**:
+ * Select an Azure **Subscription**.
+ * Select or create an Azure **Resource group**, such as *myResourceGroup*.
+ - **Cluster details**:
+ * Enter a **Kubernetes cluster name**, such as *myAKSCluster*.
+ * Select a **Region** and **Kubernetes version** for the AKS cluster.
+ - **Primary node pool**:
+ * Select a VM **Node size** for the AKS nodes. The VM size *cannot* be changed once an AKS cluster has been deployed.
+ * Select the number of nodes to deploy into the cluster. For this quickstart, set **Node count** to *1*. Node count *can* be adjusted after the cluster has been deployed.
![Create AKS cluster - provide basic information](media/kubernetes-walkthrough-portal/create-cluster-basics.png)
- Select **Next: Node pools** when complete.
+4. Select **Next: Node pools** when complete.
-4. On the **Node pools** page, keep the default options. At the bottom of the screen, click **Next: Authentication**.
+5. Keep the default **Node pools** options. At the bottom of the screen, click **Next: Authentication**.
> [!CAUTION]
- > Creating new cluster identity may take multiple minutes to propagate and become available causing Service Principal not found errors and validation failures in Azure portal. If you hit this please visit [Troubleshoot common Azure Kubernetes Service problems](troubleshooting.md#received-an-error-saying-my-service-principal-wasnt-found-or-is-invalid-when-i-try-to-create-a-new-cluster) for mitigation.
+ > Newly created Azure AD service principals may take several minutes to propagate and become available, causing "service principal not found" errors and validation failures in the Azure portal. If you hit this error, please visit [our troubleshooting article](troubleshooting.md#received-an-error-saying-my-service-principal-wasnt-found-or-is-invalid-when-i-try-to-create-a-new-cluster) for mitigation.
+
+6. On the **Authentication** page, configure the following options:
+ - Create a new cluster identity by either:
+ * Leaving the **Authentication** field set to **System-assigned managed identity**, or
+ * Choosing **Service Principal** to use a service principal.
+ * Select *(new) default service principal* to create a default service principal, or
+ * Select *Configure service principal* to use an existing one. You will need to provide the existing principal's SPN client ID and secret.
+ - Enable the Kubernetes role-based access control (Kubernetes RBAC) option to provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster.
-5. On the **Authentication** page, configure the following options:
- - Create a new cluster identity by leaving the **Authentication** field with **System-assinged managed identity**. Alternatively, you can choose **Service Principal** to use a service principal. Select *(new) default service principal* to create a default service principal or *Configure service principal* to use an existing one. If you use an existing one, you will need to provide the SPN client ID and secret.
- - Enable the option for Kubernetes role-based access control (Kubernetes RBAC). This will provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster.
+ By default, *Basic* networking is used, and Azure Monitor for containers is enabled.
-By default, *Basic* networking is used, and Azure Monitor for containers is enabled. Click **Review + create** and then **Create** when validation completes.
+7. Click **Review + create** and then **Create** when validation completes.
-It takes a few minutes to create the AKS cluster. When your deployment is complete, click **Go to resource**, or browse to the AKS cluster resource group, such as *myResourceGroup*, and select the AKS resource, such as *myAKSCluster*. The AKS cluster dashboard is shown, as in this example:
-![Example AKS dashboard in the Azure portal](media/kubernetes-walkthrough-portal/aks-portal-dashboard.png)
+8. It takes a few minutes to create the AKS cluster. When your deployment is complete, navigate to your resource by either:
+ * Clicking **Go to resource**, or
+ * Browsing to the AKS cluster resource group and selecting the AKS resource.
+ * For example, browsing to *myResourceGroup* and selecting the *myAKSCluster* resource, as shown in the example dashboard below.
+
+ ![Example AKS dashboard in the Azure portal](media/kubernetes-walkthrough-portal/aks-portal-dashboard.png)
## Connect to the cluster
-To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. The `kubectl` client is pre-installed in the Azure Cloud Shell.
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-Open Cloud Shell using the `>_` button on the top of the Azure portal.
+1. Open Cloud Shell using the `>_` button on the top of the Azure portal.
-![Open the Azure Cloud Shell in the portal](media/kubernetes-walkthrough-portal/aks-cloud-shell.png)
+ ![Open the Azure Cloud Shell in the portal](media/kubernetes-walkthrough-portal/aks-cloud-shell.png)
-> [!NOTE]
-> To perform these operations in a local shell installation, you'll first need to verify Azure CLI is installed, then connect to Azure via the `az login` command.
+ > [!NOTE]
+ > To perform these operations in a local shell installation:
+ > 1. Verify Azure CLI is installed.
+ > 2. Connect to Azure via the `az login` command.
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them. The following example gets credentials for the cluster name *myAKSCluster* in the resource group named *myResourceGroup*:
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command downloads credentials and configures the Kubernetes CLI to use them.
-```azurecli
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
+ ```azurecli
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
-To verify the connection to your cluster, use the `kubectl get` command to return a list of the cluster nodes.
+3. Verify the connection to your cluster using `kubectl get` to return a list of the cluster nodes.
-```console
-kubectl get nodes
-```
+ ```console
+ kubectl get nodes
+ ```
-The following example output shows the single node created in the previous steps. Make sure that the status of the node is *Ready*:
+ Output shows the single node created in the previous steps. Make sure the node status is *Ready*:
-```output
-NAME STATUS ROLES AGE VERSION
-aks-agentpool-14693408-0 Ready agent 15m v1.11.5
-```
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-agentpool-14693408-0 Ready agent 15m v1.11.5
+ ```
## Run the application
-A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. In this quickstart, a manifest is used to create all objects needed to run the Azure Vote application. This manifest includes two Kubernetes deployments - one for the sample Azure Vote Python applications, and the other for a Redis instance. Two Kubernetes Services are also created - an internal service for the Redis instance, and an external service to access the Azure Vote application from the internet.
-
-In the Cloud Shell, use an editor to create a file named `azure-vote.yaml`, such as `code azure-vote.yaml`, `nano azure-vote.yaml` or `vi azure-vote.yaml`. Then copy in the following YAML definition:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: azure-vote-back
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-back
- template:
- metadata:
- labels:
- app: azure-vote-back
- spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 6379
- name: redis
-
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-vote-back
-spec:
- ports:
- - port: 6379
- selector:
- app: azure-vote-back
-
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: azure-vote-front
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- template:
- metadata:
- labels:
- app: azure-vote-front
- spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
-
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-vote-front
-spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-vote-front
-```
+A Kubernetes manifest file defines a cluster's desired state, like which container images to run.
-Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest:
+In this quickstart, you will use a manifest to create all objects needed to run the Azure Vote application. This manifest includes two Kubernetes deployments:
+* The sample Azure Vote Python applications.
+* A Redis instance.
-```console
-kubectl apply -f azure-vote.yaml
-```
+Two Kubernetes Services are also created:
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
-The following example output shows the Deployments and Services created successfully:
+1. In the Cloud Shell, use an editor to create a file named `azure-vote.yaml`, such as:
+ * `code azure-vote.yaml`
+ * `nano azure-vote.yaml`, or
+ * `vi azure-vote.yaml`.
-```output
-deployment "azure-vote-back" created
-service "azure-vote-back" created
-deployment "azure-vote-front" created
-service "azure-vote-front" created
-```
+1. Copy in the following YAML definition:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: azure-vote-back
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       app: azure-vote-back
+   template:
+     metadata:
+       labels:
+         app: azure-vote-back
+     spec:
+       nodeSelector:
+         "beta.kubernetes.io/os": linux
+       containers:
+       - name: azure-vote-back
+         image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+         env:
+         - name: ALLOW_EMPTY_PASSWORD
+           value: "yes"
+         resources:
+           requests:
+             cpu: 100m
+             memory: 128Mi
+           limits:
+             cpu: 250m
+             memory: 256Mi
+         ports:
+         - containerPort: 6379
+           name: redis
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: azure-vote-back
+ spec:
+   ports:
+   - port: 6379
+   selector:
+     app: azure-vote-back
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: azure-vote-front
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       app: azure-vote-front
+   template:
+     metadata:
+       labels:
+         app: azure-vote-front
+     spec:
+       nodeSelector:
+         "beta.kubernetes.io/os": linux
+       containers:
+       - name: azure-vote-front
+         image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+         resources:
+           requests:
+             cpu: 100m
+             memory: 128Mi
+           limits:
+             cpu: 250m
+             memory: 256Mi
+         ports:
+         - containerPort: 80
+         env:
+         - name: REDIS
+           value: "azure-vote-back"
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: azure-vote-front
+ spec:
+   type: LoadBalancer
+   ports:
+   - port: 80
+   selector:
+     app: azure-vote-front
+ ```
+
+1. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest:
+
+ ```console
+ kubectl apply -f azure-vote.yaml
+ ```
+
+ Output shows the successfully created deployments and services:
+
+ ```output
+ deployment "azure-vote-back" created
+ service "azure-vote-back" created
+ deployment "azure-vote-front" created
+ service "azure-vote-front" created
+ ```
## Test the application
To monitor progress, use the `kubectl get service` command with the `--watch` argument:

```console
kubectl get service azure-vote-front --watch
```
-Initially the *EXTERNAL-IP* for the *azure-vote-front* service is shown as *pending*.
+The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
```output
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.37.27   <pending>     80:30572/TCP   6s
```
-When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+
```output
azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m
```
To see the Azure Vote app in action, open a web browser to the external IP address of your service.
## Monitor health and logs
-When you created the cluster, Azure Monitor for containers was enabled. This monitoring feature provides health metrics for both the AKS cluster and pods running on the cluster.
+When you created the cluster, Azure Monitor for containers was enabled. Azure Monitor for containers provides health metrics for both the AKS cluster and pods running on the cluster.
-It may take a few minutes for this data to populate in the Azure portal. To see current status, uptime, and resource usage for the Azure Vote pods, browse back to the AKS resource in the Azure portal, such as *myAKSCluster*. You can then access the health status as follows:
+Metric data takes a few minutes to populate in the Azure portal. To see current health status, uptime, and resource usage for the Azure Vote pods:
-1. Under **Monitoring** on the left-hand side, choose **Insights**
-1. Across the top, choose to **+ Add Filter**
-1. Select *Namespace* as the property, then choose *\<All but kube-system\>*
-1. Choose to view the **Containers**.
+1. Browse back to the AKS resource in the Azure portal.
+1. Under **Monitoring** on the left-hand side, choose **Insights**.
+1. Across the top, choose to **+ Add Filter**.
+1. Select **Namespace** as the property, then choose *\<All but kube-system\>*.
+1. Select **Containers** to view them.
-The *azure-vote-back* and *azure-vote-front* containers are displayed, as shown in the following example:
+The `azure-vote-back` and `azure-vote-front` containers will display, as shown in the following example:
![View the health of running containers in AKS](media/kubernetes-walkthrough-portal/monitor-containers.png)
-To see logs for the `azure-vote-front` pod, select the **View container logs** from the drop down of the containers list. These logs include the *stdout* and *stderr* streams from the container.
+To view logs for the `azure-vote-front` pod, select **View container logs** from the containers list drop-down. These logs include the *stdout* and *stderr* streams from the container.
![View the container logs in AKS](media/kubernetes-walkthrough-portal/monitor-container-logs.png)

## Delete cluster
-When the cluster is no longer needed, delete the cluster resource, which deletes all associated resources. This operation can be completed in the Azure portal by selecting the **Delete** button on the AKS cluster dashboard. Alternatively, the [az aks delete][az-aks-delete] command can be used in the Cloud Shell:
+To avoid Azure charges, clean up your unnecessary resources. Select the **Delete** button on the AKS cluster dashboard. You can also use the [az aks delete][az-aks-delete] command in the Cloud Shell:
```azurecli
az aks delete --resource-group myResourceGroup --name myAKSCluster --no-wait
```

> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+>
+> If you used a managed identity, the identity is managed by the platform and does not require removal.
## Get the code
-In this quickstart, pre-created container images were used to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are available on GitHub.
-
-[https://github.com/Azure-Samples/azure-voting-app-redis][azure-vote-app]
+Pre-existing container images were used in this quickstart to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are [available on GitHub.][azure-vote-app]
## Next steps
-In this quickstart, you deployed a Kubernetes cluster and deployed a multi-container application to it.
+In this quickstart, you deployed a Kubernetes cluster and then deployed a multi-container application to it. Access the Kubernetes web dashboard for your AKS cluster.
+ To learn more about AKS by walking through a complete example, including building an application, deploying from Azure Container Registry, updating a running application, and scaling and upgrading your cluster, continue to the Kubernetes cluster tutorial.
aks Kubernetes Walkthrough Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-powershell.md
Title: 'Quickstart: Deploy an AKS cluster by using PowerShell'
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 01/13/2021 Last updated : 03/15/2021
# Quickstart: Deploy an Azure Kubernetes Service cluster using PowerShell
-In this quickstart, you deploy an Azure Kubernetes Service (AKS) cluster using PowerShell. AKS is a
-managed Kubernetes service that lets you quickly deploy and manage clusters. A multi-container
-application that includes a web frontend and a Redis instance is run in the cluster. You then see
-how to monitor the health of the cluster and pods that run your application.
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+* Deploy an AKS cluster using PowerShell.
+* Run a multi-container application with a web front-end and a Redis instance in the cluster.
+* Monitor the health of the cluster and pods that run your application.
To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers][windows-container-powershell].
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information
-about installing the Az PowerShell module, see
-[Install Azure PowerShell][install-azure-powershell].
+If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].
[!INCLUDE [cloud-shell-try-it](../../includes/cloud-shell-try-it.md)]
-If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
-should be billed. Select a specific subscription ID using the
+If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.

```azurepowershell-interactive
Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
```
## Create a resource group
-An [Azure resource group](../azure-resource-manager/management/overview.md)
-is a logical group in which Azure resources are deployed and managed. When you create a resource
-group, you are asked to specify a location. This location is where resource group metadata is
-stored, it is also where your resources run in Azure if you don't specify another region during
-resource creation. Create a resource group using the [New-AzResourceGroup][new-azresourcegroup]
-cmdlet.
+An [Azure resource group](../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you will be prompted to specify a location. This location is:
+* The storage location of your resource group metadata.
+* Where your resources will run in Azure if you don't specify another region during resource creation.
The following example creates a resource group named **myResourceGroup** in the **eastus** region.
+Create a resource group using the [New-AzResourceGroup][new-azresourcegroup]
+cmdlet.
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name myResourceGroup -Location eastus
+ ```
-The following example output shows the resource group created successfully:
+Output for successfully created resource group:
```plaintext
ResourceGroupName : myResourceGroup
ResourceId        : /subscriptions/00000000-0000-0000-0000-000000000000/resource
```
## Create AKS cluster
-Use the `ssh-keygen` command-line utility to generate an SSH key pair. For more details, see
-[Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md).
+1. Generate an SSH key pair using the `ssh-keygen` command-line utility.
+ * For more details, see [Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md).
-Use the [New-AzAks][new-azaks] cmdlet to create an AKS cluster. The
-following example creates a cluster named **myAKSCluster** with one node. Azure Monitor for
-containers is also enabled by default. This takes several minutes to complete.
+1. Create an AKS cluster using the [New-AzAks][new-azaks] cmdlet. Azure Monitor for containers is enabled by default.
-> [!NOTE]
-> When creating an AKS cluster, a second resource group is automatically created to store the AKS
-> resources. For more information, see
-> [Why are two resource groups created with AKS?](./faq.md#why-are-two-resource-groups-created-with-aks)
+ The following example creates a cluster named **myAKSCluster** with one node.
-```azurepowershell-interactive
-New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1
-```
+ ```azurepowershell-interactive
+ New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1
+ ```
After a few minutes, the command completes and returns information about the cluster.
+> [!NOTE]
+> When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](./faq.md#why-are-two-resource-groups-created-with-aks)
+
## Connect to the cluster
-To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If
-you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the
-`Install-AzAksKubectl` cmdlet:
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-```azurepowershell
-Install-AzAksKubectl
-```
+1. Install `kubectl` locally using the `Install-AzAksKubectl` cmdlet:
-To configure `kubectl` to connect to your Kubernetes cluster, use the
-[Import-AzAksCredential][import-azakscredential] cmdlet. The following
-example downloads credentials and configures the Kubernetes CLI to use them.
+ ```azurepowershell
+ Install-AzAksKubectl
+ ```
-```azurepowershell-interactive
-Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
-```
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
-To verify the connection to your cluster, use the [kubectl get][kubectl-get] command to return a
-list of the cluster nodes.
+ ```azurepowershell-interactive
+ Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+ ```
-```azurepowershell-interactive
-.\kubectl get nodes
-```
+3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
-The following example output shows the single node created in the previous steps. Make sure that the
-status of the node is **Ready**:
+ ```azurepowershell-interactive
+ .\kubectl get nodes
+ ```
-```plaintext
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-31718369-0 Ready agent 6m44s v1.15.10
-```
+ Output shows the single node created in the previous steps. Make sure the node status is *Ready*:
+
+ ```plaintext
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-31718369-0 Ready agent 6m44s v1.15.10
+ ```
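The *Ready* check above can also be scripted instead of eyeballed. The snippet below is a sketch that mocks `kubectl` with a shell function replaying the example output, so it runs without a cluster; against a real cluster you would delete the mock and keep the pipeline:

```shell
# Hypothetical mock of kubectl so the check runs without a cluster.
kubectl() {
  printf 'aks-nodepool1-31718369-0   Ready    agent   6m44s   v1.15.10\n'
}

# List any node whose STATUS column is not "Ready".
not_ready=$(kubectl get nodes --no-headers | awk '$2 != "Ready" {print $1}')
if [ -z "$not_ready" ]; then
  echo "all nodes Ready"
else
  echo "not Ready: $not_ready"
fi
```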
## Run the application
-A Kubernetes manifest file defines a desired state for the cluster, such as what container images to
-run. In this quickstart, a manifest is used to create all objects needed to run the Azure Vote
-application. This manifest includes two [Kubernetes deployments][kubernetes-deployment] - one for
-the sample Azure Vote Python applications, and the other for a Redis instance. Two
-[Kubernetes Services][kubernetes-service] are also created - an internal service for the Redis
-instance, and an external service to access the Azure Vote application from the internet.
-
-Create a file named `azure-vote.yaml` and copy in the following YAML definition. If you use the
-Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or
-physical system:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: azure-vote-back
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-back
- template:
+A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+
+In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
+* The sample Azure Vote Python applications.
+* A Redis instance.
+
+Two [Kubernetes Services][kubernetes-service] are also created:
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
+
+1. Create a file named `azure-vote.yaml`.
+ * If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system.
+1. Copy in the following YAML definition:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
metadata:
- labels:
+ name: azure-vote-back
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-back
+ template:
+ metadata:
+ labels:
+ app: azure-vote-back
+ spec:
+ nodeSelector:
+ "beta.kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-back
+ image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ env:
+ - name: ALLOW_EMPTY_PASSWORD
+ value: "yes"
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 6379
+ name: redis
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-back
+ spec:
+ ports:
+ - port: 6379
+ selector:
app: azure-vote-back
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-front
spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 6379
- name: redis
-
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-vote-back
-spec:
- ports:
- - port: 6379
- selector:
- app: azure-vote-back
-
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: azure-vote-front
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- template:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-front
+ template:
+ metadata:
+ labels:
+ app: azure-vote-front
+ spec:
+ nodeSelector:
+ "beta.kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 80
+ env:
+ - name: REDIS
+ value: "azure-vote-back"
+ ---
+ apiVersion: v1
+ kind: Service
metadata:
- labels:
- app: azure-vote-front
+ name: azure-vote-front
spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
-
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-vote-front
-spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-vote-front
-```
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-vote-front
+ ```
-Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your
-YAML manifest:
+1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-```azurepowershell-interactive
-.\kubectl apply -f azure-vote.yaml
-```
+ ```azurepowershell-interactive
+ .\kubectl apply -f azure-vote.yaml
+ ```
-The following example output shows the Deployments and Services created successfully:
+ Output shows the successfully created deployments and services:
-```plaintext
-deployment.apps/azure-vote-back created
-service/azure-vote-back created
-deployment.apps/azure-vote-front created
-service/azure-vote-front created
-```
+ ```plaintext
+ deployment.apps/azure-vote-back created
+ service/azure-vote-back created
+ deployment.apps/azure-vote-front created
+ service/azure-vote-front created
+ ```
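Because `azure-vote.yaml` carries four Kubernetes objects in one file, each object must be separated by a `---` line so the file parses as a multi-document YAML stream. This sketch counts the documents in a trimmed stand-in file (the file name and contents are illustrative, not the full manifest):

```shell
# Write a trimmed stand-in for azure-vote.yaml:
# four documents, three "---" separators.
cat > /tmp/azure-vote-sketch.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
---
apiVersion: v1
kind: Service
---
apiVersion: apps/v1
kind: Deployment
---
apiVersion: v1
kind: Service
EOF

seps=$(grep -c '^---$' /tmp/azure-vote-sketch.yaml)
echo "documents: $((seps + 1))"
```

If the count is lower than expected, a missing separator is the usual cause.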
## Test the application
-When the application runs, a Kubernetes service exposes the application frontend to the internet.
-This process can take a few minutes to complete.
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-To monitor progress, use the [kubectl get service][kubectl-get] command with the `--watch` argument.
+Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
```azurepowershell-interactive
.\kubectl get service azure-vote-front --watch
```
-Initially the **EXTERNAL-IP** for the **azure-vote-front** service is shown as **pending**.
+The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
```plaintext
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.37.27   <pending>     80:30572/TCP   6s
```
-When the **EXTERNAL-IP** address changes from **pending** to an actual public IP address, use `CTRL-C`
-to stop the `kubectl` watch process. The following example output shows a valid public IP address
-assigned to the service:
+Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
```plaintext
azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m
```

To see the Azure Vote app in action, open a web browser to the external IP address of your service.
![Voting app deployed in Azure Kubernetes Service](./media/kubernetes-walkthrough-powershell/voting-app-deployed-in-azure-kubernetes-service.png)
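As a non-interactive alternative to `--watch`, you can poll the service until the load balancer reports an IP. The sketch below mocks `kubectl` with a shell function backed by a counter file so it runs without a cluster; with a real cluster you would remove the mock and restore a real `sleep` between attempts:

```shell
# Hypothetical mock: the first two calls return nothing (IP still
# pending), the third returns an address, imitating LoadBalancer
# provisioning. A file holds the call count across subshells.
state=/tmp/poll-count; echo 0 > "$state"
kubectl() {
  n=$(( $(cat "$state") + 1 )); echo "$n" > "$state"
  if [ "$n" -ge 3 ]; then echo "52.179.23.131"; fi
}

ip=""
for attempt in 1 2 3 4 5; do
  ip=$(kubectl get service azure-vote-front -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  if [ -n "$ip" ]; then break; fi
  # real usage: sleep 10
done
echo "EXTERNAL-IP: $ip"
```

The JSONPath expression is the documented way to extract a single field from `kubectl get` output.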
-When the AKS cluster was created,
-[Azure Monitor for containers](../azure-monitor/containers/container-insights-overview.md) was enabled
-to capture health metrics for both the cluster nodes and pods. These health metrics are available in
-the Azure portal.
+View the cluster nodes' and pods' health metrics captured by Azure Monitor for containers in the Azure portal.
## Delete the cluster
-To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer
-needed, use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource
-group, container service, and all related resources.
+To avoid Azure charges, clean up your unnecessary resources. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
```azurepowershell-interactive
Remove-AzResourceGroup -Name myResourceGroup
```

> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster
-> is not removed. For steps on how to remove the service principal, see
-> [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity,
-> the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+>
+> If you used a managed identity, the identity is managed by the platform and does not require removal.
## Get the code
-In this quickstart, pre-created container images were used to create a Kubernetes deployment. The
-related application code, Dockerfile, and Kubernetes manifest file are available on GitHub.
-
-[https://github.com/Azure-Samples/azure-voting-app-redis][azure-vote-app]
+Pre-existing container images were used in this quickstart to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are [available on GitHub.][azure-vote-app]
## Next steps
-In this quickstart, you deployed a Kubernetes cluster and deployed a multi-container application to
-it. You can also [access the Kubernetes web dashboard][kubernetes-dashboard] for your AKS cluster.
+In this quickstart, you deployed a Kubernetes cluster and then deployed a multi-container application to it. [Access the Kubernetes web dashboard][kubernetes-dashboard] for your AKS cluster.
-To learn more about AKS, and walk through a complete code to deployment example, continue to the
-Kubernetes cluster tutorial.
+To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
> [!div class="nextstepaction"]
> [AKS tutorial][aks-tutorial]
aks Kubernetes Walkthrough Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-rm-template.md
Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster
description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS) Previously updated : 01/13/2021 Last updated : 03/15/2021
# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using an ARM template
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you deploy an AKS cluster using an Azure Resource Manager template (ARM template). A multi-container application that includes a web front end and a Redis instance is run in the cluster.
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+* Deploy an AKS cluster using an Azure Resource Manager template.
+* Run a multi-container application with a web front-end and a Redis instance in the cluster.
![Image of browsing to Azure Vote](media/container-service-kubernetes-walkthrough/azure-voting-application.png)
If your environment meets the prerequisites and you're familiar with using ARM t
### Create an SSH key pair
-To access AKS nodes, you connect using an SSH key pair. Use the `ssh-keygen` command to generate SSH public and private key files. By default, these files are created in the *~/.ssh* directory. If an SSH key pair with the same name exists in the given location, those files are overwritten.
+To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command will overwrite any SSH key pair with the same name already existing in the given location.
-Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.
+1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.
-The following command creates an SSH key pair using RSA encryption and a bit length of 2048:
+1. Run the `ssh-keygen` command. The following example creates an SSH key pair using RSA encryption and a bit length of 2048:
-```console
-ssh-keygen -t rsa -b 2048
-```
+ ```console
+ ssh-keygen -t rsa -b 2048
+ ```
For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure][ssh-keys].
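Since the article warns that generating a key at an existing path replaces the files there, a quick pre-check before running `ssh-keygen` can help. This is a sketch; the key path is illustrative (`ssh-keygen` defaults to `~/.ssh/id_rsa`):

```shell
# Check whether a key pair already exists before generating a new one.
# The path below is a hypothetical example, not the ssh-keygen default.
key="/tmp/demo_aks_key"
if [ -f "$key" ] || [ -f "$key.pub" ]; then
  echo "existing key pair at $key; pick another -f path or move it first"
else
  echo "no key pair at $key; safe to run: ssh-keygen -t rsa -b 2048 -f $key"
fi
```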
For more AKS samples, see the [AKS quickstart templates][aks-quickstart-templates].
## Deploy the template
-1. Select the following image to sign in to Azure and open a template.
+1. Select the following button to sign in to Azure and open a template.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-aks%2Fazuredeploy.json)
It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step.
### Connect to the cluster
-To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli][az-aks-install-cli] command:
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-```azurecli
-az aks install-cli
-```
+1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
+ ```azurecli
+ az aks install-cli
+ ```
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
-To verify the connection to your cluster, use the [kubectl get][kubectl-get] command to return a list of the cluster nodes.
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
-```console
-kubectl get nodes
-```
+3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
-The following example output shows the nodes created in the previous steps. Make sure that the status for all the nodes is *Ready*:
+ ```console
+ kubectl get nodes
+ ```
-```output
-NAME STATUS ROLES AGE VERSION
-aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
-aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6
-aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6
-```
+ Output shows the nodes created in the previous steps. Make sure that the status for all the nodes is *Ready*:
+
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6
+ aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6
+ aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6
+ ```
### Run the application
-A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. In this quickstart, a manifest is used to create all objects needed to run the Azure Vote application. This manifest includes two [Kubernetes deployments][kubernetes-deployment] - one for the sample Azure Vote Python applications, and the other for a Redis instance. Two [Kubernetes Services][kubernetes-service] are also created - an internal service for the Redis instance, and an external service to access the Azure Vote application from the internet.
-
-Create a file named `azure-vote.yaml` and copy in the following YAML definition. If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: azure-vote-back
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-back
- template:
+A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+
+In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
+* The sample Azure Vote Python applications.
+* A Redis instance.
+
+Two [Kubernetes Services][kubernetes-service] are also created:
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
+
+1. Create a file named `azure-vote.yaml`.
+ * If you use the Azure Cloud Shell, this file can be created using `vi` or `nano` as if working on a virtual or physical system.
+1. Copy in the following YAML definition:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
metadata:
- labels:
+ name: azure-vote-back
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-back
+ template:
+ metadata:
+ labels:
+ app: azure-vote-back
+ spec:
+ nodeSelector:
+ "beta.kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-back
+ image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ env:
+ - name: ALLOW_EMPTY_PASSWORD
+ value: "yes"
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 6379
+ name: redis
+ ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: azure-vote-back
+ spec:
+ ports:
+ - port: 6379
+ selector:
app: azure-vote-back
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-front
spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 6379
- name: redis
-
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-vote-back
-spec:
- ports:
- - port: 6379
- selector:
- app: azure-vote-back
-
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: azure-vote-front
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- template:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-front
+ template:
+ metadata:
+ labels:
+ app: azure-vote-front
+ spec:
+ nodeSelector:
+ "beta.kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 80
+ env:
+ - name: REDIS
+ value: "azure-vote-back"
+ ---
+ apiVersion: v1
+ kind: Service
metadata:
- labels:
- app: azure-vote-front
+ name: azure-vote-front
spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
-
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-vote-front
-spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-vote-front
-```
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-vote-front
+ ```
-Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-```console
-kubectl apply -f azure-vote.yaml
-```
+ ```console
+ kubectl apply -f azure-vote.yaml
+ ```
-The following example output shows the Deployments and Services created successfully:
+ Output shows the successfully created deployments and services:
-```output
-deployment "azure-vote-back" created
-service "azure-vote-back" created
-deployment "azure-vote-front" created
-service "azure-vote-front" created
-```
+ ```output
+ deployment "azure-vote-back" created
+ service "azure-vote-back" created
+ deployment "azure-vote-front" created
+ service "azure-vote-front" created
+ ```
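You can also script a check that all four objects were created. This sketch mocks `kubectl apply` with a shell function replaying the example output above, so it runs without a cluster; with a real cluster you would drop the mock:

```shell
# Hypothetical mock replaying the expected "kubectl apply" output.
kubectl() {
  cat <<'EOF'
deployment "azure-vote-back" created
service "azure-vote-back" created
deployment "azure-vote-front" created
service "azure-vote-front" created
EOF
}

created=$(kubectl apply -f azure-vote.yaml | grep -c ' created$')
echo "objects created: $created"
```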
### Test the application

When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-To monitor progress, use the [kubectl get service][kubectl-get] command with the `--watch` argument.
+Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
```console
kubectl get service azure-vote-front --watch
```
-Initially the *EXTERNAL-IP* for the *azure-vote-front* service is shown as *pending*.
+The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
```output
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.37.27   <pending>     80:30572/TCP   6s
```
-When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
```output
azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m
To see the Azure Vote app in action, open a web browser to the external IP addre
## Clean up resources
-When the cluster is no longer needed, use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+To avoid Azure charges, clean up your unnecessary resources. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
```azurecli-interactive
az group delete --name myResourceGroup --yes --no-wait
```

> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+>
+> If you used a managed identity, the identity is managed by the platform and does not require removal.
## Get the code
-In this quickstart, pre-created container images were used to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are available on GitHub.
-
-[https://github.com/Azure-Samples/azure-voting-app-redis][azure-vote-app]
+Pre-existing container images were used in this quickstart to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are [available on GitHub][azure-vote-app].
## Next steps
-In this quickstart, you deployed a Kubernetes cluster and deployed a multi-container application to it. [Access the Kubernetes web dashboard][kubernetes-dashboard] for the cluster you created.
+In this quickstart, you deployed a Kubernetes cluster and then deployed a multi-container application to it. [Access the Kubernetes web dashboard][kubernetes-dashboard] for your AKS cluster.
To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
aks Kubernetes Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough.md
Title: 'Quickstart: Deploy an AKS cluster by using Azure CLI'
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure CLI. Previously updated : 01/12/2021 Last updated : 02/26/2021
# Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI
-In this quickstart, you deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI. AKS is a managed Kubernetes service that lets you quickly deploy and manage clusters. A multi-container application that includes a web front end and a Redis instance is run in the cluster. You then see how to monitor the health of the cluster and pods that run your application.
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+* Deploy an AKS cluster using the Azure CLI.
+* Run a multi-container application with a web front-end and a Redis instance in the cluster.
+* Monitor the health of the cluster and pods that run your application.
-To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers][windows-container-cli].
-
-![Voting app deployed in Azure Kubernetes Service](./media/container-service-kubernetes-walkthrough/voting-app-deployed-in-azure-kubernetes-service.png)
+ ![Voting app deployed in Azure Kubernetes Service](./media/container-service-kubernetes-walkthrough/voting-app-deployed-in-azure-kubernetes-service.png)
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].

[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers][windows-container-cli].
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]

-- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
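If you're running locally and want to confirm you meet the minimum version, the comparison can be scripted. A minimal sketch, assuming bash and GNU `sort -V`; the hard-coded `2.19.1` stands in for the version your `az version` actually reports:

```shell
# Compare two dotted version strings; succeeds when $1 >= $2.
# Relies on sort -V (GNU coreutils) for version-aware ordering.
ver_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# 2.0.64 is the minimum version this article requires.
if ver_ge "2.19.1" "2.0.64"; then
  echo "CLI version OK"
else
  echo "Upgrade the Azure CLI"
fi
```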
> [!NOTE]
-> If running the commands in this quickstart locally (instead of Azure Cloud Shell), ensure you run the commands as administrator.
+> Run the commands as administrator if you plan to run the commands in this quickstart locally instead of in Azure Cloud Shell.
## Create a resource group
-An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are asked to specify a location. This location is where resource group metadata is stored, it is also where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create][az-group-create] command.
+An [Azure resource group](../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you will be prompted to specify a location. This location is:
+* The storage location of your resource group metadata.
+* Where your resources will run in Azure if you don't specify another region during resource creation.
The following example creates a resource group named *myResourceGroup* in the *eastus* location.
+Create a resource group using the [az group create][az-group-create] command.
+

```azurecli-interactive
az group create --name myResourceGroup --location eastus
```
-Output similar to the following example indicates the resource group has been created successfully:
+Output shows the successfully created resource group:
```json
{
Output similar to the following example indicates the resource group has been cr
}
```
-## Create AKS cluster
+## Enable cluster monitoring
-Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This will take several minutes to complete.
+1. Verify *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* are registered on your subscription. To check the registration status:
-> [!NOTE]
-> [Azure Monitor for containers][azure-monitor-containers] is enabled using the *--enable-addons monitoring* parameter, which requires *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* to be registered on you subscription. To check the registration status:
->
-> ```azurecli
-> az provider show -n Microsoft.OperationsManagement -o table
-> az provider show -n Microsoft.OperationalInsights -o table
-> ```
->
-> If they are not registered, use the following command to register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights*:
->
-> ```azurecli
-> az provider register --namespace Microsoft.OperationsManagement
-> az provider register --namespace Microsoft.OperationalInsights
-> ```
+ ```azurecli
+ az provider show -n Microsoft.OperationsManagement -o table
+ az provider show -n Microsoft.OperationalInsights -o table
+ ```
+
+ If they are not registered, register *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* using:
+
+ ```azurecli
+ az provider register --namespace Microsoft.OperationsManagement
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
+
+2. Enable [Azure Monitor for containers][azure-monitor-containers] using the *--enable-addons monitoring* parameter.
+
+## Create AKS cluster
+
+Create an AKS cluster using the [az aks create][az-aks-create] command. The following example creates a cluster named *myAKSCluster* with one node:
```azurecli-interactive
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys
```
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count
After a few minutes, the command completes and returns JSON-formatted information about the cluster.

> [!NOTE]
-> When creating an AKS cluster a second resource group is automatically created to store the AKS resources. For more information see [Why are two resource groups created with AKS?](./faq.md#why-are-two-resource-groups-created-with-aks)
+> When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](./faq.md#why-are-two-resource-groups-created-with-aks)
## Connect to the cluster
-To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli][az-aks-install-cli] command:
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-```azurecli
-az aks install-cli
-```
+1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
+ ```azurecli
+ az aks install-cli
+ ```
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
+ * Downloads credentials and configures the Kubernetes CLI to use them.
+ * Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using *--file*.
-> [!NOTE]
-> The above command uses the default location for the [Kubernetes configuration file][kubeconfig-file], which is `~/.kube/config`. You can specify a different location for your Kubernetes configuration file using *--file*.
-To verify the connection to your cluster, use the [kubectl get][kubectl-get] command to return a list of the cluster nodes.
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
-```azurecli-interactive
-kubectl get nodes
-```
+3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
-The following example output shows the single node created in the previous steps. Make sure that the status of the node is *Ready*:
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
-```output
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
-```
+ Output shows the single node created in the previous steps. Make sure the node status is *Ready*:
+
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
+ ```
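For scripted health checks, the same readiness test can be automated. A sketch assuming bash and awk, run here against the sample line above rather than a live cluster; in practice you would pipe in `kubectl get nodes --no-headers`:

```shell
# Check node readiness from "kubectl get nodes" style output.
# Live usage would replace the sample with:
#   kubectl get nodes --no-headers
sample='aks-nodepool1-31718369-0   Ready    agent   6m44s   v1.12.8'

not_ready=$(printf '%s\n' "$sample" | awk '$2 != "Ready" { n++ } END { print n+0 }')
if [ "$not_ready" -eq 0 ]; then
  echo "All nodes Ready"
fi
```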
## Run the application
-A [Kubernetes manifest file][kubernetes-deployment] defines a desired state for the cluster, such as what container images to run. In this quickstart, a manifest is used to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment] - one for the sample Azure Vote Python applications, and the other for a Redis instance. Two [Kubernetes Services][kubernetes-service] are also created - an internal service for the Redis instance, and an external service to access the Azure Vote application from the internet.
-
-Create a file named `azure-vote.yaml` and copy in the following YAML definition. If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: azure-vote-back
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-back
- template:
+A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+
+In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
+* The sample Azure Vote Python applications.
+* A Redis instance.
+
+Two [Kubernetes Services][kubernetes-service] are also created:
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
+
+1. Create a file named `azure-vote.yaml`.
+    * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system.
+1. Copy in the following YAML definition:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-back
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-back
+ template:
+ metadata:
+ labels:
+ app: azure-vote-back
+ spec:
+ nodeSelector:
+ "beta.kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-back
+ image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ env:
+ - name: ALLOW_EMPTY_PASSWORD
+ value: "yes"
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 6379
+ name: redis
+ ---
+ apiVersion: v1
+ kind: Service
metadata:
- labels:
+ name: azure-vote-back
+ spec:
+ ports:
+ - port: 6379
+ selector:
app: azure-vote-back
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: azure-vote-front
spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 6379
- name: redis
-
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-vote-back
-spec:
- ports:
- - port: 6379
- selector:
- app: azure-vote-back
-
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: azure-vote-front
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- template:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: azure-vote-front
+ template:
+ metadata:
+ labels:
+ app: azure-vote-front
+ spec:
+ nodeSelector:
+ "beta.kubernetes.io/os": linux
+ containers:
+ - name: azure-vote-front
+ image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 80
+ env:
+ - name: REDIS
+ value: "azure-vote-back"
+ ---
+ apiVersion: v1
+ kind: Service
metadata:
- labels:
- app: azure-vote-front
+ name: azure-vote-front
spec:
- nodeSelector:
- "beta.kubernetes.io/os": linux
- containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
-
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-vote-front
-spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-vote-front
-```
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: azure-vote-front
+ ```
-Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
-```console
-kubectl apply -f azure-vote.yaml
-```
+ ```console
+ kubectl apply -f azure-vote.yaml
+ ```
-The following example output shows the Deployments and Services created successfully:
+    Output shows the successfully created deployments and services:
-```output
-deployment "azure-vote-back" created
-service "azure-vote-back" created
-deployment "azure-vote-front" created
-service "azure-vote-front" created
-```
+ ```output
+ deployment "azure-vote-back" created
+ service "azure-vote-back" created
+ deployment "azure-vote-front" created
+ service "azure-vote-front" created
+ ```
## Test the application

When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-To monitor progress, use the [kubectl get service][kubectl-get] command with the `--watch` argument.
+Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
```azurecli-interactive
kubectl get service azure-vote-front --watch
```
-Initially the *EXTERNAL-IP* for the *azure-vote-front* service is shown as *pending*.
+The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
```output
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.37.27   <pending>     80:30572/TCP   6s
```
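While waiting, the external IP can also be checked non-interactively. A sketch assuming bash and awk, shown against a sample line rather than live `kubectl` output; in practice you would pipe in `kubectl get service azure-vote-front --no-headers`:

```shell
# Pull the EXTERNAL-IP column (4th field) out of a service line.
line='azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m'

external_ip=$(printf '%s\n' "$line" | awk '{ print $4 }')
if [ "$external_ip" != "<pending>" ]; then
  echo "Service is reachable at http://$external_ip"
fi
```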
-When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
```output
azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m
To see the Azure Vote app in action, open a web browser to the external IP addre
![Voting app deployed in Azure Kubernetes Service](./media/container-service-kubernetes-walkthrough/voting-app-deployed-in-azure-kubernetes-service.png)
-When the AKS cluster was created, [Azure Monitor for containers][azure-monitor-containers] was enabled to capture health metrics for both the cluster nodes and pods. These health metrics are available in the Azure portal.
+View the cluster nodes' and pods' health metrics captured by [Azure Monitor for containers][azure-monitor-containers] in the Azure portal.
## Delete the cluster
-To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+To avoid Azure charges, clean up your unnecessary resources. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
```azurecli-interactive
az group delete --name myResourceGroup --yes --no-wait
```

> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+>
+> If you used a managed identity, the identity is managed by the platform and does not require removal.
## Get the code
-In this quickstart, pre-created container images were used to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are available on GitHub.
-
-[https://github.com/Azure-Samples/azure-voting-app-redis][azure-vote-app]
+Pre-existing container images were used in this quickstart to create a Kubernetes deployment. The related application code, Dockerfile, and Kubernetes manifest file are [available on GitHub][azure-vote-app].
## Next steps
-In this quickstart, you deployed a Kubernetes cluster and deployed a multi-container application to it. You can also [access the Kubernetes web dashboard][kubernetes-dashboard] for your AKS cluster.
+In this quickstart, you deployed a Kubernetes cluster and then deployed a multi-container application to it. [Access the Kubernetes web dashboard][kubernetes-dashboard] for your AKS cluster.
To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/quickstart-helm.md
description: Use Helm with AKS and Azure Container Registry to package and run a
Previously updated : 01/12/2021 Last updated : 03/15/2021

# Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm
-[Helm][helm] is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers such as *APT* and *Yum*, Helm is used to manage Kubernetes charts, which are packages of preconfigured Kubernetes resources.
+[Helm][helm] is an open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. Similar to Linux package managers like *APT* and *Yum*, Helm manages Kubernetes charts, which are packages of pre-configured Kubernetes resources.
-This article shows you how to use Helm to package and run an application on AKS. For more details on installing an existing application using Helm, see [Install existing applications with Helm in AKS][helm-existing].
+In this quickstart, you'll use Helm to package and run an application on AKS. For more details on installing an existing application using Helm, see the [Install existing applications with Helm in AKS][helm-existing] how-to guide.
## Prerequisites
This article shows you how to use Helm to package and run an application on AKS.
* [Helm v3 installed][helm-install]. ## Create an Azure Container Registry
-To use Helm to run your application in your AKS cluster, you need an Azure Container Registry to store your container images. The below example uses [az acr create][az-acr-create] to create an ACR named *MyHelmACR* in the *MyResourceGroup* resource group with the *Basic* SKU. You should provide your own unique registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
+To run your application in your AKS cluster using Helm, you'll need an Azure Container Registry (ACR) to store your container images. Provide your own registry name; it must be unique within Azure and contain 5-50 alphanumeric characters. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
+
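The format rule can be checked locally before calling Azure. A minimal sketch assuming a POSIX shell; note this validates only the character and length rules, not availability (for availability, the Azure CLI offers `az acr check-name`):

```shell
# Check the ACR name format rule: 5-50 alphanumeric characters.
valid_acr_name() {
  case "$1" in
    *[!a-zA-Z0-9]*) return 1 ;;   # reject anything non-alphanumeric
  esac
  [ "${#1}" -ge 5 ] && [ "${#1}" -le 50 ]
}

valid_acr_name "MyHelmACR" && echo "format OK"
valid_acr_name "my-acr" || echo "hyphens are not allowed"
```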
+The below example uses [az acr create][az-acr-create] to create an ACR named *MyHelmACR* in *MyResourceGroup* with the *Basic* SKU.
```azurecli
az group create --name MyResourceGroup --location eastus
az acr create --resource-group MyResourceGroup --name MyHelmACR --sku Basic
```
-The output is similar to the following example. Make a note of the *loginServer* value for your ACR since it will be used in a later step. In the below example, *myhelmacr.azurecr.io* is the *loginServer* for *MyHelmACR*.
+Output will be similar to the following example. Take note of your *loginServer* value for your ACR since you'll use it in a later step. In the below example, *myhelmacr.azurecr.io* is the *loginServer* for *MyHelmACR*.
```console
{
The output is similar to the following example. Make a note of the *loginServer*
}
```
-## Create an Azure Kubernetes Service cluster
+## Create an AKS cluster
+
+Your new AKS cluster needs access to your ACR to pull the container images and run them. Use the following command to:
+* Create an AKS cluster called *MyAKS* and attach *MyHelmACR*.
+* Grant the *MyAKS* cluster access to your *MyHelmACR* ACR.
-Create an AKS cluster. The below command creates an AKS cluster called MyAKS and attaches MyHelmACR.
```azurecli
az aks create -g MyResourceGroup -n MyAKS --location eastus --attach-acr MyHelmACR --generate-ssh-keys
```
-Your AKS cluster needs access to your ACR to pull the container images and run them. The above command also grants the *MyAKS* cluster access to your *MyHelmACR* ACR.
- ## Connect to your AKS cluster
-To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client.
+To connect to a Kubernetes cluster from your local computer, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [az aks install-cli][] command:
+1. Install `kubectl` locally using the `az aks install-cli` command:
-```azurecli
-az aks install-cli
-```
+ ```azurecli
+ az aks install-cli
+ ```
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][] command. The following example gets credentials for the AKS cluster named *MyAKS* in the *MyResourceGroup*:
+2. Configure `kubectl` to connect to your Kubernetes cluster using the `az aks get-credentials` command. The following command example gets credentials for the AKS cluster named *MyAKS* in the *MyResourceGroup*:
-```azurecli
-az aks get-credentials --resource-group MyResourceGroup --name MyAKS
-```
+ ```azurecli
+ az aks get-credentials --resource-group MyResourceGroup --name MyAKS
+ ```
## Download the sample application
cd dev-spaces/samples/nodejs/getting-started/webfrontend
## Create a Dockerfile
-Create a new *Dockerfile* file using the following:
+Create a new *Dockerfile* using the following content:
```dockerfile
FROM node:latest
CMD ["node","server.js"]
## Build and push the sample application to the ACR
-Use the [az acr build][az-acr-build] command to build and push an image to the registry, using the preceding Dockerfile. The `.` at the end of the command sets the location of the Dockerfile, in this case the current directory.
+Using the preceding Dockerfile, run the [az acr build][az-acr-build] command to build and push an image to the registry. The `.` at the end of the command sets the location of the Dockerfile (in this case, the current directory).
```azurecli
az acr build --image webfrontend:v1 \
Generate your Helm chart using the `helm create` command.
helm create webfrontend
```
-Make the following updates to *webfrontend/values.yaml*. Substitute the loginServer of your registry that you noted in an earlier step, such as *myhelmacr.azurecr.io*:
-
+Update *webfrontend/values.yaml*:
+* Replace the loginServer of your registry that you noted in an earlier step, such as *myhelmacr.azurecr.io*.
* Change `image.repository` to `<loginServer>/webfrontend`
* Change `service.type` to `LoadBalancer`
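These edits can also be scripted. A sketch assuming bash and GNU `sed -i`, run against a trimmed stand-in for the generated file rather than a real chart; `myhelmacr.azurecr.io` is a placeholder for your own *loginServer*:

```shell
# A trimmed sample of the relevant lines from a generated values.yaml.
cat > values.yaml <<'EOF'
image:
  repository: nginx
service:
  type: ClusterIP
  port: 80
EOF

# Point the image at your registry and expose the service publicly.
sed -i 's|repository: .*|repository: myhelmacr.azurecr.io/webfrontend|' values.yaml
sed -i 's|type: ClusterIP|type: LoadBalancer|' values.yaml

grep -E 'repository:|type:' values.yaml
```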
appVersion: v1
## Run your Helm chart
-Use the `helm install` command to install your application using your Helm chart.
+Install your application using your Helm chart using the `helm install` command.
```console
helm install webfrontend webfrontend/
```
-It takes a few minutes for the service to return a public IP address. To monitor the progress, use the `kubectl get service` command with the *watch* parameter:
+It takes a few minutes for the service to return a public IP address. Monitor progress using the `kubectl get service` command with the `--watch` argument.
```console
$ kubectl get service --watch
webfrontend LoadBalancer 10.0.141.72 <pending> 80:32150/TCP 2m
webfrontend   LoadBalancer   10.0.141.72   <EXTERNAL-IP>   80:32150/TCP   7m
```
-Navigate to the load balancer of your application in a browser using the `<EXTERNAL-IP>` to see the sample application.
+Navigate to your application's load balancer in a browser using the `<EXTERNAL-IP>` to see the sample application.
## Delete the cluster
-When the cluster is no longer needed, use the [az group delete][az-group-delete] command to remove the resource group, the AKS cluster, the container registry, the container images stored there, and all related resources.
+Use the [az group delete][az-group-delete] command to remove the resource group, the AKS cluster, the container registry, the container images stored in the ACR, and all related resources.
```azurecli-interactive
az group delete --name MyResourceGroup --yes --no-wait
```

> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete]. If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
+>
+> If you used a managed identity, the identity is managed by the platform and does not require removal.
## Next steps
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
**How often should I expect to upgrade Kubernetes versions to stay in support?**
-Stating with Kubernetes 1.19, the [open source community has expanded support to 1 year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments, at a minimum. This means starting with AKS clusters on 1.19, you will be able to upgrade at a minimum of once a year to stay on a supported version. For versions on 1.18 or below, the window of support remains at 9 months which requires an upgrade once every 9 months to stay on a supported version. It is highly recommended to regularly test new versions and be prepared to upgrade to newer versions to capture the latest stable enhancements within Kubernetes.
+Starting with Kubernetes 1.19, the [open source community has expanded support to 1 year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments, at a minimum. This means starting with AKS clusters on 1.19, you will be able to upgrade at a minimum of once a year to stay on a supported version. For versions on 1.18 or below, the window of support remains at 9 months which requires an upgrade once every 9 months to stay on a supported version. It is highly recommended to regularly test new versions and be prepared to upgrade to newer versions to capture the latest stable enhancements within Kubernetes.
**What happens when a user upgrades a Kubernetes cluster with a minor version that isn't supported?**
app-service Deploy Ci Cd Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-ci-cd-custom-container.md
Title: CI/CD to custom Linux containers
-description: Learn how to set up continuous deployment to a custom Linux container in Azure App Service. Continuous deployment is supported for Docker Hub and ACR.
+ Title: CI/CD to custom containers
+description: Set up continuous deployment to a custom Windows or Linux container in Azure App Service.
keywords: azure app service, linux, docker, acr,oss ms.assetid: a47fb43a-bbbd-4751-bdc1-cd382eae49f8 Previously updated : 11/08/2018 Last updated : 03/12/2021
+zone_pivot_groups: app-service-containers-windows-linux
-# Continuous deployment with Web App for Containers
+# Continuous deployment with custom containers in Azure App Service
In this tutorial, you configure continuous deployment for a custom container image from managed [Azure Container Registry](https://azure.microsoft.com/services/container-registry/) repositories or [Docker Hub](https://hub.docker.com).
-## Enable continuous deployment with ACR
+## 1. Go to Deployment Center
-![Screenshot of ACR webhook](./media/deploy-ci-cd-custom-container/ci-cd-acr-02.png)
+In the [Azure portal](https://portal.azure.com), navigate to the management page for your App Service app.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select the **App Service** option on the left side of the page.
-3. Select the name of the app for which you want to configure continuous deployment.
-4. On the **Container Settings** page, select **Single Container**
-5. Select **Azure Container Registry**
-6. Select **Continuous Deployment > On**
-7. Select **Save** to enable continuous deployment.
+From the left menu, click **Deployment Center** > **Settings**.
-## Use the ACR webhook
+## 2. Choose deployment source
-Once Continuous Deployment has been enabled, you can view the newly created webhook on your Azure Container Registry webhooks page.
+**Choosing** the deployment source depends on your scenario:
+- **Container registry** sets up CI/CD between your container registry and App Service.
+- The **GitHub Actions** option is for you if you maintain the source code for your container image in GitHub. Triggered by new commits to your GitHub repository, the deploy action can run `docker build` and `docker push` directly to your container registry, then update your App Service app to run the new image. For more information, see [How CI/CD works with GitHub Actions](#how-cicd-works-with-github-actions).
+- To set up CI/CD with **Azure Pipelines**, see [Deploy an Azure Web App Container from Azure Pipelines](/devops/pipelines/targets/webapp-on-container-linux).
-![Screenshot that shows where you can view the newly created webhook on your Azure Container Registry webhooks page.](./media/deploy-ci-cd-custom-container/ci-cd-acr-03.png)
+> [!NOTE]
+> For a Docker Compose app, select **Container Registry**.
-In your Container Registry, click on Webhooks to view the current webhooks.
+If you choose GitHub Actions, **click** **Authorize** and follow the authorization prompts. If you've already authorized with GitHub before, you can deploy from a different user's repository by clicking **Change Account**.
-## Enable continuous deployment with Docker Hub (optional)
+Once you authorize your Azure account with GitHub, **select** the **Organization**, **Repository**, and **Branch** to deploy from.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select the **App Service** option on the left side of the page.
-3. Select the name of the app for which you want to configure continuous deployment.
-4. On the **Container Settings** page, select **Single Container**
-5. Select **Docker Hub**
-6. Select **Continuous Deployment > On**
-7. Select **Save** to enable continuous deployment.
+## 3. Configure registry settings
-![Screenshot of app setting](./media/deploy-ci-cd-custom-container/ci-cd-docker-02.png)
+To deploy a multi-container (Docker Compose) app, **select** **Docker Compose** in **Container Type**.
-Copy the Webhook URL. To add a webhook for Docker Hub, follow <a href="https://docs.docker.com/docker-hub/webhooks/" target="_blank">webhooks for Docker Hub</a>.
+If you don't see the **Container Type** dropdown, scroll back up to **Source** and **select** **Container Registry**.
+
+In **Registry source**, **select** where your container registry is. If it's neither Azure Container Registry nor Docker Hub, **select** **Private Registry**.
+
+> [!NOTE]
+> If your multi-container (Docker Compose) app uses more than one private image, make sure the private images are in the same private registry and accessible with the same user credentials. If your multi-container app only uses public images, **select** **Docker Hub**, even if some images are not in Docker Hub.
+
+Follow the next steps by selecting the tab that matches your choice.
+
+# [Azure Container Registry](#tab/acr)
+
+The **Registry** dropdown displays the registries in the same subscription as your app. **Select** the registry you want.
+
+> [!NOTE]
+> To deploy from a registry in a different subscription, **select** **Private Registry** in **Registry source** instead.
+
+Follow the next step depending on the **Container Type**:
+- For **Docker Compose**, **select** the registry for your private images. **Click** **Choose file** to upload your [Docker Compose file](https://docs.docker.com/compose/compose-file/), or just **paste** the content of your Docker Compose file into **Config**.
+- For **Single Container**, **select** the **Image** and **Tag** to deploy. If you want, **type** the startup command in **Startup File**.
+
+App Service appends the string in **Startup File** to [the end of the `docker run` command (as the `[COMMAND] [ARG...]` segment)](https://docs.docker.com/engine/reference/run/) when starting your container.
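As an illustrative sketch (the image and startup string below are examples, not values App Service requires), the appending behaves like:

```shell
# Illustration only: App Service effectively appends the Startup File string
# to the docker run command as the [COMMAND] [ARG...] segment.
image="nginx:latest"                  # the Image:Tag you selected
startup_file="nginx -g 'daemon off;'" # the string typed in Startup File
cmd="docker run -d -p 8080:80 $image $startup_file"
echo "$cmd"
```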
+
+# [Docker Hub](#tab/dockerhub)
+
+In **Repository Access**, **select** whether the image to deploy is public or private. For a Docker Compose app with one or more private images, **select** **Private**.
+
+If you select a private image, **specify** the **Login** (username) and **Password** of the Docker account.
+
+Follow the next step depending on the **Container Type**:
+- For **Docker Compose**, **select** the registry for your private images. **Click** **Choose file** to upload your [Docker Compose file](https://docs.docker.com/compose/compose-file/), or just **paste** the content of your Docker Compose file into **Config**.
+- For **Single Container**, **supply** the image and tag name in **Full Image Name and Tag**, separated by a `:` (for example, `nginx:latest`). If you want, **type** the startup command in **Startup File**.
+
+App Service appends the string in **Startup File** to [the end of the `docker run` command (as the `[COMMAND] [ARG...]` segment)](https://docs.docker.com/engine/reference/run/) when starting your container.
+
+# [Private Registry](#tab/private)
+
+In **Server URL**, **type** the URL of the server, beginning with **https://**.
+
+In **Login** and **Password**, **type** the login credentials for your private registry.
+
+Follow the next step depending on the **Container Type**:
+- For **Docker Compose**, **select** the registry for your private images. **Click** **Choose file** to upload your [Docker Compose file](https://docs.docker.com/compose/compose-file/), or just **paste** the content of your Docker Compose file into **Config**.
+- For **Single Container**, **supply** the image and tag name in **Full Image Name and Tag**, separated by a `:` (for example, `nginx:latest`). If you want, **type** the startup command in **Startup File**.
+
+App Service appends the string in **Startup File** to [the end of the `docker run` command (as the `[COMMAND] [ARG...]` segment)](https://docs.docker.com/engine/reference/run/) when starting your container.
+
+---
+
+## 4. Enable CI/CD
+
+App Service supports CI/CD integration with Azure Container Registry and Docker Hub. To enable it, **select** **On** in **Continuous deployment**.
+
+> [!NOTE]
+> If you select **GitHub Actions** in **Source**, you don't get this option because CI/CD is handled by GitHub Actions directly. Instead, you see a **Workflow Configuration** section, where you can **click** **Preview file** to inspect the workflow file. Azure commits this file into your selected GitHub source repository to handle build and deploy tasks. For more information, see [How CI/CD works with GitHub Actions](#how-cicd-works-with-github-actions).
+
+When you enable this option, App Service adds a webhook to your repository in Azure Container Registry or Docker Hub. Your repository posts to this webhook whenever your selected image is updated with `docker push`. The webhook causes your App Service app to restart and run `docker pull` to get the updated image.
+
+**For other private registries**, you can post to the webhook manually or as a step in a CI/CD pipeline. In **Webhook URL**, **click** the **Copy** button to get the webhook URL.
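A minimal sketch of posting to the webhook manually with curl (the app name and password are placeholders; the URL shape matches what the **Webhook URL** field shows):

```shell
# Placeholders: replace <app-name> and <publish-password> with your app's name
# and publishing credential before running the curl line.
webhook_url='https://$<app-name>:<publish-password>@<app-name>.scm.azurewebsites.net/docker/hook'

# POST an empty body to make the app restart and docker pull the updated image:
# curl -d '' -X POST "$webhook_url"
echo "$webhook_url"
```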
+
+> [!NOTE]
+> Support for multi-container (Docker Compose) apps is limited:
+> - For Azure Container Registry, App Service creates a webhook in the selected registry with the registry as the scope. A `docker push` to any repository in the registry (including the ones not referenced by your Docker Compose file) triggers an app restart. You may want to [modify the webhook](../container-registry/container-registry-webhook.md) to a narrower scope.
+> - Docker Hub doesn't support webhooks at the registry level. You must **add** the webhooks manually to the images specified in your Docker Compose file.
+
+## 5. Save your settings
+
+**Click** **Save**.
+## How CI/CD works with GitHub Actions
+
+If you choose **GitHub Actions** in **Source** (see [Choose deployment source](#2-choose-deployment-source)), App Service sets up CI/CD in the following ways:
+
+- Deposits a GitHub Actions workflow file into your GitHub repository to handle build and deploy tasks to App Service.
+- Adds the credentials for your private registry as GitHub secrets. The generated workflow file runs the [Azure/docker-login](https://github.com/Azure/docker-login) action to sign in with your private registry, then runs `docker push` to deploy to it.
+- Adds the publishing profile for your app as a GitHub secret. The generated workflow file uses this secret to authenticate with App Service, then runs the [Azure/webapps-deploy](https://github.com/Azure/webapps-deploy) action to configure the updated image, which triggers an app restart to pull in the updated image.
+- Captures information from the [workflow run logs](https://docs.github.com/actions/managing-workflow-runs/using-workflow-run-logs) and displays it in the **Logs** tab in your app's **Deployment Center**.
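The generated workflow file follows roughly this shape (a simplified sketch; the secret names below stand in for the generated ones and are assumptions, not the exact file Azure commits):

```yaml
name: Build and deploy container app to Azure Web App

on:
  push:
    branches: [ main ]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Sign in to the private registry (credentials stored as GitHub secrets)
      - uses: azure/docker-login@v1
        with:
          login-server: <registry-server>
          username: ${{ secrets.AzureAppService_ContainerUsername_... }}
          password: ${{ secrets.AzureAppService_ContainerPassword_... }}
      - run: |
          docker build . -t <registry-server>/<image>:${{ github.sha }}
          docker push <registry-server>/<image>:${{ github.sha }}
      # Configure the updated image on the app, which triggers an app restart
      - uses: azure/webapps-deploy@v2
        with:
          app-name: '<app-name>'
          publish-profile: ${{ secrets.AzureAppService_PublishProfile_... }}
          images: '<registry-server>/<image>:${{ github.sha }}'
```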
+
+You can customize the GitHub Actions build provider in the following ways:
+
+- Customize the workflow file after it's generated in your GitHub repository. For more information, see [Workflow syntax for GitHub Actions](https://docs.github.com/actions/reference/workflow-syntax-for-github-actions). Just make sure that the workflow ends with the [Azure/webapps-deploy](https://github.com/Azure/webapps-deploy) action to trigger an app restart.
+- If the selected branch is protected, you can still preview the workflow file without saving the configuration, then add it and the required GitHub secrets into your repository manually. This method doesn't give you the log integration with the Azure portal.
+- Instead of a publishing profile, deploy using a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) in Azure Active Directory.
+
+#### Authenticate with a service principal
+
+This optional configuration replaces the default authentication with publishing profiles in the generated workflow file.
+
+**Generate** a service principal with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). In the following example, replace *\<subscription-id>*, *\<group-name>*, and *\<app-name>* with your own values. **Save** the entire JSON output for the next step, including the top-level `{}`.
+
+```azurecli-interactive
+az ad sp create-for-rbac --name "myAppDeployAuth" --role contributor \
+ --scopes /subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.Web/sites/<app-name> \
+ --sdk-auth
+```
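The command's `--sdk-auth` output resembles the following JSON object (values elided):

```output
{
  "clientId": "<GUID>",
  "clientSecret": "<GUID>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>",
  (...)
}
```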
+
+> [!IMPORTANT]
+> For security, grant the minimum required access to the service principal. The scope in the previous example is limited to the specific App Service app and not the entire resource group.
+
+In [GitHub](https://github.com/), **browse** to your repository, then **select** **Settings > Secrets > Add a new secret**. **Paste** the entire JSON output from the Azure CLI command into the secret's value field. **Give** the secret a name like `AZURE_CREDENTIALS`.
+
+In the workflow file generated by the **Deployment Center**, **revise** the `azure/webapps-deploy` step with code like the following example:
+
+```yaml
+- name: Sign in to Azure
+  # Use the GitHub secret you added
+  uses: azure/login@v1
+  with:
+    creds: ${{ secrets.AZURE_CREDENTIALS }}
+- name: Deploy to Azure Web App
+  # Remove publish-profile
+  uses: azure/webapps-deploy@v2
+  with:
+    app-name: '<app-name>'
+    slot-name: 'production'
+    images: '<registry-server>/${{ secrets.AzureAppService_ContainerUsername_... }}/<image>:${{ github.sha }}'
+- name: Sign out of Azure
+  run: |
+    az logout
+```
+ ## Automate with CLI
-To configure CI/CD using the Azure CLI, run the [az webapp deployment container config](/cli/azure/webapp/deployment/container#az-webapp-deployment-container-config) command to generate the webhook URL. The URL can be used to configure your DockerHub or Azure Container Registry.
+To configure the container registry and the Docker image, **run** [az webapp config container set](/cli/azure/webapp/config/container#az-webapp-config-container-set).
+
+# [Azure Container Registry](#tab/acr)
+
+```azurecli-interactive
+az webapp config container set --name <app-name> --resource-group <group-name> --docker-custom-image-name '<image>:<tag>' --docker-registry-server-url 'https://<registry-name>.azurecr.io' --docker-registry-server-user '<username>' --docker-registry-server-password '<password>'
+```
+
+# [Docker Hub](#tab/dockerhub)
+
+```azurecli-interactive
+# Public image
+az webapp config container set --name <app-name> --resource-group <group-name> --docker-custom-image-name <image-name>
+
+# Private image
+az webapp config container set --name <app-name> --resource-group <group-name> --docker-custom-image-name <image-name> --docker-registry-server-user <username> --docker-registry-server-password <password>
+```
+
+# [Private Registry](#tab/private)
+
+```azurecli-interactive
+az webapp config container set --name <app-name> --resource-group <group-name> --docker-custom-image-name '<image>:<tag>' --docker-registry-server-url <private-repo-url> --docker-registry-server-user <username> --docker-registry-server-password <password>
+```
+
+---
+
+To configure a multi-container (Docker Compose) app, **prepare** a Docker Compose file locally, then **run** [az webapp config container set](/cli/azure/webapp/config/container#az-webapp-config-container-set) with the `--multicontainer-config-file` parameter. If your Docker Compose file contains private images, **add** `--docker-registry-server-*` parameters as shown in the previous example.
+
+```azurecli-interactive
+az webapp config container set --resource-group <group-name> --name <app-name> --multicontainer-config-file <docker-compose-file>
+```
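For reference, a minimal Docker Compose file to pass in `--multicontainer-config-file` could look like this (an illustrative sketch; the image name is a placeholder):

```yaml
version: '3.3'

services:
  web:
    image: <registry-name>.azurecr.io/<image>:<tag>
    ports:
      - "8080:80"
```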
+
+To configure CI/CD from the container registry to your app, **run** [az webapp deployment container config](/cli/azure/webapp/deployment/container#az-webapp-deployment-container-config) with the `--enable-cd` parameter. The command outputs the webhook URL, but you must create the webhook in your registry manually in a separate step. The following example enables CI/CD on your app, then uses the webhook URL in the output to create the webhook in Azure Container Registry.
```azurecli-interactive
-az webapp deployment container config --name <app-name> --resource-group <group-name> --enable-cd true
+ci_cd_url=$(az webapp deployment container config --name <app-name> --resource-group <group-name> --enable-cd true --query CI_CD_URL --output tsv)
+
+az acr webhook create --name <webhook-name> --registry <registry-name> --resource-group <group-name> --actions push --uri $ci_cd_url --scope '<image>:<tag>'
```
-## Next steps
+## More resources
* [Azure Container Registry](https://azure.microsoft.com/services/container-registry/)
-* [Create a .NET Core web app in App Service on Linux](quickstart-dotnetcore.md?pivots=platform-linux)
-* [Create a Ruby web app in App Service on Linux](quickstart-ruby.md)
-* [Quickstart: Run a custom container on App Service](quickstart-custom-container.md?pivots=container-linux)
+* [Create a .NET Core web app in App Service on Linux](quickstart-dotnetcore.md)
+* [Quickstart: Run a custom container on App Service](quickstart-custom-container.md)
* [App Service on Linux FAQ](faq-app-service-linux.md)
-* [Configure custom Linux containers](configure-custom-container.md)
+* [Configure custom containers](configure-custom-container.md)
+* [Actions workflows to deploy to Azure](https://github.com/Azure/actions-workflow-samples)
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-container-github-action.md
- Title: Custom container CI/CD from GitHub Actions
-description: Learn how to use GitHub Actions to deploy your custom Linux container to App Service from a CI/CD pipeline.
- Previously updated : 12/04/2020
-# Deploy a custom container to App Service using GitHub Actions
-
-[GitHub Actions](https://docs.github.com/en/actions) gives you the flexibility to build an automated software development workflow. With the [Azure Web Deploy action](https://github.com/Azure/webapps-deploy), you can automate your workflow to deploy custom containers to [App Service](overview.md) using GitHub Actions.
-
-A workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that are in the workflow.
-
-For an Azure App Service container workflow, the file has three sections:
-
-|Section |Tasks |
-|||
-|**Authentication** | 1. Retrieve a service principal or publish profile. <br /> 2. Create a GitHub secret. |
-|**Build** | 1. Create the environment. <br /> 2. Build the container image. |
-|**Deploy** | 1. Deploy the container image. |
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-- A GitHub account. If you don't have one, sign up for [free](https://github.com/join). You need to have code in a GitHub repository to deploy to Azure App Service.
-- A working container registry and Azure App Service app for containers. This example uses Azure Container Registry. Make sure to complete the full deployment to Azure App Service for containers. Unlike regular web apps, web apps for containers do not have a default landing page. Publish the container to have a working example.
- - [Learn how to create a containerized Node.js application using Docker, push the container image to a registry, and then deploy the image to Azure App Service](/azure/developer/javascript/tutorial-vscode-docker-node-01)
-
-## Generate deployment credentials
-
-The recommended way to authenticate with Azure App Services for GitHub Actions is with a publish profile. You can also authenticate with a service principal but the process requires more steps.
-
-Save your publish profile credential or service principal as a [GitHub secret](https://docs.github.com/en/actions/reference/encrypted-secrets) to authenticate with Azure. You'll access the secret within your workflow.
-
-# [Publish profile](#tab/publish-profile)
-
-A publish profile is an app-level credential. Set up your publish profile as a GitHub secret.
-
-1. Go to your app service in the Azure portal.
-
-1. On the **Overview** page, select **Get Publish profile**.
-
- > [!NOTE]
- > As of October 2020, Linux web apps will need the app setting `WEBSITE_WEBDEPLOY_USE_SCM` set to `true` **before downloading the file**. This requirement will be removed in the future. See [Configure an App Service app in the Azure portal](./configure-common.md), to learn how to configure common web app settings.
-
-1. Save the downloaded file. You'll use the contents of the file to create a GitHub secret.
-
-# [Service principal](#tab/service-principal)
-
-You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
-
-```azurecli-interactive
-az ad sp create-for-rbac --name "myApp" --role contributor \
- --scopes /subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.Web/sites/<app-name> \
- --sdk-auth
-```
-
-In the example, replace the placeholders with your subscription ID, resource group name, and app name. The output is a JSON object with the role assignment credentials that provide access to your App Service app. Copy this JSON object for later.
-
-```output
- {
- "clientId": "<GUID>",
- "clientSecret": "<GUID>",
- "subscriptionId": "<GUID>",
- "tenantId": "<GUID>",
- (...)
- }
-```
-
-> [!IMPORTANT]
-> It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific App Service app and not the entire resource group.
--
-## Configure the GitHub secret for authentication
-
-# [Publish profile](#tab/publish-profile)
-
-In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
-
-To use [app-level credentials](#generate-deployment-credentials), paste the contents of the downloaded publish profile file into the secret's value field. Name the secret `AZURE_WEBAPP_PUBLISH_PROFILE`.
-
-When you configure your GitHub workflow, you use the `AZURE_WEBAPP_PUBLISH_PROFILE` in the deploy Azure Web App action. For example:
-
-```yaml
-- uses: azure/webapps-deploy@v2
- with:
- publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
-```
-
-# [Service principal](#tab/service-principal)
-
-In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
-
-To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret a name like `AZURE_CREDENTIALS`.
-
-When you configure the workflow file later, you use the secret for the input `creds` of the Azure Login action. For example:
-
-```yaml
-- uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
-```
---
-## Configure GitHub secrets for your registry
-
-Define secrets to use with the Docker Login action. The example in this document uses Azure Container Registry for the container registry.
-
-1. Go to your container in the Azure portal or Docker and copy the username and password. You can find the Azure Container Registry username and password in the Azure portal under **Settings** > **Access keys** for your registry.
-
-2. Define a new secret for the registry username named `REGISTRY_USERNAME`.
-
-3. Define a new secret for the registry password named `REGISTRY_PASSWORD`.
-
-## Build the Container image
-
-The following example shows part of the workflow that builds a Node.js Docker image. Use [Docker Login](https://github.com/azure/docker-login) to log in to a private container registry. This example uses Azure Container Registry, but the same action works for other registries.
--
-```yaml
-name: Linux Container Node Workflow
-
-on: [push]
-
-jobs:
- build:
- runs-on: ubuntu-latest
-
- steps:
- - uses: actions/checkout@v2
- - uses: azure/docker-login@v1
- with:
- login-server: mycontainer.azurecr.io
- username: ${{ secrets.REGISTRY_USERNAME }}
- password: ${{ secrets.REGISTRY_PASSWORD }}
- - run: |
- docker build . -t mycontainer.azurecr.io/myapp:${{ github.sha }}
- docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
-```
-
-You can also use [Docker Login](https://github.com/azure/docker-login) to log in to multiple container registries at the same time. This example includes two new GitHub secrets for authentication with docker.io. The example assumes that there is a Dockerfile at the root level of the repository.
-
-```yml
-name: Linux Container Node Workflow
-
-on: [push]
-
-jobs:
- build:
- runs-on: ubuntu-latest
-
- steps:
- - uses: actions/checkout@v2
- - uses: azure/docker-login@v1
- with:
- login-server: mycontainer.azurecr.io
- username: ${{ secrets.REGISTRY_USERNAME }}
- password: ${{ secrets.REGISTRY_PASSWORD }}
- - uses: azure/docker-login@v1
- with:
- login-server: index.docker.io
- username: ${{ secrets.DOCKERIO_USERNAME }}
- password: ${{ secrets.DOCKERIO_PASSWORD }}
- - run: |
- docker build . -t mycontainer.azurecr.io/myapp:${{ github.sha }}
- docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
-```
-
-## Deploy to an App Service container
-
-To deploy your image to a custom container in App Service, use the `azure/webapps-deploy@v2` action. This action has seven parameters:
-
-| **Parameter** | **Explanation** |
-|||
-| **app-name** | (Required) Name of the App Service app |
-| **publish-profile** | (Optional) Applies to Web Apps (Windows and Linux) and Web App Containers (Linux). The multi-container scenario is not supported. Publish profile (\*.publishsettings) file contents with Web Deploy secrets |
-| **slot-name** | (Optional) Enter an existing slot other than the production slot |
-| **package** | (Optional) Applies to Web App only: Path to package or folder. \*.zip, \*.war, \*.jar or a folder to deploy |
-| **images** | (Required) Applies to Web App Containers only: Specify the fully qualified container image name(s). For example, 'myregistry.azurecr.io/nginx:latest' or 'python:3.7.2-alpine'. For a multi-container app, multiple container image names can be provided (multi-line separated) |
-| **configuration-file** | (Optional) Applies to Web App Containers only: Path of the Docker Compose file. Should be a fully qualified path or relative to the default working directory. Required for multi-container apps. |
-| **startup-command** | (Optional) Enter the startup command. For example, `dotnet run` or `dotnet filename.dll` |
-
-# [Publish profile](#tab/publish-profile)
-
-```yaml
-name: Linux Container Node Workflow
-
-on: [push]
-
-jobs:
- build:
- runs-on: ubuntu-latest
-
- steps:
- - uses: actions/checkout@v2
-
- - uses: azure/docker-login@v1
- with:
- login-server: mycontainer.azurecr.io
- username: ${{ secrets.REGISTRY_USERNAME }}
- password: ${{ secrets.REGISTRY_PASSWORD }}
-
- - run: |
- docker build . -t mycontainer.azurecr.io/myapp:${{ github.sha }}
- docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
-
- - uses: azure/webapps-deploy@v2
- with:
- app-name: 'myapp'
- publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
- images: 'mycontainer.azurecr.io/myapp:${{ github.sha }}'
-```
-# [Service principal](#tab/service-principal)
-
-```yaml
-on: [push]
-
-name: Linux_Container_Node_Workflow
-
-jobs:
- build-and-deploy:
- runs-on: ubuntu-latest
- steps:
- # checkout the repo
- - name: 'Checkout GitHub Action'
- uses: actions/checkout@main
-
- - name: 'Login via Azure CLI'
- uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
-
- - uses: azure/docker-login@v1
- with:
- login-server: mycontainer.azurecr.io
- username: ${{ secrets.REGISTRY_USERNAME }}
- password: ${{ secrets.REGISTRY_PASSWORD }}
- - run: |
- docker build . -t mycontainer.azurecr.io/myapp:${{ github.sha }}
- docker push mycontainer.azurecr.io/myapp:${{ github.sha }}
-
- - uses: azure/webapps-deploy@v2
- with:
- app-name: 'myapp'
- images: 'mycontainer.azurecr.io/myapp:${{ github.sha }}'
-
- - name: Azure logout
- run: |
- az logout
-```
---
-## Next steps
-
-You can find our set of Actions grouped into different repositories on GitHub, each one containing documentation and examples to help you use GitHub for CI/CD and deploy your apps to Azure.
-- [Actions workflows to deploy to Azure](https://github.com/Azure/actions-workflow-samples)
-- [Azure login](https://github.com/Azure/login)
-- [Azure WebApp](https://github.com/Azure/webapps-deploy)
-- [Docker login/logout](https://github.com/Azure/docker-login)
-- [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows)
-- [K8s deploy](https://github.com/Azure/k8s-deploy)
-- [Starter Workflows](https://github.com/actions/starter-workflows)
app-service Deploy Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-continuous-deployment.md
Title: Configure continuous deployment
description: Learn how to enable CI/CD to Azure App Service from GitHub, BitBucket, Azure Repos, or other repos. Select the build pipeline that fits your needs. ms.assetid: 6adb5c84-6cf3-424e-a336-c554f23b4000 Previously updated : 03/03/2021 Last updated : 03/12/2021
You can customize the GitHub Actions build provider in the following ways:
#### Authenticate with a service principal
+This optional configuration replaces the default authentication with publishing profiles in the generated workflow file.
+ 1. Generate a service principal with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). In the following example, replace *\<subscription-id>*, *\<group-name>*, and *\<app-name>* with your own values: ```azurecli-interactive
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Connected Machine Windows agent
+ Title: Overview of the Connected Machine agent
description: This article provides a detailed overview of the Azure Arc enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 02/18/2021 Last updated : 03/15/2021
The following versions of the Windows and Linux operating system are officially
### Required permissions
-* To onboard machines, you are a member of the **Azure Connected Machine Onboarding** role.
+* To onboard machines, you are a member of the **Azure Connected Machine Onboarding** or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
-* To read, modify, and delete a machine, you are a member of the **Azure Connected Machine Resource Administrator** role.
+* To read, modify, and delete a machine, you are a member of the **Azure Connected Machine Resource Administrator** role in the resource group.
+
+* To select a resource group from the drop-down list when using the **Generate script** method, at a minimum you are a member of the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group.
### Azure subscription and service limits
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc enabled servers agent description: This article has release notes for Azure Arc enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 12/21/2020 Last updated : 03/15/2021 # What's new with Azure Arc enabled servers agent
The Azure Arc enabled servers Connected Machine agent receives improvements on a
- Known issues - Bug fixes
+## March 2021
+
+Version: 1.4
+
+### New feature
+
+- Added support for private endpoints.
+- Expanded list of exit codes for azcmagent.
+- Agent configuration parameters can now be read from a file with the --config parameter.
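The new `--config` option might be used as sketched below; the file name and key names here are assumptions for illustration only (the agent documentation describes the exact configuration file format), and the `azcmagent` call is printed as a dry run rather than executed:

```shell
# Hypothetical configuration file -- the key names are assumptions, not a verified schema.
cat > myconfig.json <<'EOF'
{
  "resource-group": "Arc-Servers-RG",
  "location": "eastus"
}
EOF

# Dry run: the command is only printed. Remove the echo to actually connect the machine.
echo azcmagent connect --config myconfig.json
```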
+
+### Fixed
+
+Network endpoint checks are now faster.
+ ## December 2020 Version: 1.3 ### New feature
-Added support for Windows Server 2008 R2
+Added support for Windows Server 2008 R2.
### Fixed
Version: 1.1
- Fixed proxy script to handle alternate GC daemon unit file location. - GuestConfig agent reliability changes. - GuestConfig agent support for US Gov Virginia region.-- GuestConfig agent extension report messages to be more verbose in case of failures.
+- GuestConfig agent extension report messages are now more verbose if there is a failure.
## September 2020
azure-functions Durable Functions Unit Testing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-unit-testing.md
Last updated 11/03/2019
Unit testing is an important part of modern software development practices. Unit tests verify business logic behavior and protect from introducing unnoticed breaking changes in the future. Durable Functions can easily grow in complexity so introducing unit tests will help to avoid breaking changes. The following sections explain how to unit test the three function types - Orchestration client, orchestrator, and activity functions. > [!NOTE]
-> This article provides guidance for unit testing for Durable Functions apps targeting Durable Functions 1.x. It has not yet been updated to account for changes introduced in Durable Functions 2.x. For more information about the differences between versions, see the [Durable Functions versions](durable-functions-versions.md) article.
+> This article provides guidance for unit testing for Durable Functions apps targeting Durable Functions 2.x. For more information about the differences between versions, see the [Durable Functions versions](durable-functions-versions.md) article.
## Prerequisites
The examples in this article require knowledge of the following concepts and fra
## Base classes for mocking
-Mocking is supported via three abstract classes in Durable Functions 1.x:
+Mocking is supported via the following interfaces:
-* `DurableOrchestrationClientBase`
+* [IDurableOrchestrationClient](/dotnet/api/microsoft.azure.webjobs.IDurableOrchestrationClient), [IDurableEntityClient](/dotnet/api/microsoft.azure.webjobs.IDurableEntityClient), and [IDurableClient](/dotnet/api/microsoft.azure.webjobs.IDurableClient)
-* `DurableOrchestrationContextBase`
+* [IDurableOrchestrationContext](/dotnet/api/microsoft.azure.webjobs.IDurableOrchestrationContext)
-* `DurableActivityContextBase`
+* [IDurableActivityContext](/dotnet/api/microsoft.azure.webjobs.IDurableActivityContext)
+
+* [IDurableEntityContext](/dotnet/api/microsoft.azure.webjobs.IDurableEntityContext)
-These classes are base classes for `DurableOrchestrationClient`, `DurableOrchestrationContext`, and `DurableActivityContext` that define Orchestration Client, Orchestrator, and Activity methods. The mocks will set expected behavior for base class methods so the unit test can verify the business logic. There is a two-step workflow for unit testing the business logic in the Orchestration Client and Orchestrator:
-
-1. Use the base classes instead of the concrete implementation when defining orchestration client and orchestrator function signatures.
-2. In the unit tests mock the behavior of the base classes and verify the business logic.
-
-Find more details in the following paragraphs for testing functions that use the orchestration client binding and the orchestrator trigger binding.
+These interfaces can be used with the various triggers and bindings supported by Durable Functions. When executing your Azure Functions, the functions runtime runs your function code with a concrete implementation of these interfaces. For unit testing, you can pass in a mocked version of these interfaces to test your business logic.
## Unit testing trigger functions
In this section, the unit test will validate the logic of the following HTTP tri
[!code-csharp[Main](~/samples-durable-functions/samples/precompiled/HttpStart.cs)]
-The unit test task will be to verify the value of the `Retry-After` header provided in the response payload. So the unit test will mock some of `DurableOrchestrationClientBase` methods to ensure predictable behavior.
+The unit test task will be to verify the value of the `Retry-After` header provided in the response payload. So the unit test will mock some of `IDurableClient` methods to ensure predictable behavior.
-First, a mock of the base class is required, `DurableOrchestrationClientBase`. The mock can be a new class that implements `DurableOrchestrationClientBase`. However, using a mocking framework like [moq](https://github.com/moq/moq4) simplifies the process:
+First, we use a mocking framework ([moq](https://github.com/moq/moq4) in this case) to mock `IDurableClient`:
```csharp
- // Mock DurableOrchestrationClientBase
- var durableOrchestrationClientBaseMock = new Mock<DurableOrchestrationClientBase>();
+// Mock IDurableClient
+var durableClientMock = new Mock<IDurableClient>();
```
+> [!NOTE]
+> While you can mock interfaces by implementing them directly as classes, mocking frameworks simplify the process in various ways. For instance, if a new method is added to the interface in a minor release, moq will not require any code changes, unlike concrete implementations.
+ Then the `StartNewAsync` method is mocked to return a well-known instance ID. ```csharp
- // Mock StartNewAsync method
- durableOrchestrationClientBaseMock.
- Setup(x => x.StartNewAsync(functionName, It.IsAny<object>())).
- ReturnsAsync(instanceId);
+// Mock StartNewAsync method
+durableClientMock.
+ Setup(x => x.StartNewAsync(functionName, It.IsAny<object>())).
+ ReturnsAsync(instanceId);
``` Next `CreateCheckStatusResponse` is mocked to always return an empty HTTP 200 response. ```csharp
- // Mock CreateCheckStatusResponse method
- durableOrchestrationClientBaseMock
- .Setup(x => x.CreateCheckStatusResponse(It.IsAny<HttpRequestMessage>(), instanceId))
- .Returns(new HttpResponseMessage
+// Mock CreateCheckStatusResponse method
+durableClientMock
+ // Notice that even though the HttpStart function does not call IDurableClient.CreateCheckStatusResponse()
+ // with the optional parameter returnInternalServerErrorOnFailure, moq requires the method to be set up
+ // with each of the optional parameters provided. Simply use It.IsAny<> for each optional parameter
+ .Setup(x => x.CreateCheckStatusResponse(It.IsAny<HttpRequestMessage>(), instanceId, returnInternalServerErrorOnFailure: It.IsAny<bool>()))
+ .Returns(new HttpResponseMessage
+ {
+ StatusCode = HttpStatusCode.OK,
+ Content = new StringContent(string.Empty),
+ Headers =
{
- StatusCode = HttpStatusCode.OK,
- Content = new StringContent(string.Empty),
- Headers =
- {
- RetryAfter = new RetryConditionHeaderValue(TimeSpan.FromSeconds(10))
- }
- });
+ RetryAfter = new RetryConditionHeaderValue(TimeSpan.FromSeconds(10))
+ }
+ });
``` `ILogger` is also mocked: ```csharp
- // Mock ILogger
- var loggerMock = new Mock<ILogger>();
+// Mock ILogger
+var loggerMock = new Mock<ILogger>();
``` Now the `Run` method is called from the unit test: ```csharp
- // Call Orchestration trigger function
- var result = await HttpStart.Run(
- new HttpRequestMessage()
- {
- Content = new StringContent("{}", Encoding.UTF8, "application/json"),
- RequestUri = new Uri("http://localhost:7071/orchestrators/E1_HelloSequence"),
- },
- durableOrchestrationClientBaseMock.Object,
- functionName,
- loggerMock.Object);
+// Call Orchestration trigger function
+var result = await HttpStart.Run(
+ new HttpRequestMessage()
+ {
+ Content = new StringContent("{}", Encoding.UTF8, "application/json"),
+ RequestUri = new Uri("http://localhost:7071/orchestrators/E1_HelloSequence"),
+ },
+ durableClientMock.Object,
+ functionName,
+ loggerMock.Object);
``` The last step is to compare the output with the expected value: ```csharp
- // Validate that output is not null
- Assert.NotNull(result.Headers.RetryAfter);
+// Validate that output is not null
+Assert.NotNull(result.Headers.RetryAfter);
- // Validate output's Retry-After header value
- Assert.Equal(TimeSpan.FromSeconds(10), result.Headers.RetryAfter.Delta);
+// Validate output's Retry-After header value
+Assert.Equal(TimeSpan.FromSeconds(10), result.Headers.RetryAfter.Delta);
``` After combining all steps, the unit test will have the following code:
In this section the unit tests will validate the output of the `E1_HelloSequence
The unit test code will start with creating a mock: ```csharp
- var durableOrchestrationContextMock = new Mock<DurableOrchestrationContextBase>();
+var durableOrchestrationContextMock = new Mock<IDurableOrchestrationContext>();
``` Then the activity method calls will be mocked: ```csharp
- durableOrchestrationContextMock.Setup(x => x.CallActivityAsync<string>("E1_SayHello", "Tokyo")).ReturnsAsync("Hello Tokyo!");
- durableOrchestrationContextMock.Setup(x => x.CallActivityAsync<string>("E1_SayHello", "Seattle")).ReturnsAsync("Hello Seattle!");
- durableOrchestrationContextMock.Setup(x => x.CallActivityAsync<string>("E1_SayHello", "London")).ReturnsAsync("Hello London!");
+durableOrchestrationContextMock.Setup(x => x.CallActivityAsync<string>("E1_SayHello", "Tokyo")).ReturnsAsync("Hello Tokyo!");
+durableOrchestrationContextMock.Setup(x => x.CallActivityAsync<string>("E1_SayHello", "Seattle")).ReturnsAsync("Hello Seattle!");
+durableOrchestrationContextMock.Setup(x => x.CallActivityAsync<string>("E1_SayHello", "London")).ReturnsAsync("Hello London!");
``` Next the unit test will call `HelloSequence.Run` method: ```csharp
- var result = await HelloSequence.Run(durableOrchestrationContextMock.Object);
+var result = await HelloSequence.Run(durableOrchestrationContextMock.Object);
``` And finally the output will be validated: ```csharp
- Assert.Equal(3, result.Count);
- Assert.Equal("Hello Tokyo!", result[0]);
- Assert.Equal("Hello Seattle!", result[1]);
- Assert.Equal("Hello London!", result[2]);
+Assert.Equal(3, result.Count);
+Assert.Equal("Hello Tokyo!", result[0]);
+Assert.Equal("Hello Seattle!", result[1]);
+Assert.Equal("Hello London!", result[2]);
``` After combining all steps, the unit test will have the following code:
In this section the unit test will validate the behavior of the `E1_SayHello` Ac
[!code-csharp[Main](~/samples-durable-functions/samples/precompiled/HelloSequence.cs)]
+And the unit tests will verify the format of the output. The unit tests can use the parameter types directly or mock the `IDurableActivityContext` interface:
+And the unit tests will verify the format of the output. The unit tests can use the parameter types directly or mock `IDurableActivityContext` class:
[!code-csharp[Main](~/samples-durable-functions/samples/VSSample.Tests/HelloSequenceActivityTests.cs)]
azure-functions Functions Create Function App Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-app-portal.md
Next, create a function in the new function app.
1. From the left menu of the **Functions** window, select **Functions**, then select **Add** from the top menu.
-1. From the **New Function** window, select **Http trigger**.
+1. From the **Add Function** window, select the **Http trigger** template.
![Choose HTTP trigger function](./media/functions-create-first-azure-function/function-app-select-http-trigger.png)
-1. In the **New Function** window, accept the default name for **New Function**, or enter a new name.
-
-1. Choose **Anonymous** from the **Authorization level** drop-down list, and then select **Create Function**.
+1. Under **Template details** use `HttpExample` for **New Function**, choose **Anonymous** from the **[Authorization level](functions-bindings-http-webhook-trigger.md#authorization-keys)** drop-down list, and then select **Add**.
Azure creates the HTTP trigger function. Now, you can run the new function by sending an HTTP request.
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
else
New-AzureADServiceAppRoleAssignment -Id $myApp.AppRoles[0].Id -ResourceId $myServicePrincipal.ObjectId -ObjectId $actionGroupsSP.ObjectId -PrincipalId $actionGroupsSP.ObjectId
-Write-Host "My Azure AD Application ($myApp.ObjectId): " + $myApp.ObjectId
+Write-Host "My Azure AD Application (ObjectId): $($myApp.ObjectId)"
Write-Host "My Azure AD Application's Roles" Write-Host $myApp.AppRoles ```
azure-monitor Alerts Action Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-action-rules.md
Title: Action rules for Azure Monitor alerts description: Understanding what action rules in Azure Monitor are and how to configure and manage them. Previously updated : 04/25/2019 Last updated : 03/15/2021
The available filters are:
* **Severity** This rule will apply only to alerts with the selected severities.
-For example, **Severity = Sev1** means that the rule will apply only to alerts with Sev1 severity.
-* **Monitor Service**
+For example, **severity = Sev1** means that the rule will apply only to alerts with Sev1 severity.
+* **Monitor service**
This rule will apply only to alerts coming from the selected monitoring services.
-For example, **Monitor Service = "Azure Backup"** means that the rule will apply only to backup alerts (coming from Azure Backup).
-* **Resource Type**
+For example, **monitor service = "Azure Backup"** means that the rule will apply only to backup alerts (coming from Azure Backup).
+* **Resource type**
This rule will apply only to alerts on the selected resource types.
-For example, **Resource Type = "Virtual Machines"** means that the rule will apply only to alerts on virtual machines.
-* **Alert Rule ID**
+For example, **resource type = "Virtual Machines"** means that the rule will apply only to alerts on virtual machines.
+* **Alert rule ID**
This rule will apply only to alerts coming from a specific alert rule. The value should be the Resource Manager ID of the alert rule.
-For example, **Alert Rule ID = "/subscriptions/SubId1/resourceGroups/ResourceGroup1/providers/microsoft.insights/metricalerts/API-Latency"** means this rule will apply only to alerts coming from "API-Latency" metric alert rule.
-* **Monitor Condition**
-This rule will apply only to alert events with the specified monitor condition - either **Fired** or **Resolved**.
+For example, **alert rule ID = "/subscriptions/SubId1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/API-Latency"** means this rule will apply only to alerts coming from "API-Latency" metric alert rule.
+_NOTE - you can get the proper alert rule ID by listing your alert rules from the CLI, or by opening a specific alert rule in the portal, clicking "Properties", and copying the "Resource ID" value._
+* **Monitor condition**
+This rule will apply only to alert events with the specified monitor condition - either **Fired** or **Resolved**.
* **Description** This rule will apply only to alerts that contains a specific string in the alert description field. That field contains the alert rule description.
-For example, **Description contains 'prod'** means that the rule will only match alerts that contain the string "prod" in their description.
-* **Alert Context (payload)**
+For example, **description contains 'prod'** means that the rule will only match alerts that contain the string "prod" in their description.
+* **Alert context (payload)**
This rule will apply only to alerts that contain any of one or more specific values in the alert context fields.
-For example, **Alert context (payload) contains 'Computer-01'** means that the rule will only apply to alerts whose payload contain the string "Computer-01".
+For example, **alert context (payload) contains 'Computer-01'** means that the rule will only apply to alerts whose payload contain the string "Computer-01".
-If you set multiple filters in a rule, all of them apply. For example, if you set **Resource type' = Virtual Machines** and **Severity' = Sev0**, then the rule will apply only for Sev0 alerts on virtual machines.
+If you set multiple filters in a rule, all of them apply. For example, if you set **resource type = Virtual Machines** and **severity = Sev0**, then the rule will apply only for Sev0 alerts on virtual machines.
![Action rule filters](media/alerts-action-rules/action-rules-new-rule-creation-flow-filters.png)
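The note above mentions getting the alert rule ID by listing your alert rules from the CLI; for metric alert rules, a sketch with a placeholder resource group name (the command is printed as a dry run rather than executed):

```shell
RESOURCE_GROUP="ResourceGroup1"   # placeholder name

# List the Resource Manager IDs of the metric alert rules in the resource group.
# Dry run: remove the echo to execute.
CMD="az monitor metrics alert list --resource-group $RESOURCE_GROUP --query [].id --output tsv"
echo "$CMD"
```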
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 01/21/2021 Last updated : 03/15/2021 # Troubleshooting problems in Azure Monitor metric alerts
To avoid having the deployment fail when trying to validate the custom metric'
## Export the Azure Resource Manager template of a metric alert rule via the Azure portal Exporting the Resource Manager template of a metric alert rule helps you understand its JSON syntax and properties, and can be used to automate future deployments.
-1. Navigate to the **Resource Groups** section in the portal, and select the resource group containing the rule.
-2. In the Overview section, check the **Show hidden types** checkbox.
-3. In the **Type** filter, select *microsoft.insights/metricalerts*.
-4. Select the relevant alert rule to view its details.
-5. Under **Settings**, select **Export template**.
+1. In the Azure portal, open the alert rule to view its details.
+2. Click **Properties**.
+3. Under **Automation**, select **Export template**.
## Metric alert rules quota too small
When the lower bound has a negative value, this means that it's plausible for th
## Next steps -- For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
+- For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
azure-monitor Itsmc Dashboard Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-dashboard-errors.md
The following sections describe common errors that appear in the connector statu
* When a new ITSMC instance is created, it starts syncing information from the ITSM system, such as work item templates and work items. [Sync ITSMC to generate a new refresh token](./itsmc-resync-servicenow.md). * [Review your connection details in ITSMC](./itsmc-connections-servicenow.md#create-a-connection) and check that ITSMC can successfully [sync](./itsmc-resync-servicenow.md).
+## IP restrictions
+**Error**: "Failed to add ITSM Connection named "XXX" due to Bad Request. Error: Bad request. Invalid parameters provided for connection. Http Exception: Status Code Forbidden."
+
+**Cause**: The partner ITSM tool does not allow incoming ITSM connections from the connector's IP addresses.
+
+**Resolution**: To allow ITSM connections from partner ITSM tools, we recommend allowing the whole public IP range of the Azure region where your Log Analytics workspace belongs ([details here](https://www.microsoft.com/download/details.aspx?id=56519)). For the EUS/WEU/EUS2/WUS2/US South Central regions, you can allow the ActionGroup network tag only.
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-getting-started.md
By default, the chart shows the most recent 24 hours of metrics data. Use the **
See [examples of the charts](../essentials/metric-chart-samples.md) that have filtering and splitting applied. The article shows the steps were used to configure the charts.
+## Share your metric chart
+There are currently two ways to share your metric chart. Below are the instructions on how to share information from your metrics charts through Excel and a link.
+
+### Download to Excel
+Select **Share** and then **Download to Excel**. Your download should start immediately.
+
+![screenshot on how to share metric chart via excel](./media/metrics-getting-started/share-excel.png)
+
+### Share a link
+Select **Share** and then **Copy link**. You should get a notification that the link was copied successfully.
+
+![screenshot on how to share metric chart via link](./media/metrics-getting-started/share-link.png)
+ ## Advanced chart settings You can customize chart style, title, and modify advanced chart settings. When done with customization, pin it to a dashboard to save your work. You can also configure metrics alerts. Follow [product documentation](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
You can customize chart style, title, and modify advanced chart settings. When d
* [Viewing multiple resources in Metrics Explorer](./metrics-dynamic-scope.md) * [Troubleshooting Metrics Explorer](metrics-troubleshoot.md) * [See a list of available metrics for Azure services](./metrics-supported.md)
-* [See examples of configured charts](../essentials/metric-chart-samples.md)
+* [See examples of configured charts](../essentials/metric-chart-samples.md)
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-troubleshoot.md
+
+ Title: Troubleshoot VM insights
+description: Troubleshoot VM insights installation.
+ Last updated : 03/15/2021
+# Troubleshoot VM insights
+This article provides troubleshooting information for when you have problems enabling or using VM insights.
+
+## Cannot enable VM insights on a machine
+When onboarding an Azure virtual machine from the Azure portal, the following steps occur:
+
+- A default Log Analytics workspace is created if that option was selected.
+- The Log Analytics agent is installed on Azure virtual machines using a VM extension if the agent isn't already installed.
+- The Dependency agent is installed on Azure virtual machines using an extension, if it's determined to be required.
+
+During the onboarding process, each of these steps is verified and a notification status is shown in the portal. Configuration of the workspace and the agent installation typically takes 5 to 10 minutes. It will take another 5 to 10 minutes for data to become available to view in the portal.
+
+If you receive a message that the virtual machine needs to be onboarded after you've performed the onboarding process, allow for up to 30 minutes for the process to be completed. If the issue persists, then see the following sections for possible causes.
+
+### Is the virtual machine running?
+ If the virtual machine has been turned off for a while, is off currently, or was only recently turned on, then you won't have any data to display until new data arrives.
+
+### Is the operating system supported?
+If the operating system is not in the list of [supported operating systems](vminsights-enable-overview.md#supported-operating-systems), the extension will fail to install and you will see a message that VM insights is waiting for data to arrive.
+
+### Did the extension install properly?
+If you still see a message that the virtual machine needs to be onboarded, it may mean that one or both of the extensions failed to install correctly. Check the **Extensions** page for your virtual machine in the Azure portal to verify that the following extensions are listed.
+
+| Operating system | Agents |
+|:|:|
+| Windows | MicrosoftMonitoringAgent<br>Microsoft.Azure.Monitoring.DependencyAgent |
+| Linux | OMSAgentForLinux<br>DependencyAgentForLinux |
+
+If you do not see both extensions for your operating system in the list of installed extensions, then they need to be installed. If the extensions are listed but their status does not appear as *Provisioning succeeded*, the extension should be removed and reinstalled.
+
+### Do you have connectivity issues?
+For Windows machines, you can use the *TestCloudConnectivity* tool to identify connectivity issues. This tool is installed by default with the agent in the folder *%SystemRoot%\Program Files\Microsoft Monitoring Agent\Agent*. Run the tool from an elevated command prompt. It will return results and highlight where the test fails.
+
+![TestCloudConnectivity tool](media/vminsights-troubleshoot/test-cloud-connectivity.png)
+
+### More agent troubleshooting
+
+See the following articles for troubleshooting issues with the Log Analytics agent:
+
+- [How to troubleshoot issues with the Log Analytics agent for Windows](../agents/agent-windows-troubleshoot.md)
+- [How to troubleshoot issues with the Log Analytics agent for Linux](../agents/agent-linux-troubleshoot.md)
+
+## Performance view has no data
+If the agents appear to be installed correctly but you don't see any data in the Performance view, then see the following sections for possible causes.
+
+### Has your Log Analytics workspace reached its data limit?
+Check the [capacity reservations and the pricing for data ingestion](https://azure.microsoft.com/pricing/details/monitor/).
+
+### Is your virtual machine sending log and performance data to Azure Monitor Logs?
+
+Open Log Analytics from **Logs** in the Azure Monitor menu in the Azure portal. Run the following query for your computer:
+
+```kusto
+Usage
+| where Computer == "my-computer"
+| summarize sum(Quantity), any(QuantityUnit) by DataType
+```
+
+If you don't see any data, then you may have problems with your agent. See the section above for agent troubleshooting information.
+
+## Virtual machine doesn't appear in map view
+
+### Is the Dependency agent installed?
+ Use the information in the sections above to determine if the Dependency agent is installed and working properly.
+
+### Are you on the Log Analytics free tier?
+The [Log Analytics free tier](https://azure.microsoft.com/pricing/details/monitor/) is a legacy pricing plan that allows for up to five unique Service Map machines. Any subsequent machines won't appear in Service Map, even if the prior five are no longer sending data.
+
+### Is your virtual machine sending log and performance data to Azure Monitor Logs?
+Use the log query in the [Performance view has no data](#performance-view-has-no-data) section to determine if data is being collected for the virtual machine. If no data is being collected, use the TestCloudConnectivity tool described above to determine if you have connectivity issues.
++
+## Virtual machine appears in map view but has missing data
+If the virtual machine is in the map view, then the Dependency agent is installed and running, but the kernel driver didn't load. Check the log file at the following locations:
+
+| Operating system | Log |
+|:|:|
+| Windows | C:\Program Files\Microsoft Dependency Agent\logs\wrapper.log |
+| Linux | /var/opt/microsoft/dependency-agent/log/service.log |
+
+The last lines of the file should indicate why the kernel driver didn't load. For example, on Linux the driver might not support your kernel version if you recently updated it.
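On Linux, for example, the last lines can be checked with `tail`; a sketch using the default path from the table above (adjust the path if your installation differs):

```shell
# Print the most recent Dependency agent log entries (default Linux path from the table above).
LOG=/var/opt/microsoft/dependency-agent/log/service.log
if [ -f "$LOG" ]; then
  tail -n 20 "$LOG"
else
  echo "log not found at $LOG"
fi
```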
+## Next steps
+
+- For details on onboarding VM insights agents, see [Enable VM insights overview](vminsights-enable-overview.md).
azure-percept Overview Azure Percept Audio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-audio.md
Azure Percept Audio is an accessory device that adds speech AI capabilities to the Azure Percept DK. It contains a preconfigured audio processor and a four-microphone linear array, enabling you to apply voice commanding, keyword spotting, and far field speech to local listening devices using Azure Cognitive Services. Azure Percept Audio enables device manufacturers to extend Azure Percept DK beyond vision capabilities to new, smart voice-activated devices. It is integrated out-of-the-box with Azure Percept DK, Azure Percept Studio, and other Azure edge management services. It is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
+> [!div class="nextstepaction"]
+> [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
+ :::image type="content" source="./media/overview-azure-percept-audio/percept-audio.png" alt-text="Azure Percept Audio device."::: ## Azure Percept Audio components
Build a [no-code speech solution](./tutorial-no-code-speech.md) in [Azure Percep
## Next steps
-Order an Azure Percept Audio device at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
+> [!div class="nextstepaction"]
+> [Buy an Azure Percept Audio device from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
azure-percept Overview Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-dk.md
Azure Percept DK is an edge AI and IoT development kit designed for developing vision and audio AI proof of concepts. When combined with [Azure Percept Studio](./overview-azure-percept-studio.md) and [Azure Percept Audio](./overview-azure-percept-audio.md), it becomes a powerful yet simple-to-use platform for building edge AI solutions for a wide range of vision or audio AI applications. It is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
+> [!div class="nextstepaction"]
+> [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
+ :::image type="content" source="./media/overview-azure-percept-dk/dk-image.png" alt-text="Azure Percept DK device."::: ## Key Features
Azure Percept DK is an edge AI and IoT development kit designed for developing v
## Next steps
-Order an Azure Percept DK at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
+> [!div class="nextstepaction"]
+> [Buy an Azure Percept DK from the Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
azure-percept Overview Azure Percept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept.md
The main components of Azure Percept are:
- A development kit that is flexible enough to support a wide variety of prototyping scenarios for device builders, solution builders and customers.
+ > [!div class="nextstepaction"]
+ > [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
+ 3. Services and workflows to accelerate edge AI model and solution development. - Development workflows and pre-built models accessible from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 09/02/2020 Last updated : 03/15/2021 # Azure subscription and service limits, quotas, and constraints
azure-resource-manager Resource Providers And Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-providers-and-types.md
Title: Resource providers and resource types description: Describes the resource providers that support Azure Resource Manager. It describes their schemas, available API versions, and the regions that can host the resources. Previously updated : 12/04/2020 Last updated : 03/15/2021
For a list that maps resource providers to Azure services, see [Resource provide
## Register resource provider
-Before using a resource provider, your Azure subscription must be registered for the resource provider. Registration configures your subscription to work with the resource provider. Some resource providers are registered by default. Other resource providers are registered automatically when you take certain actions. For example, when you create a resource through the portal, the resource provider is typically registered for you. For other scenarios, you may need to manually register a resource provider. For a list of resource providers registered by default, see [Resource providers for Azure services](azure-services-resource-providers.md).
+Before using a resource provider, your Azure subscription must be registered for the resource provider. Registration configures your subscription to work with the resource provider. Some resource providers are registered by default. For a list of resource providers registered by default, see [Resource providers for Azure services](azure-services-resource-providers.md).
+
+Other resource providers are registered automatically when you take certain actions. When you deploy an Azure Resource Manager template, all required resource providers are automatically registered. When you create a resource through the portal, the resource provider is typically registered for you. For other scenarios, you may need to manually register a resource provider.
This article shows you how to check the registration status of a resource provider, and register it as needed. You must have permission to do the `/register/action` operation for the resource provider. The permission is included in the Contributor and Owner roles.
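As a sketch of the check-and-register flow described here, using the Azure CLI (`Microsoft.Batch` is used purely as an example namespace; these commands require an authenticated Azure session):

```azurecli-interactive
# Check whether the resource provider is registered in the current subscription.
az provider show --namespace Microsoft.Batch --query "registrationState"

# Register it if the state is "NotRegistered" or "Unregistered".
az provider register --namespace Microsoft.Batch

# Registration is asynchronous; re-run the check until it shows "Registered".
az provider show --namespace Microsoft.Batch --query "registrationState"
```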
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-github-actions.md
You can create a [service principal](../../active-directory/develop/app-objects-
Create a resource group if you do not already have one.
```azurecli-interactive
- az group create -n {MyResourceGroup}
+ az group create -n {MyResourceGroup} -l {location}
```
Replace the placeholder `myApp` with the name of your application.
azure-signalr Signalr Concept Messages And Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-concept-messages-and-connections.md
If you have three clients and one application server, and one client sends a 4-KB me
There are server connections and client connections with Azure SignalR Service. By default, each application server starts with five initial connections per hub, and each client has one client connection.
-The connection count shown in the Azure portal includes both server connections and client connections.
- For example, assume that you have two application servers and you define five hubs in code. The server connection count will be 50: 2 app servers * 5 hubs * 5 connections per hub.
+The connection count shown in the Azure portal includes server connections, client connections, diagnostic connections, and live trace connections. The connection types are defined in the following list:
+
+- **Server connection**: Connects Azure SignalR Service and the app server.
+- **Client connection**: Connects Azure SignalR Service and the client app.
+- **Diagnostic connection**: A special kind of client connection that can produce a more detailed log, which might affect performance. This kind of client is designed for troubleshooting.
+- **Live trace connection**: Connects to the live trace endpoint and receives live traces of Azure SignalR Service.
+
+Note that a live trace connection isn't counted as a client connection or as a server connection.
+ ASP.NET SignalR calculates server connections in a different way. It includes one default hub in addition to hubs that you define. By default, each application server needs five more initial server connections. The initial connection count for the default hub stays consistent with other hubs. The service and the application server keep syncing connection status and making adjustments to server connections to get better performance and service stability. So you might see the server connection count change from time to time.
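The counting rules above can be sketched as quick arithmetic (an illustration of the formulas only; live counts also include diagnostic and other connections and vary as the service rebalances):

```python
# Server connection count: app servers x hubs x initial connections per hub.
app_servers = 2
hubs = 5
connections_per_hub = 5

# ASP.NET Core SignalR counts only the hubs you define.
core_server_connections = app_servers * hubs * connections_per_hub
print(core_server_connections)  # 50

# ASP.NET SignalR adds one default hub on top of the hubs you define.
aspnet_server_connections = app_servers * (hubs + 1) * connections_per_hub
print(aspnet_server_connections)  # 60
```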
A message sent into the service is an inbound message. A message sent out of the servic
- [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicesignalr ) - [ASP.NET Core SignalR configuration](/aspnet/core/signalr/configuration) - [JSON](https://www.json.org/)-- [MessagePack](/aspnet/core/signalr/messagepackhubprotocol)
+- [MessagePack](/aspnet/core/signalr/messagepackhubprotocol)
azure-sql Connect Query Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-python.md
Last updated 12/19/2020
# Quickstart: Use Python to query a database
+[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi-asa.md)]
In this quickstart, you use Python to connect to Azure SQL Database, Azure SQL Managed Instance, or Synapse SQL database and use T-SQL statements to query data.
To complete this quickstart, you need:
[!INCLUDE[create-configure-database](../includes/create-configure-database.md)] - [Python](https://python.org/downloads) 3 and related software
+
- # [macOS](#tab/macos)
-
- To install Homebrew and Python, the ODBC driver and SQLCMD, and the Python driver for SQL Server, use steps **1.2**, **1.3**, and **2.1** in [create Python apps using SQL Server on macOS](https://www.microsoft.com/sql-server/developer-get-started/python/mac/).
-
- For further information, see [Microsoft ODBC driver on macOS](/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server).
-
- # [Ubuntu](#tab/ubuntu)
-
- To install Python and other required packages, use `sudo apt-get install python python-pip gcc g++ build-essential`.
+ |**Action**|**macOS**|**Ubuntu**|**Windows**|
+ |-|-|-|-|
+ |Install the ODBC driver, SQLCMD, and the Python driver for SQL Server|Use steps **1.2**, **1.3**, and **2.1** in [create Python apps using SQL Server on macOS](https://www.microsoft.com/sql-server/developer-get-started/python/mac/). These steps also install Homebrew and Python.|[Configure an environment for pyodbc Python development](/sql/connect/python/pyodbc/step-1-configure-development-environment-for-pyodbc-python-development#linux)|[Configure an environment for pyodbc Python development](/sql/connect/python/pyodbc/step-1-configure-development-environment-for-pyodbc-python-development#windows)|
+ |Install Python and other required packages| |Use `sudo apt-get install python python-pip gcc g++ build-essential`.| |
+ |Further information|[Microsoft ODBC driver on macOS](/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server)|[Microsoft ODBC driver on Linux](/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server)|[Microsoft ODBC driver on Windows](/sql/connect/odbc/microsoft-odbc-driver-for-sql-server)|
- To install the ODBC driver, SQLCMD, and the Python driver for SQL Server, see [configure an environment for pyodbc Python development](/sql/connect/python/pyodbc/step-1-configure-development-environment-for-pyodbc-python-development#linux).
- For further information, see [Microsoft ODBC driver on Linux](/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server).
- # [Windows](#tab/windows)
-
- To install Python, the ODBC driver and SQLCMD, and the Python driver for SQL Server, see [configure an environment for pyodbc Python development](/sql/connect/python/pyodbc/step-1-configure-development-environment-for-pyodbc-python-development#windows).
-
- For further information, see [Microsoft ODBC driver](/sql/connect/odbc/microsoft-odbc-driver-for-sql-server).
-- To further explore Python and the database in Azure SQL Database, see [Azure SQL Database libraries for Python](/python/api/overview/azure/sql), the [pyodbc repository](https://github.com/mkleehammer/pyodbc/wiki/), and a [pyodbc sample](https://github.com/mkleehammer/pyodbc/wiki/Getting-started).
## Create code to query your database
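As a minimal sketch of where that query code starts (the server, database, and credential values below are placeholders, the driver name assumes ODBC Driver 17 is installed, and the query is only an example):

```python
# Build a pyodbc-style connection string for Azure SQL Database.
# All identity values below are placeholders; replace them with your own.
server = "your-server.database.windows.net"
database = "your-database"
username = "your-username"
password = "your-password"

connection_string = (
    "Driver={ODBC Driver 17 for SQL Server};"
    f"Server=tcp:{server},1433;"
    f"Database={database};"
    f"Uid={username};Pwd={password};"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

# Example query; any T-SQL statement works here.
query = "SELECT TOP 3 name, collation_name FROM sys.databases;"

# With pyodbc installed and a reachable database, you would then run:
# import pyodbc
# with pyodbc.connect(connection_string) as conn:
#     for row in conn.cursor().execute(query):
#         print(row.name, row.collation_name)
```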
azure-vmware Azure Vmware Solution On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-on-premises.md
To establish on-premises connectivity to your Azure VMware Solution private clou
This tutorial results in a connection as shown in the diagram. ## Verify on-premises network connectivity
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
Last updated 03/11/2021
There are two ways to achieve interconnectivity in the Azure VMware Solution private cloud:
-1. [**Basic Azure-only interconnectivity**](#azure-virtual-network-interconnectivity) lets you manage and use your private cloud with only a single virtual network in Azure. This implementation is best suited for Azure VMware Solution evaluations or implementations that don't require access from on-premises environments.
+- [**Basic Azure-only interconnectivity**](#azure-virtual-network-interconnectivity) lets you manage and use your private cloud with only a single virtual network in Azure. This implementation is best suited for Azure VMware Solution evaluations or implementations that don't require access from on-premises environments.
-1. [**Full on-premises to private cloud interconnectivity**](#on-premises-interconnectivity) extends the basic Azure-only implementation to include interconnectivity between on-premises and Azure VMware Solution private clouds.
+- [**Full on-premises to private cloud interconnectivity**](#on-premises-interconnectivity) extends the basic Azure-only implementation to include interconnectivity between on-premises and Azure VMware Solution private clouds.
In this article, we'll cover the key concepts that establish networking and interconnectivity, including requirements and limitations. This article provides you with the information you need to know to configure your networking to work with Azure VMware Solution.
azure-vmware Create Ipsec Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/create-ipsec-tunnel.md
To create the site-to-site VPN tunnel, you'll need to create a public-facing IP
| **Name** | | | **Type** | Select **Standard**, which will allow more than just the VPN gateway traffic. | -
- :::image type="content" source="media/create-ipsec-tunnel/create-wan.png" alt-text="Screenshot showing the Create WAN page in the Azure portal.":::
+ :::image type="content" source="media/create-ipsec-tunnel/create-wan.png" alt-text="Screenshot showing the Create WAN page in the Azure portal.":::
3. In the Azure portal, select the Virtual WAN you created in the previous step, select **Create virtual hub**, enter the required fields, and then select **Next: Site to site**.
To create the site-to-site VPN tunnel, you'll need to create a public-facing IP
| **Name** | | | **Hub private address space** | Enter the subnet using a `/24` (minimum). |
- :::image type="content" source="media/create-ipsec-tunnel/create-virtual-hub.png" alt-text="Screenshot showing the Create virtual hub page.":::
+ :::image type="content" source="media/create-ipsec-tunnel/create-virtual-hub.png" alt-text="Screenshot showing the Create virtual hub page.":::
4. On the **Site-to-site** tab, define the site-to-site gateway by setting the aggregate throughput from the **Gateway scale units** drop-down.
To create the site-to-site VPN tunnel, you'll need to create a public-facing IP
2. In the **Overview** of the virtual hub, select **Connectivity** > **VPN (Site-to-site)**, and then select **Create new VPN site**. -
- :::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-basics.png" alt-text="Screenshot of the Overview page for the virtual hub, with VPN (site-to-site) and Create new VPN site selected.":::
+ :::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-basics.png" alt-text="Screenshot of the Overview page for the virtual hub, with VPN (site-to-site) and Create new VPN site selected.":::
3. On the **Basics** tab, enter the required fields and then select **Next : Links**.
To create the site-to-site VPN tunnel, you'll need to create a public-facing IP
This section applies only to policy-based VPNs. Policy-based (or static, route-based) VPN setups are driven by on-premises VPN device capabilities in most cases. They require on-premises and Azure VMware Solution networks to be specified. For Azure VMware Solution with an Azure Virtual WAN hub, you can't select *any* network. Instead, you have to specify all relevant on-premises and Azure VMware Solution Virtual WAN hub ranges. These hub ranges are used to specify the encryption domain of the policy-based VPN tunnel's on-premises endpoint. The Azure VMware Solution side only requires the policy-based traffic selector indicator to be enabled.
-1. In the Azure portal, go to your Virtual WAN hub site; under **Connectivity**, select **VPN (Site to site)**.
+1. In the Azure portal, go to your Virtual WAN hub site. Under **Connectivity**, select **VPN (Site to site)**.
-2. Select your VPN site name and then the ellipsis (...) at the far right; then select **edit VPN connection to this hub**.
+2. Select your VPN site name, the ellipsis (...) at the far right, and then **edit VPN connection to this hub**.
- :::image type="content" source="media/create-ipsec-tunnel/edit-vpn-section-to-this-hub.png" alt-text="Screenshot of the page in Azure for the Virtual WAN hub site showing an ellipsis selected to access Edit VPN connection to this hub." lightbox="media/create-ipsec-tunnel/edit-vpn-section-to-this-hub.png":::
+ :::image type="content" source="media/create-ipsec-tunnel/edit-vpn-section-to-this-hub.png" alt-text="Screenshot of the page in Azure for the Virtual WAN hub site showing an ellipsis selected to access Edit VPN connection to this hub." lightbox="media/create-ipsec-tunnel/edit-vpn-section-to-this-hub.png":::
3. Edit the connection between the VPN site and the hub, and then select **Save**.
   - For **Internet Protocol Security (IPSec)**, select **Custom**.
   - For **Use policy-based traffic selector**, select **Enable**.
   - Specify the details for **IKE Phase 1** and **IKE Phase 2(ipsec)**.
- :::image type="content" source="media/create-ipsec-tunnel/edit-vpn-connection.png" alt-text="Screenshot of Edit VPN connection page.":::
+ :::image type="content" source="media/create-ipsec-tunnel/edit-vpn-connection.png" alt-text="Screenshot of Edit VPN connection page.":::
- Your traffic selectors or subnets that are part of the policy-based encryption domain should be:
+ Your traffic selectors or subnets that are part of the policy-based encryption domain should be:
- - The virtual WAN hub /24
- - The Azure VMware Solution private cloud /22
- - The connected Azure virtual network (if present)
+ - The virtual WAN hub /24
+ - The Azure VMware Solution private cloud /22
+ - The connected Azure virtual network (if present)
## Connect your VPN site to the hub
-1. Check the box next to your VPN site name (see preceding **VPN Site to site** screenshot) and then select **Connect VPN sites**. In the **Pre-shared key** field, enter the key previously defined for the on-premise endpoint. If you don't have a previously defined key, you can leave this field blank and a key will be automatically generated for you.
-
- Only enable **Propagate Default Route** if you're deploying a firewall in the hub and it is the next hop for connections through that tunnel.
-
- Select **Connect**. A connection status screen will show the status of the tunnel creation.
+1. Select your VPN site name and then select **Connect VPN sites**.
+1. In the **Pre-shared key** field, enter the key previously defined for the on-premises endpoint.
-2. Go to the Virtual WAN overview. Open the VPN site page and download the VPN configuration file to apply it to the on-premises endpoint.
+ >[!TIP]
+ >If you don't have a previously defined key, you can leave this field blank. A key is generated for you automatically.
+
+ >[!IMPORTANT]
+ >Only enable **Propagate Default Route** if you're deploying a firewall in the hub and it is the next hop for connections through that tunnel.
-3. Now we'll patch the Azure VMware Solution ExpressRoute into the Virtual WAN hub. (This step requires first creating your private cloud.)
+1. Select **Connect**. A connection status screen shows the status of the tunnel creation.
- Go to the **Connectivity** section of Azure VMware Solution private cloud. On the **ExpressRoute** tab, select **+ Request an authorization key**. Name it and select **Create**. (It may take about 30 seconds to create the key.) Copy the ExpressRoute ID and the authorization key.
+2. Go to the Virtual WAN overview and open the VPN site page to download the VPN configuration file for the on-premises endpoint.
- :::image type="content" source="media/create-ipsec-tunnel/express-route-connectivity.png" alt-text="Screenshot of the Connectivity page for the private cloud, with Request an authorization key selected under the ExpressRoute tab.":::
+3. Patch the Azure VMware Solution ExpressRoute circuit into the Virtual WAN hub. This step requires that you first create your private cloud.
- > [!NOTE]
- > The authorization key will disappear after some time, so copy it as soon as it appears.
+ [!INCLUDE [request-authorization-key](includes/request-authorization-key.md)]
-4. Next, we'll link Azure VMware Solution and the VPN gateway together in the Virtual WAN hub. In the Azure portal, open the Virtual WAN you created earlier. Select the created Virtual WAN hub and then select **ExpressRoute** in the left pane. Select **+ Redeem authorization key**.
+4. Link Azure VMware Solution and the VPN gateway together in the Virtual WAN hub.
+ 1. In the Azure portal, open the Virtual WAN you created earlier.
+ 1. Select the created Virtual WAN hub and then select **ExpressRoute** in the left pane.
+ 1. Select **+ Redeem authorization key**.
- :::image type="content" source="media/create-ipsec-tunnel/redeem-authorization-key.png" alt-text="Screenshot of the ExpressRoute page for the private cloud, with Redeem authorization key selected.":::
+ :::image type="content" source="media/create-ipsec-tunnel/redeem-authorization-key.png" alt-text="Screenshot of the ExpressRoute page for the private cloud, with Redeem authorization key selected.":::
- Paste the authorization key into the Authorization key field and the ExpressRoute ID into the **Peer circuit URI** field. Make sure to select **Automatically associate this ExpressRoute circuit with the hub.** Select **Add** to establish the link.
+ 1. Paste the authorization key into the **Authorization key** field.
+ 1. Paste the ExpressRoute ID into the **Peer circuit URI** field.
+ 1. Select **Automatically associate this ExpressRoute circuit with the hub.**
+ 1. Select **Add** to establish the link.
-5. To test your connection, [Create an NSX-T segment](./tutorial-nsx-t-network-segment.md) and provision a VM on the network. Test by pinging both the on-premise and Azure VMware Solution endpoints.
+5. Test your connection by [creating an NSX-T segment](./tutorial-nsx-t-network-segment.md) and provisioning a VM on the network. Ping both the on-premises and Azure VMware Solution endpoints.
azure-vmware Move Ea Csp Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/move-ea-csp-subscriptions.md
+
+ Title: Move EA and CSP Azure VMware Solution subscriptions
+description: Learn how to move the private cloud from one subscription to another. The movement can be made for various reasons such as billing.
+ Last updated : 03/15/2021
+# Move EA and CSP Azure VMware Solution subscriptions
+
+In this article, you'll learn how to move a private cloud from one subscription to another. You might do this for various reasons, such as billing.
+
+>[!IMPORTANT]
+>You should have at least contributor rights on both the source and target subscriptions. A VNet and VNet gateway can't be moved from one subscription to another. Additionally, moving your subscription has no impact on the management and workloads, such as vCenter, NSX, and workload virtual machines.
+
+1. Sign in to the Azure portal and select the private cloud you want to move.
+
+1. Select the **Subscription (change)** link.
+
+ :::image type="content" source="media/private-cloud-overview-subscription-id.png" alt-text="Screenshot showing the private cloud details.":::
+
+1. Provide the subscription details for **Target** and select **Next**.
+
+ :::image type="content" source="media/move-resources-subscription-target.png" alt-text="Screenshot of the target resource." lightbox="media/move-resources-subscription-target.png":::
+
+1. Confirm the validation of the resources you selected to move and select **Next**.
+
+ :::image type="content" source="media/confirm-move-resources-subscription-target.png" alt-text="Screenshot showing the resource being moved." lightbox="media/confirm-move-resources-subscription-target.png":::
+
+1. Select the check box indicating you understand that the associated tools and scripts won't work until you update them to use the new resource IDs. Then select **Move**.
+
+ :::image type="content" source="media/review-move-resources-subscription-target.png" alt-text="Screenshot showing the summary of the selected resource being moved. " lightbox="media/review-move-resources-subscription-target.png":::
+
+ A notification appears once the resource move is complete. The new subscription appears in the private cloud Overview.
+
+ :::image type="content" source="media/moved-subscription-target.png" alt-text="Screenshot showing a new subscription." lightbox="media/moved-subscription-target.png":::
+
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-access-private-cloud.md
In this tutorial, you learn how to:
The URLs and user credentials for private cloud vCenter and NSX-T Manager display.
- >[!TIP]
- >Select **Generate a new password** to generate new vCenter and NSX-T passwords.
-
- :::image type="content" source="media/tutorial-access-private-cloud/generate-vcenter-nsxt-passwords.png" alt-text="Display private cloud vCenter and NSX Manager URLs and credentials." border="true" lightbox="media/tutorial-access-private-cloud/generate-vcenter-nsxt-passwords.png":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Display private cloud vCenter and NSX Manager URLs and credentials." border="true" lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
1. Navigate to the VM you created in the preceding step and connect to the virtual machine.
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-configure-networking.md
To sign in to vCenter and NSX manager, you'll need the URLs to the vCenter web c
Navigate to your Azure VMware Solution private cloud, under **Manage**, select **Identity**, here you'll find the information needed. ## Next steps
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Title: Tutorial - Peer on-premises environments to a private cloud description: Learn how to create ExpressRoute Global Reach peering to a private cloud in an Azure VMware Solution. Previously updated : 01/27/2021 Last updated : 03/17/2021
# Tutorial: Peer on-premises environments to a private cloud
ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud. The ExpressRoute Global Reach connection is established between the private cloud ExpressRoute circuit and an existing ExpressRoute connection to your on-premises environments.
+The ExpressRoute circuit you use when you [configure networking for your VMware private cloud in Azure](tutorial-configure-networking.md) requires you to create and use authorization keys. You'll have already used one authorization key from the ExpressRoute circuit, and in this tutorial, you'll create a second authorization key to peer with your on-premises ExpressRoute circuit.
+ In this tutorial, you learn how to: > [!div class="checklist"]
-> * Use the Azure portal to enable on-premises-to-private cloud ExpressRoute Global Reach peering.
+> * Create a second authorization key for _circuit 2_, the private cloud ExpressRoute circuit.
+> * Use either the Azure portal or the Azure CLI in a Cloud Shell, in the subscription of _circuit 1_, to enable on-premises-to-private cloud ExpressRoute Global Reach peering.
## Before you begin
Before you enable connectivity between two ExpressRoute circuits using ExpressRoute Global Reach, review the documentation on how to [enable connectivity in different Azure subscriptions](../expressroute/expressroute-howto-set-global-reach-cli.md#enable-connectivity-between-expressroute-circuits-in-different-azure-subscriptions).
## Prerequisites
-- A separate, functioning ExpressRoute circuit used to connect on-premises environments to Azure.
+- Established connectivity to and from an Azure VMware Solution private cloud with its ExpressRoute circuit peered with an ExpressRoute gateway in an Azure virtual network (VNet), which is circuit 2 from the peering procedures.
+- A separate, functioning ExpressRoute circuit used to connect on-premises environments to Azure, which is circuit 1 from the peering procedures' perspective.
+- A /29 non-overlapping [network address block](../expressroute/expressroute-routing.md#ip-addresses-used-for-peerings) for the ExpressRoute Global Reach peering.
- Ensure that all gateways, including the ExpressRoute provider's service, support 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
+>[!IMPORTANT]
+>In the context of these prerequisites, your on-premises ExpressRoute circuit is _circuit 1_, and your private cloud ExpressRoute circuit is in a different subscription and labeled _circuit 2_.
-## Create an ExpressRoute authorization key in the on-premises ExpressRoute circuit
-
-1. From the **ExpressRoute circuits** blade, under Settings, select **Authorizations**.
+## Create an ExpressRoute authorization key in the on-premises circuit
-2. Enter the name for the authorization key and select **Save**.
-
- :::image type="content" source="media/expressroute-global-reach/start-request-auth-key-on-premises-expressroute.png" alt-text="Select Authorizations and enter the name for the authorization key.":::
-
- Once created, the new key appears in the list of authorization keys for the circuit.
-
- 4. Make a note of the authorization key and the ExpressRoute ID. You'll use them in the next step to complete the peering.
+## Peer private cloud to on-premises with authorization key
+Now that you've created an authorization key for the private cloud ExpressRoute circuit, you can peer it with your on-premises ExpressRoute circuit. You do the peering from the perspective of the on-premises ExpressRoute circuit, using either the **Azure portal** or the **Azure CLI in a Cloud Shell**. With both methods, you use the resource ID and authorization key of your private cloud ExpressRoute circuit to finish the peering.
+
+### [Portal](#tab/azure-portal)
- ## Peer private cloud to on-premises
+1. Sign in to the [Azure portal](https://portal.azure.com) using the same subscription as the on-premises ExpressRoute circuit.
-1. From the private cloud **Overview**, under Manage, select **Connectivity > ExpressRoute Global Reach > Add**.
+1. From the private cloud **Overview**, under Manage, select **Connectivity** > **ExpressRoute Global Reach** > **Add**.
- :::image type="content" source="./media/expressroute-global-reach/expressroute-global-reach-tab.png" alt-text="From the menu, select Connectivity, the ExpressRoute Global Reach tab, and then Add.":::
+ :::image type="content" source="./media/expressroute-global-reach/expressroute-global-reach-tab.png" alt-text="Screenshot showing the ExpressRoute Global Reach tab in the Azure VMware Solution private cloud.":::
-2. Enter the ExpressRoute ID and the authorization key created in the previous section.
+1. Create an on-premises cloud connection. Do one of the following and then select **Create**:
- :::image type="content" source="./media/expressroute-global-reach/on-premises-cloud-connections.png" alt-text="Enter the ExpressRoute ID and the authorization key, and then select Create.":::
+ - Select the **ExpressRoute circuit** from the list, or
+ - If you have the circuit ID, paste it in the field and provide the authorization key.
-3. Select **Create**. The new connection shows in the On-premises cloud connections list.
+ :::image type="content" source="./media/expressroute-global-reach/on-premises-cloud-connections.png" alt-text="Enter the ExpressRoute ID and the authorization key, and then select Create.":::
+
+ The new connection shows in the On-premises cloud connections list.
>[!TIP] >You can delete or disconnect a connection from the list by selecting **More**. > > :::image type="content" source="./media/expressroute-global-reach/on-premises-connection-disconnect.png" alt-text="Disconnect or deleted an on-premises connection":::
+### [Azure CLI](#tab/azure-cli)
+
+We've augmented the [CLI commands](../expressroute/expressroute-howto-set-global-reach-cli.md) with specific details and examples to help you configure the ExpressRoute Global Reach peering between on-premises environments to an Azure VMware Solution private cloud.
+
+>[!TIP]
+>For brevity in the Azure CLI command output, these instructions may use a [`--query` argument](https://docs.microsoft.com/cli/azure/query-azure-cli) to execute a JMESPath query that shows only the required results.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using the same subscription as the on-premises ExpressRoute circuit.
+
+1. Open a Cloud Shell and leave the shell as bash.
+
+ :::image type="content" source="media/expressroute-global-reach/azure-portal-cloud-shell.png" alt-text="Screenshot showing the Azure portal Cloud Shell.":::
+
+1. Create the peering against circuit 1, passing in circuit 2's resource ID and authorization key.
+
+ ```azurecli-interactive
+ az network express-route peering connection create -g <ResourceGroupName> --circuit-name <Circuit1Name> --peering-name AzurePrivatePeering -n <ConnectionName> --peer-circuit <Circuit2ResourceID> --address-prefix <__.__.__.__/29> --authorization-key <authorizationKey>
+ ```
+
+ :::image type="content" source="media/expressroute-global-reach/azure-command-with-results.png" alt-text="Screenshot showing the command and the results of a successful peering between circuit 1 and circuit 2.":::
+
+You can connect from on-premises environments to your private cloud over the ExpressRoute Global Reach peering.
+
+>[!TIP]
+>You can delete the peering by following the [Disable connectivity between your on-premises networks](../expressroute/expressroute-howto-set-global-reach-cli.md#disable-connectivity-between-your-on-premises-networks) instructions.
+++
## Next steps
backup Backup Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-architecture.md
You don't need to explicitly allow internet connectivity to back up your Azure V
1. The MARS agent uses VSS to take a point-in-time snapshot of the volumes selected for backup. - The MARS agent uses only the Windows system write operation to capture the snapshot. - Because the agent doesn't use any application VSS writers, it doesn't capture app-consistent snapshots.
-1. After taking the snapshot with VSS, the MARS agent creates a virtual hard disk (VHD) in the cache folder you specified when you configured the backup. The agent also stores checksums for each data block.
+1. After taking the snapshot with VSS, the MARS agent creates a virtual hard disk (VHD) in the cache folder you specified when you configured the backup. The agent also stores checksums for each data block. These are later used to detect changed blocks for subsequent incremental backups.
1. Incremental backups run according to the schedule you specify, unless you run an on-demand backup. 1. In incremental backups, changed files are identified and a new VHD is created. The VHD is compressed and encrypted, and then it's sent to the vault. 1. After the incremental backup finishes, the new VHD is merged with the VHD created after the initial replication. This merged VHD provides the latest state to be used for comparison for ongoing backup.
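The checksum bookkeeping in the steps above can be sketched in a few lines. This is a hypothetical illustration of changed-block detection, not the MARS agent's actual implementation; the block size and hash algorithm are assumptions:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for illustration only


def block_checksums(data: bytes) -> list[str]:
    """Split data into fixed-size blocks and record a checksum per block."""
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]


def changed_blocks(old_sums: list[str], new_data: bytes) -> list[int]:
    """Return indices of blocks whose checksum differs from the stored one --
    the only blocks an incremental backup would need to transfer."""
    new_sums = block_checksums(new_data)
    return [
        i for i, s in enumerate(new_sums)
        if i >= len(old_sums) or s != old_sums[i]
    ]


# The initial (full) backup stores checksums; the next run compares against them.
initial = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE
stored = block_checksums(initial)
modified = b"A" * BLOCK_SIZE + b"X" * BLOCK_SIZE + b"C" * BLOCK_SIZE
print(changed_blocks(stored, modified))  # → [1, 2]
```

Only blocks 1 (changed) and 2 (appended) would need to go to the vault; block 0 is unchanged and is skipped.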
backup Backup Azure Backup Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-backup-faq.md
Exporting data directly from the Recovery Services vault to on-premises using Da
In the case of a [GRS](azure-backup-glossary.md#grs) vault without [CRR](azure-backup-glossary.md#cross-region-restore-crr) capability enabled, the data in the secondary region can't be accessed until Azure declares a disaster in the primary region. In such a scenario, the restore happens from the secondary region. When CRR is enabled, even if the primary region is up and running, you can trigger a restore in the secondary region.
+### Can I move a subscription that contains a vault to a different Azure Active Directory?
+
+Yes. To move a subscription that contains a vault to a different Azure Active Directory (Azure AD) directory, see [Transfer subscription to a different directory](../role-based-access-control/transfer-subscription.md).
+
+>[!IMPORTANT]
+>Ensure that you perform the following actions after moving the subscription:<ul><li>Role-based access control permissions and custom roles aren't transferable. You must re-create the permissions and roles in the new Azure AD directory.</li><li>You must re-create the managed identity (MI) of the vault by disabling and enabling it again. You must also evaluate and re-create the MI permissions.</li><li>If the vault uses features that rely on MI, such as [Private Endpoints](private-endpoints.md#before-you-start) and [Customer Managed Keys](encryption-at-rest-with-cmk.md#before-you-start), you must reconfigure those features.</li></ul>
+ ## Azure Backup agent ### Where can I find common questions about the Azure Backup agent for Azure VM backup?
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-mars-troubleshoot.md
We recommend that you check the following before you start troubleshooting Micro
| Cause | Recommended actions | | | |
-| **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than *.vaultCredentials*. (For example, they might have been downloaded more than 48 hours before the time of registration.)| [Download new credentials](backup-azure-file-folder-backup-faq.md#where-can-i-download-the-vault-credentials-file) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <ul><li> If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br/> <li> If the new installation fails, try reinstalling with the new credentials.</ul> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 48 hours. We recommend that you download a new vault credential file.
+| **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than *.vaultCredentials*. (For example, they might have been downloaded more than 10 days before the time of registration.)| [Download new credentials](backup-azure-file-folder-backup-faq.md#where-can-i-download-the-vault-credentials-file) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <ul><li> If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br/> <li> If the new installation fails, try reinstalling with the new credentials.</ul> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 10 days. We recommend that you download a new vault credential file.
| **Proxy server/firewall is blocking registration** <br/>or <br/>**No internet connectivity** <br/><br/> If your machine or proxy server has limited internet connectivity and you don't ensure access for the necessary URLs, the registration will fail.| Take these steps:<br/> <ul><li> Work with your IT team to ensure the system has internet connectivity.<li> If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<li> If you do have a firewall/proxy server, work with your networking team to ensure these URLs and IP addresses have access:<br/> <br> **URLs**<br> `www.msftncsi.com` <br> .Microsoft.com <br> .WindowsAzure.com <br> .microsoftonline.com <br> .windows.net <br>`www.msftconnecttest.com`<br><br>**IP addresses**<br> 20.190.128.0/18 <br> 40.126.0.0/18<br> <br/></ul></ul>Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in [Azure ExpressRoute support](backup-support-matrix-mars-agent.md#azure-expressroute-support). | **Antivirus software is blocking registration** | If you have antivirus software installed on the server, add necessary exclusion rules to the antivirus scan for these files and folders: <br/><ul> <li> CBengine.exe <li> CSC.exe<li> The scratch folder. Its default location is C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch. <li> The bin folder at C:\Program Files\Microsoft Azure Recovery Services Agent\Bin.
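The 10-day validity rule lends itself to a quick local pre-check before you attempt registration. A minimal sketch — the helper name and the idea of validating locally are illustrative, not part of the MARS agent:

```python
MAX_AGE_DAYS = 10  # vault credential files expire 10 days after download


def credential_file_usable(filename: str, age_days: float) -> bool:
    """Pre-check mirroring the guidance above: the file must keep the
    .vaultCredentials extension and be no older than 10 days."""
    return filename.endswith(".vaultCredentials") and age_days <= MAX_AGE_DAYS


print(credential_file_usable("myvault.vaultCredentials", 2))   # True
print(credential_file_usable("myvault.vaultCredentials", 11))  # False: expired
print(credential_file_usable("myvault.txt", 2))                # False: wrong extension
```

If either check fails, download a fresh vault credential file from the portal before registering.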
backup Backup Azure Move Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-move-recovery-services-vault.md
All public regions and sovereign regions are supported, except France Central, F
- During vault move across resource groups, both the source and target resource groups are locked preventing the write and delete operations. For more information, see this [article](../azure-resource-manager/management/move-resource-group-and-subscription.md). - Only admin subscription has the permissions to move a vault.-- For moving vaults across subscriptions, the target subscription must reside in the same tenant as the source subscription and its state should be enabled.
+- For moving vaults across subscriptions, the target subscription must reside in the same tenant as the source subscription and its state must be enabled. To move a vault to a different Azure AD directory, see [Transfer subscription to a different directory](../role-based-access-control/transfer-subscription.md) and [Recovery Service vault FAQs](backup-azure-backup-faq.md#recovery-services-vault).
- You must have permission to perform write operations on the target resource group. - Moving the vault only changes the resource group. The Recovery Services vault will reside on the same location and it can't be changed. - You can move only one Recovery Services vault, per region, at a time.
To move a Recovery Services vault and its associated resources to different reso
![Move Subscription](./media/backup-azure-move-recovery-services/move-resource.png)
-5. To add the target resource group, in the **Resource group** drop-down list select an existing resource group or select **create a new group** option.
+5. To add the target resource group, in the **Resource group** drop-down list, select an existing resource group or select **create a new group** option.
![Create Resource](./media/backup-azure-move-recovery-services/create-a-new-resource.png)
You can move a Recovery Services vault and its associated resources to a differe
![move resource](./media/backup-azure-move-recovery-services/move-resource-source-subscription.png) 5. Select the target subscription from the **Subscription** drop-down list, where you want the vault to be moved.
-6. To add the target resource group, in the **Resource group** drop-down list select an existing resource group or select **create a new group** option.
+6. To add the target resource group, in the **Resource group** drop-down list, select an existing resource group or select **create a new group** option.
![Add Subscription](./media/backup-azure-move-recovery-services/add-subscription.png)
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-restore-files-from-vm.md
To restore files or folders from the recovery point, go to the virtual machine a
## Step 2: Ensure the machine meets the requirements before executing the script
-After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you are planning to execute the script, should not have any of the following unsupported configurations. If it does, then choose an alternate machine preferably from the same region that meets the requirements.
+After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you plan to execute the script shouldn't have any of the following unsupported configurations. **If it does, choose an alternate machine, preferably from the same region, that meets the requirements**.
### Dynamic disks
-You can't run the executable script on the VM with any of the following characteristics:
+You can't run the executable script on a VM with any of the following characteristics; choose an alternate machine instead:
- Volumes that span multiple disks (spanned and striped volumes). - Fault-tolerant volumes (mirrored and RAID-5 volumes) on dynamic disks. ### Windows Storage Spaces
-You cannot run the downloaded executable on the VM that is configured for Windows Storage Spaces.
+You can't run the downloaded executable on the backed-up VM if that VM is configured with Windows Storage Spaces. Choose an alternate machine.
### Virtual machine backups having large disks
batch Batch Rendering Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-applications.md
Title: Rendering applications description: It's possible to use any rendering applications with Azure Batch. However, Azure Marketplace VM images are available with common applications pre-installed. Previously updated : 02/12/2021 Last updated : 03/12/2021
It's possible to use any rendering applications with Azure Batch. However, Azure Marketplace VM images are available with common applications pre-installed.
-Where applicable, pay-per-use licensing is available for the pre-installed rendering applications. When a Batch pool is created, the required applications can be specified and both the cost of VM and applications will be billed per minute. Application prices are listed on the [Azure Batch pricing page](https://azure.microsoft.com/pricing/details/batch/#graphic-rendering).
+Where applicable, pay-for-use licensing is available for the pre-installed rendering applications. When a Batch pool is created, the required applications can be specified and both the cost of VM and applications will be billed per minute. Application prices are listed on the [Azure Batch pricing page](https://azure.microsoft.com/pricing/details/batch/#graphic-rendering).
Some applications only support Windows, but most are supported on both Windows and Linux.
+> [!IMPORTANT]
+> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on 29 February 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, use [a custom VM image and standard application licensing](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing).
+ ## Applications on latest CentOS 7 rendering image The following list applies to the CentOS rendering image, version 1.2.0.
batch Batch Rendering Functionality https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-functionality.md
Title: Rendering capabilities
description: Standard Azure Batch capabilities are used to run rendering workloads and apps. Batch includes specific features to support rendering workloads. Previously updated : 02/01/2021 Last updated : 03/12/2021
Most rendering applications will require licenses obtained from a license server
## Batch pools using rendering VM images
+> [!IMPORTANT]
+> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on 29 February 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, use [a custom VM image and standard application licensing](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing).
+ ### Rendering application installation An Azure Marketplace rendering VM image can be specified in the pool configuration if only the pre-installed applications need to be used.
batch Batch Rendering Using https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-rendering-using.md
Title: Using rendering capabilities
description: How to use Azure Batch rendering capabilities. Try using the Batch Explorer application, either directly or invoked from a client application plug-in. Previously updated : 03/05/2020 Last updated : 03/12/2021 # Using Azure Batch rendering
+> [!IMPORTANT]
+> The rendering VM images and pay-for-use licensing have been [deprecated and will be retired on 29 February 2024](https://azure.microsoft.com/updates/azure-batch-rendering-vm-images-licensing-will-be-retired-on-29-february-2024/). To use Batch for rendering, use [a custom VM image and standard application licensing](batch-rendering-functionality.md#batch-pools-using-custom-vm-images-and-standard-application-licensing).
+ There are several ways to use Azure Batch rendering: * APIs:
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/best-practices.md
This article discusses a collection of best practices and useful tips for using
- **Pool allocation mode** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable an important, but small subset of scenarios. You can read more about user subscription mode at [Additional configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode). -- **'virtualMachineConfiguration' or 'virtualMachineConfiguration'.**
- While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'virtualMachineConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
+- **'virtualMachineConfiguration' or 'cloudServiceConfiguration'.**
+ While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
- **Consider job and task run time when determining job to pool mapping.** If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job is not long, do not allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job.
batch Tutorial R Doazureparallel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/tutorial-r-doazureparallel.md
ms.devlang: r
Last updated 10/08/2020 + # Tutorial: Run a parallel R simulation with Azure Batch
batch Tutorial Rendering Batchexplorer Blender https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/tutorial-rendering-batchexplorer-blender.md
Last updated 08/02/2018 + # Tutorial: Render a Blender scene using Batch Explorer
batch Tutorial Rendering Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/tutorial-rendering-cli.md
description: Learn how to render an Autodesk 3ds Max scene with Arnold using the
Last updated 12/30/2020 + # Tutorial: Render a scene with Azure Batch
cdn Cdn Pop Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-pop-locations.md
This article lists current Metros containing point-of-presence (POP) locations,
## Next steps
-* To get the latest IP addresses for allowlisting, see the [Azure CDN Edge Nodes API](https://github.com/Azure/azure-docs-rest-apis/blob/master/docs-ref-autogen/cdn/cdn/EdgeNodes/).
+* To get the latest IP addresses for allowlisting, see the [Azure CDN Edge Nodes API](https://docs.microsoft.com/rest/api/cdn/edgenodes).
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
Remove old remote desktop settings from the Service Configuration (.cscfg) file.
<Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2021-12-17T23:59:59.0000000+05:30" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> ```
+Remove old diagnostics settings for each role in the Service Configuration (.cscfg) file.
+
+```xml
+<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
+```
## Required Service Definition file (.csdef) updates
Deployments that utilized the old remote desktop plugins need to have the module
<Import moduleName="RemoteForwarder" /> </Imports> ```
+Deployments that utilized the old diagnostics plugins need the settings removed for each role from the Service Definition (.csdef) file
+
+```xml
+<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" />
+```
## Key Vault creation
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
## Deploy a Cloud Service (extended support)
+> [!NOTE]
+> An alternative way of deploying your Cloud Service (extended support) is via the [Azure portal](https://portal.azure.com). You can download the generated ARM template via the portal for your future deployments.
+
1. Create virtual network. The name of the virtual network must match the references in the Service Configuration (.cscfg) file. If using an existing virtual network, omit this section from the ARM template. ```json
This tutorial explains how to create a Cloud Service (extended support) deployme
- Review [frequently asked questions](faq.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Generate Template Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/generate-template-portal.md
+
+ Title: Generate ARM Template for Cloud Services (extended support) using the Azure portal
+description: Generate and download ARM Template and parameter file for Cloud Services (extended support) using the Azure portal
+++++ Last updated : 03/07/2021+++
+# Generate ARM Template for Cloud Services (extended support) using the Azure portal
+
+This article explains how to get the ARM template and parameter file from the [Azure portal](https://portal.azure.com) after the cloud service (extended support) is deployed. The ARM template and parameter file can be used in future deployments to upgrade or update a cloud service (extended support).
+
+## Get ARM template via portal
+
+ 1. Go to your resource group and select **Deployments**.
+ :::image type="content" source="media/generate-template-portal-1.png" alt-text="Image shows selecting deployments under resource group on the Azure portal.":::
+
+ 2. Select your cloud service (extended support), and then select **Template**.
+ :::image type="content" source="media/generate-template-portal-2.png" alt-text="Image shows selecting template under cloud service (extended support) on the Azure portal.":::
+
+ 3. Download your template and parameter files. These can be used for future deployments via PowerShell.
+ :::image type="content" source="media/generate-template-portal-3.png" alt-text="Image shows downloading template file on the Azure portal.":::
+
+## Next steps
+- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md)
+
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview.md
Azure's Computer Vision service gives you access to advanced algorithms that pro
You can create Computer Vision applications through a [client library SDK](./quickstarts-sdk/client-library.md) or by calling the [REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-ga/operations/5d986960601faab4bf452005) directly. This page broadly covers what you can do with Computer Vision.
+This documentation contains the following types of articles:
+* The [quickstarts](./quickstarts-sdk/client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
+* The [how-to guides](./Vision-API-How-to-Topics/HowToCallVisionAPI.md) contain instructions for using the service in more specific or customized ways.
+* The [conceptual articles](concept-recognizing-text.md) provide in-depth explanations of the service's functionality and features.
+* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
+ ## Optical Character Recognition (OCR) Computer Vision includes [Optical Character Recognition (OCR)](concept-recognizing-text.md) capabilities. You can use the new Read API to extract printed and handwritten text from images and documents. It uses deep learning based models and works with text on a variety of surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow a [quickstart](./quickstarts-sdk/client-library.md) to get started.
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/network-isolation.md
You can add IPs to App service allow list to restrict access or Configure App Se
#### Add IPs to App Service allow list
-1.
-traffic only from Cognitive Services IPs. These are already included in Service Tag `CognitiveServicesManagement`. This is required for Authoring APIs (Create/Update KB) to invoke the app service and update Azure Search service accordingly. Check out [more information about service tags.](../../../virtual-network/service-tags-overview.md)
+1. Allow traffic only from Cognitive Services IPs. These are already included in Service Tag `CognitiveServicesManagement`. This is required for Authoring APIs (Create/Update KB) to invoke the app service and update Azure Search service accordingly. Check out [more information about service tags.](../../../virtual-network/service-tags-overview.md)
2. Make sure you also allow other entry points like Azure Bot Service, QnA Maker portal, etc. for prediction "GenerateAnswer" API access. 3. Please follow these steps to add the IP Address ranges to an allow list:
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
To create your first project, select **Speech-to-text/Custom speech**, and then
> [!IMPORTANT] > The [Speech Studio](https://aka.ms/custom-speech) formerly known as "Custom Speech portal" was recently updated! If you created previous data, models, tests, and published endpoints in the CRIS.ai portal or with APIs, you need to create a new project in the new portal to connect to these old entities.
-## Model lifecycle
+## Model and Endpoint lifecycle
-Custom Speech uses both *base models* and *custom models*. Each language has one or more base models. Generally, when a new speech model is released to the regular speech service, it's also imported to the Custom Speech service as a new base model. They're updated every 3 to 6 months. Older models typically become less useful over time because the newest model usually has higher accuracy.
-
-In contrast, custom models are created by adapting a chosen base model to a particular customer scenario. You can keep using a particular custom model for a long time after you have one that meets your needs. But we recommend that you periodically update to the latest base model and retrain it over time with additional data.
-
-Other key terms related to the model lifecycle include:
-
-* **Adaptation**: Taking a base model and customizing it to your domain/scenario by using text data and/or audio data.
-* **Decoding**: Using a model and performing speech recognition (decoding audio into text).
-* **Endpoint**: A user-specific deployment of either a base model or a custom model that's accessible *only* to a given user.
-
-### Expiration timeline
-
-As new models and new functionality become available and older, less accurate models are retired, see the following timelines for model and endpoint expiration:
-
-**Base models**
-
-* Adaptation: Available for one year. After the model is imported, it's available for one year to create custom models. After one year, new custom models must be created from a newer base model version.
-* Decoding: Available for two years after import. So you can create an endpoint and use batch transcription for two years with this model.
-* Endpoints: Available on the same timeline as decoding.
-
-**Custom models**
-
-* Decoding: Available for two years after the model is created. So you can use the custom model for two years (batch/realtime/testing) after it's created. After two years, *you should retrain your model* because the base model will usually have been deprecated for adaptation.
-* Endpoints: Available on the same timeline as decoding.
-
-When either a base model or custom model expires, it will always fall back to the *newest base model version*. So your implementation will never break, but it might become less accurate for *your specific data* if custom models reach expiration. You can see the expiration for a model in the following places in the Custom Speech area of the Speech Studio:
-
-* Model training summary
-* Model training detail
-* Deployment summary
-* Deployment detail
-
-You can also check the expiration dates via the [`GetModel`](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) and [`GetBaseModel`](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel) custom speech APIs under the `deprecationDates` property in the JSON response.
-
-Note that you can upgrade the model on a custom speech endpoint without downtime by changing the model used by the endpoint in the deployment section of the Speech Studio, or via the custom speech API.
+Older models typically become less useful over time because the newest model usually has higher accuracy. Therefore, base models as well as custom models and endpoints created through the portal are subject to expiration after 1 year for adaptation and 2 years for decoding. See a detailed description in the [Model and Endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md) article.
## Next steps
cognitive-services Faq Stt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-stt.md
The other results are likely worse and might not have full capitalization and pu
**Q: Why are there different base models?**
-**A**: You can choose from more than one base model in the Speech service. Each model name contains the date when it was added. When you start training a custom model, use the latest model to get the best accuracy. Older base models are still available for some time when a new model is made available. You can continue using the model that you have worked with until it is retired (see [Model lifecycle](custom-speech-overview.md#model-lifecycle)). It is still recommended to switch to the latest base model for better accuracy.
+**A**: You can choose from more than one base model in the Speech service. Each model name contains the date when it was added. When you start training a custom model, use the latest model to get the best accuracy. Older base models are still available for some time when a new model is made available. You can continue using the model that you have worked with until it is retired (see [Model and Endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). It is still recommended to switch to the latest base model for better accuracy.
**Q: Can I update my existing model (model stacking)?**
The old dataset and the new dataset must be combined in a single .zip file (for
If you have adapted and deployed a model, that deployment will remain as is. You can decommission the deployed model, readapt using the newer version of the base model and redeploy for better accuracy.
-Both base models and custom models will be retired after some time (see [Model lifecycle](custom-speech-overview.md#model-lifecycle)).
+Both base models and custom models will be retired after some time (see [Model and Endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)).
**Q: Can I download my model and run it locally?**
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
+
+ Title: Model and Endpoint Lifecycle of Custom Speech - Speech service
+
+description: Custom Speech provides base models for adaptation and lets you create custom models from your data. This article describes the timelines for models and for endpoints that use these models.
++++++ Last updated : 03/10/2021+++
+# Model and Endpoint lifecycle
+
+Custom Speech uses both *base models* and *custom models*. Each language has one or more base models. Generally, when a new speech model is released to the regular Speech service, it's also imported into the Custom Speech service as a new base model. Base models are updated every 6 to 12 months. Older models typically become less useful over time because the newest model usually has higher accuracy.
+
+In contrast, custom models are created by adapting a chosen base model with data from your particular customer scenario. You can keep using a particular custom model for a long time after you have one that meets your needs. But we recommend that you periodically update to the latest base model and retrain it over time with additional data.
+
+Other key terms related to the model lifecycle include:
+
+* **Adaptation**: Taking a base model and customizing it to your domain/scenario by using text data and/or audio data.
+* **Decoding**: Using a model and performing speech recognition (decoding audio into text).
+* **Endpoint**: A user-specific deployment of either a base model or a custom model that's accessible *only* to a given user.
+
+### Expiration timeline
+
+As new models and new functionality become available, older and less accurate models are retired. See the following timelines for model and endpoint expiration:
+
+**Base models**
+
+* Adaptation: Available for one year after import. During that year, you can use the base model to create custom models. After one year, new custom models must be created from a newer base model version.
+* Decoding: Available for two years after import. You can create an endpoint and use batch transcription with this model for two years.
+* Endpoints: Available on the same timeline as decoding.
+
+**Custom models**
+
+* Decoding: Available for two years after the model is created. You can use the custom model for batch transcription, real-time recognition, and testing for two years after it's created. After two years, *you should retrain your model*, because the base model will usually have been deprecated for adaptation.
+* Endpoints: Available on the same timeline as decoding.
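The timelines above can be sketched in code. This is a minimal illustration, assuming expiration is counted in whole years from the import or creation date; the authoritative dates always come from the service itself (see the `GetModel` API below).

```python
from datetime import datetime, timedelta

def expiration_dates(created: datetime) -> dict:
    """Hypothetical helper: derive expiration dates from a model's
    import/creation date using the documented timelines
    (1 year for adaptation, 2 years for decoding)."""
    return {
        # Base models: usable for adaptation for one year after import.
        "adaptation": created + timedelta(days=365),
        # Base and custom models: usable for decoding (endpoints,
        # batch transcription) for two years.
        "decoding": created + timedelta(days=2 * 365),
    }

dates = expiration_dates(datetime(2021, 3, 1))
```

Note that `timedelta(days=365)` is an approximation of "one year"; leap years shift the exact date slightly.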
+
+When either a base model or custom model expires, it will always fall back to the *newest base model version*. So your implementation will never break, but it might become less accurate for *your specific data* if custom models reach expiration. You can see the expiration for a model in the following places in the Custom Speech area of the Speech Studio:
+
+* Model training summary
+* Model training detail
+* Deployment summary
+* Deployment detail
+
+Here is an example from the model training summary:
+
+![Model training summary](media/custom-speech/custom-speech-model-training-with-expiry.png)
+And also from the model training detail page:
+
+![Model training detail](media/custom-speech/custom-speech-model-details-with-expiry.png)
+
+You can also check the expiration dates via the [`GetModel`](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetModel) and [`GetBaseModel`](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModel) custom speech APIs under the `deprecationDates` property in the JSON response.
+
+Here is an example of the expiration data from a `GetModel` API call. The `deprecationDates` property shows the last dates the model can be used for adaptation and for decoding:
+```json
+{
+  "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{id}",
+  "baseModel": {
+    "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/{id}"
+  },
+  "datasets": [
+    {
+      "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/datasets/{id}"
+    }
+  ],
+  "links": {
+    "manifest": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{id}/manifest",
+    "copyTo": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/{id}/copyto"
+  },
+  "project": {
+    "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/{id}"
+  },
+  "properties": {
+    "deprecationDates": {
+      "adaptationDateTime": "2022-01-15T00:00:00Z",      // last date this model can be used for adaptation
+      "transcriptionDateTime": "2023-03-01T21:27:29Z"    // last date this model can be used for decoding
+    }
+  },
+  "lastActionDateTime": "2021-03-01T21:27:40Z",
+  "status": "Succeeded",
+  "createdDateTime": "2021-03-01T21:27:29Z",
+  "locale": "en-US",
+  "displayName": "Example model",
+  "description": "",
+  "customProperties": {
+    "PortalAPIVersion": "3"
+  }
+}
+```
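A response like the one above can be parsed to decide whether a model is nearing expiration. This is a minimal offline sketch that works on a saved response body; fetching the body via the `GetModel` REST endpoint with your subscription key is left out.

```python
import json
from datetime import datetime, timezone

# Sample fragment of a GetModel response, shaped like the example above.
response_body = """
{
  "properties": {
    "deprecationDates": {
      "adaptationDateTime": "2022-01-15T00:00:00Z",
      "transcriptionDateTime": "2023-03-01T21:27:29Z"
    }
  }
}
"""

def deprecation_dates(body: str) -> dict:
    """Extract the adaptation/transcription cutoff dates from a model JSON."""
    dates = json.loads(body)["properties"]["deprecationDates"]
    parse = lambda s: datetime.fromisoformat(s.replace("Z", "+00:00"))
    return {key: parse(value) for key, value in dates.items()}

dates = deprecation_dates(response_body)
# The model can still be used for decoding while now < transcriptionDateTime.
can_decode = datetime.now(timezone.utc) < dates["transcriptionDateTime"]
```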
+You can upgrade the model on a Custom Speech endpoint without downtime by changing the model that the endpoint uses, either in the deployment section of the Speech Studio or via the Custom Speech API.
+
+## Next steps
+
+* [Train and deploy a model](how-to-custom-speech-train-model.md)
+
+## Additional resources
+
+* [Prepare and test your data](./how-to-custom-speech-test-and-train.md)
+* [Inspect your data](how-to-custom-speech-inspect-data.md)
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
The **Training** table displays a new entry that corresponds to the new model. T
See the [how-to](how-to-custom-speech-evaluate-data.md) on evaluating and improving Custom Speech model accuracy. If you choose to test accuracy, it's important to select an acoustic dataset that's different from the one you used with your model to get a realistic sense of the model's performance.

> [!NOTE]
-> Both base models and custom models can be used only up to a certain date (see [Model lifecycle](custom-speech-overview.md#model-lifecycle)). Speech Studio shows this date in the **Expiration** column for each model and endpoint. After that date request to an endpoint or to batch transcription might fail or fall back to base model.
+> Both base models and custom models can be used only up to a certain date (see [Model and Endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). Speech Studio shows this date in the **Expiration** column for each model and endpoint. After that date, requests to an endpoint or to batch transcription might fail or fall back to the base model.
>
> Retrain your model using the most recent base model available at that time to benefit from accuracy improvements and to keep your model from expiring.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Neural voices can be used to make interactions with chatbots and voice assistant
| English (United Kingdom) | `en-GB` | Female | `en-GB-MiaNeural` | General |
| English (United Kingdom) | `en-GB` | Male | `en-GB-RyanNeural` | General |
| English (United States) | `en-US` | Female | `en-US-AriaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-JennyNeural` | General |
-| English (United States) | `en-US` | Male | `en-US-GuyNeural` | General |
+| English (United States) | `en-US` | Female | `en-US-JennyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-GuyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
| Finnish (Finland) | `fi-FI` | Female | `fi-FI-NooraNeural` | General |
| Finnish (Finland) | `fi-FI` | Female | `fi-FI-SelmaNeural` <sup>New</sup> | General |
| Finnish (Finland) | `fi-FI` | Male | `fi-FI-HarriNeural` <sup>New</sup> | General |
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
This response has been truncated to illustrate the structure of a response.
```json
[
- {
- "Name": "Microsoft Server Speech Text to Speech Voice (ar-EG, Hoda)",
- "DisplayName": "Hoda",
- "LocalName": "هدى",
- "ShortName": "ar-EG-Hoda",
- "Gender": "Female",
- "Locale": "ar-EG",
- "SampleRateHertz": "16000",
- "VoiceType": "Standard",
- "Status": "GA"
- },
-
-...
-
+
    {
        "Name": "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)",
        "DisplayName": "Aria",
This response has been truncated to illustrate the structure of a response.
}, ...
+
+ {
+ "Name": "Microsoft Server Speech Text to Speech Voice (ar-EG, Hoda)",
+ "DisplayName": "Hoda",
+ "LocalName": "هدى",
+ "ShortName": "ar-EG-Hoda",
+ "Gender": "Female",
+ "Locale": "ar-EG",
+ "SampleRateHertz": "16000",
+ "VoiceType": "Standard",
+ "Status": "GA"
+ },
+
+...
+ ] ```
If the HTTP status is `200 OK`, the body of the response contains an audio file
- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/) - [Asynchronous synthesis for long-form audio](./long-audio-api.md)-- [Get started with Custom Voice](how-to-custom-voice.md)
+- [Get started with Custom Voice](how-to-custom-voice.md)
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
Previously updated : 02/24/2021 Last updated : 03/15/2021
In the table below Parameters without "Adjustable" row are **not** adjustable fo
| **Websocket specific quotas** | | |
|Max Audio length produced per turn | 10 min | 10 min |
|Max SSML Message size per turn |64 KB |64 KB |
-| **REST API limit** | 20 requests per minute | 25 requests per 5 seconds |
+| **REST API limit** | 20 requests per minute | 300 requests per minute |
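A client that needs to stay under a per-minute request limit like the one above can pace its calls. The sketch below is a simple illustration, not an official SDK feature; the rates used are the table values, and your tier's actual limit may differ.

```python
import time

def min_interval(requests_per_minute: int) -> float:
    """Smallest spacing between calls that stays under the per-minute limit."""
    return 60.0 / requests_per_minute

class Pacer:
    """Sleep just enough before each call to respect the limit.
    clock/sleep are injectable so the pacing logic can be tested offline."""
    def __init__(self, requests_per_minute, clock=time.monotonic, sleep=time.sleep):
        self.interval = min_interval(requests_per_minute)
        self._clock, self._sleep = clock, sleep
        self._next = clock()

    def wait(self):
        now = self._clock()
        if now < self._next:
            self._sleep(self._next - now)
        self._next = max(now, self._next) + self.interval

# 300 requests per minute -> at most one call every 0.2 seconds.
```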
<sup>3</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/>
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The `voice` element is required. It is used to specify the voice that is used fo
**Example**

> [!NOTE]
-> This example uses the `en-US-AriaRUS` voice. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech).
+> This example uses the `en-US-JennyNeural` voice. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech).
```XML
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
    This is the text that is spoken.
  </voice>
</speak>
speechConfig!.setPropertyTo(
```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
    Good morning!
  </voice>
- <voice name="en-US-Guy24kRUS">
- Good morning to you too Aria!
+ <voice name="en-US-GuyNeural">
+ Good morning to you too Jenny!
  </voice>
</speak>
```
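Multi-voice SSML like the example above can also be assembled programmatically. This is a minimal sketch that only builds the SSML string (the voice short names follow the example; sending the string to the service is not shown):

```python
from xml.sax.saxutils import escape

def multi_voice_ssml(turns):
    """Build an SSML document from (voice_name, text) pairs."""
    body = "\n".join(
        f'  <voice name="{name}">{escape(text)}</voice>' for name, text in turns
    )
    return (
        '<speak version="1.0" '
        'xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">\n'
        f"{body}\n</speak>"
    )

ssml = multi_voice_ssml([
    ("en-US-JennyNeural", "Good morning!"),
    ("en-US-GuyNeural", "Good morning to you too Jenny!"),
])
```

`escape` guards against text that contains characters with special meaning in XML, such as `&` or `<`.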
The `s` element may contain text and the following elements: `audio`, `break`, `
```XML
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
    <p>
      <s>Introducing the sentence element.</s>
      <s>Used to mark individual sentences.</s>
Phonetic alphabets are composed of phones, which are made up of letters, numbers
```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
    <phoneme alphabet="ipa" ph="t&#x259;mei&#x325;&#x27E;ou&#x325;"> tomato </phoneme>
  </voice>
</speak>
Phonetic alphabets are composed of phones, which are made up of letters, numbers
```xml
<speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
    <phoneme alphabet="sapi" ph="iy eh n y uw eh s"> en-US </phoneme>
  </voice>
</speak>
Phonetic alphabets are composed of phones, which are made up of letters, numbers
```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
    <s>His name is Mike <phoneme alphabet="ups" ph="JH AU"> Zhou </phoneme></s>
  </voice>
</speak>
After you've published your custom lexicon, you can reference it from your SSML.
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
<lexicon uri="http://www.example.com/customlexicon.xml"/> BTW, we will be there probably at 8:00 tomorrow morning. Could you help leave a message to Robert Benigni for me?
Volume changes can be applied to standard voices at the word or sentence-level.
```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
<prosody volume="+20.00%"> Welcome to Microsoft Cognitive Services Text-to-Speech API. </prosody>
The speech synthesis engine speaks the following example as "Your first request
```XML
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
<p> Your <say-as interpret-as="ordinal"> 1st </say-as> request was for <say-as interpret-as="cardinal"> 1 </say-as> room on <say-as interpret-as="date" format="mdy"> 10/19/2010 </say-as>, with early arrival at <say-as interpret-as="time" format="hms12"> 12:35pm </say-as>.
Any audio included in the SSML document must meet these requirements:
```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-AriaRUS">
+ <voice name="en-US-JennyNeural">
<p> <audio src="https://contoso.com/opinionprompt.wav"/> Thanks for offering your opinion. Please begin speaking after the beep.
Only one background audio file is allowed per SSML document. However, you can in
```xml
<speak version="1.0" xml:lang="en-US" xmlns:mstts="http://www.w3.org/2001/mstts">
    <mstts:backgroundaudio src="https://contoso.com/sample.wav" volume="0.7" fadein="3000" fadeout="4000"/>
- <voice name="Microsoft Server Speech Text to Speech Voice (en-US, AriaRUS)">
+ <voice name="Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)">
        The text provided in this document will be spoken over the background audio.
    </voice>
</speak>
cognitive-services Cognitive Services Apis Create Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-apis-create-account-cli.md
Title: Create a Cognitive Services resource using the Azure CLI
-description: Get started with Azure Cognitive Services by creating and subscribing to a resource using the Azure command line interface.
+description: Get started with Azure Cognitive Services by creating and subscribing to a resource using the Azure command-line interface.
When creating a new resource, you will need to know the "kind" of service you wa
| Form Recognizer | `FormRecognizer` | | Ink Recognizer | `InkRecognizer` |
-### Search
-
-| Service | Kind |
-|--|--|
-| Bing Autosuggest | `Bing.Autosuggest.v7` |
-| Bing Custom Search | `Bing.CustomSearch` |
-| Bing Entity Search | `Bing.EntitySearch` |
-| Bing Search | `Bing.Search.v7` |
-| Bing Spell Check | `Bing.SpellCheck.v7` |
- ### Speech | Service | Kind |
Use the [az cognitiveservices account keys list](/cli/azure/cognitiveservices/ac
Pricing tiers (and the amount you get billed) are based on the number of transactions you send using your authentication information. Each pricing tier specifies the:

* maximum number of allowed transactions per second (TPS).
* service features enabled within the pricing tier.
-* The cost for a predefined amount of transactions. Going above this amount will cause an extra charge as specified in the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for your service.
+* The cost for a predefined number of transactions. Going above this amount will cause an extra charge as specified in the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for your service.
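The overage rule described above can be expressed directly. This sketch uses purely hypothetical numbers; the included-transaction count and rates are placeholders, not actual Azure prices (see the pricing page for real figures).

```python
def monthly_cost(transactions, base_fee, included, overage_rate):
    """base_fee covers `included` transactions; extras bill at overage_rate each."""
    extra = max(0, transactions - included)
    return base_fee + extra * overage_rate

# Hypothetical tier: $10 covers 10,000 transactions, $0.001 per extra one.
cost = monthly_cost(12_000, base_fee=10.0, included=10_000, overage_rate=0.001)
```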
## Get current quota usage for your resource
cognitive-services Cognitive Services Apis Create Account Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-apis-create-account-client-library.md
Previously updated : 09/14/2020 Last updated : 03/15/2021 zone_pivot_groups: programming-languages-set-ten
zone_pivot_groups: programming-languages-set-ten
Use this quickstart to create and manage Azure Cognitive Services resources using the Azure Management client library.
-Azure Cognitive Services are cloud-base services with REST APIs, and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
+Azure Cognitive Services is a family of cloud-based services with REST APIs and client libraries that help developers build cognitive intelligence into their applications, without requiring direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
Individual AI services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md) that you create under your Azure subscription. After you create a resource, you can use the keys and endpoint generated to authenticate your applications.
Individual AI services are represented by Azure [resources](../azure-resource-ma
[!INCLUDE [Python SDK quickstart](includes/quickstarts/management-python.md)]
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-apis-create-account.md
- Title: "Create a Cognitive Services resource in the Azure portal"-
-description: Get started with Azure Cognitive Services by creating and subscribing to a resource in the Azure portal.
---
-keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services
-- Previously updated : 09/14/2020---
-# Quickstart: Create a Cognitive Services resource using the Azure portal
-
-Use this quickstart to start using Azure Cognitive Services. After creating a Cognitive Service resource in the Azure portal, you'll get an endpoint and a key for authenticating your applications.
-
-Azure Cognitive Services are cloud-based services with REST APIs, and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
---
-## Prerequisites
-
-* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
-
-## Create a new Azure Cognitive Services resource
-
-1. Create a resource.
-
- #### [Multi-service resource](#tab/multiservice)
-
- The multi-service resource is named **Cognitive Services** in the portal. [Create a Cognitive Services resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne).
-
- At this time, the multi-service resource enables access to the following Cognitive
-
- - Computer Vision
- - Content Moderator
- - Face
- - Language Understanding (LUIS)
- - Text Analytics
- - Translator
- - Bing Search v7 <br>(Web, Image, News, Video, Visual)
- - Bing Custom Search
- - Bing Entity Search
- - Bing Autosuggest
- - Bing Spell Check
-
- #### [Single-service resource](#tab/singleservice)
-
- Use the below links to create a resource for the available Cognitive
-
- | Vision | Speech | Language | Decision | Search |
- |--|-|--|-||
- | [Computer vision](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) | [Speech Services](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) | [Immersive reader](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesImmersiveReader) | [Anomaly Detector](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) | [Bing Search API V7](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesBingSearch-v7) |
- | [Custom vision service](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesCustomVision) | [Speaker Recognition](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeakerRecognition) | [Language Understanding (LUIS)](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne) | [Content Moderator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) | [Bing Custom Search](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesBingCustomSearch) |
- | [Face](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFace) | | [QnA Maker](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) | [Personalizer](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer) | [Bing Entity Search](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesBingEntitySearch) |
- | [Ink Recognizer](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesInkRecognizer) | | [Text Analytics](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) | [Metrics Advisor](https://go.microsoft.com/fwlink/?linkid=2142156) | [Bing Spell Check](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesBingSpellCheck-v7) |
- | | | [Translator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) | | [Bing Autosuggest](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesBingAutosuggest-v7) |
-
- ***
-
-3. On the **Create** page, provide the following information:
-
- #### [Multi-service resource](#tab/multiservice)
-
- | | |
- |--|--|
- | **Name** | A descriptive name for your cognitive services resource. For example, *MyCognitiveServicesResource*. |
- | **Subscription** | Select one of your available Azure subscriptions. |
- | **Location** | The location of your cognitive service instance. Different locations may introduce latency, but have no impact on the runtime availability of your resource. |
- | **Pricing tier** | The cost of your Cognitive Services account depends on the options you choose and your usage. For more information, see the API [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/).
- | **Resource group** | The Azure resource group that will contain your Cognitive Services resource. You can create a new group or add it to a pre-existing group. |
-
- ![Multi-service resource resource creation screen](media/cognitive-services-apis-create-account/resource_create_screen-multi.png)
-
- Click **Create**.
-
- #### [Single-service resource](#tab/singleservice)
-
- | | |
- |--|--|
- | **Name** | A descriptive name for your cognitive services resource. For example, *TextAnalyticsResource*. |
- | **Subscription** | Select one of your available Azure subscriptions. |
- | **Location** | The location of your cognitive service instance. Different locations may introduce latency, but have no impact on the runtime availability of your resource. |
- | **Pricing tier** | The cost of your Cognitive Services account depends on the options you choose and your usage. For more information, see the API [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/).
- | **Resource group** | The Azure resource group that will contain your Cognitive Services resource. You can create a new group or add it to a pre-existing group. |
-
- ![Single-service resource creation screen](media/cognitive-services-apis-create-account/resource_create_screen.png)
-
- Click **Create**.
-
- ***
--
-## Get the keys for your resource
-
-1. After your resource is successfully deployed, click on **Go to resource** under **Next Steps**.
-
- ![Search for Cognitive Services](media/cognitive-services-apis-create-account/resource-next-steps.png)
-
-2. From the quickstart pane that opens, you can access your key and endpoint.
-
- ![Get key and endpoint](media/cognitive-services-apis-create-account/get-cog-serv-keys.png)
--
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources contained in the group.
-
-1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Resource Groups** to display the list of your resource groups.
-2. Locate the resource group containing the resource to be deleted
-3. Right-click on the resource group listing. Select **Delete resource group**, and confirm.
-
-## See also
-
-* [Authenticate requests to Azure Cognitive Services](authentication.md)
-* [What is Azure Cognitive Services?](./what-are-cognitive-services.md)
-* [Create a new resource using the Azure Management client library](.\cognitive-services-apis-create-account-client-library.md)
-* [Natural language support](language-support.md)
-* [Docker container support](cognitive-services-container-support.md)
+
+ Title: "Create a Cognitive Services resource in the Azure portal"
+
+description: Get started with Azure Cognitive Services by creating and subscribing to a resource in the Azure portal.
+++
+keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services
++ Last updated : 03/15/2021+++
+# Quickstart: Create a Cognitive Services resource using the Azure portal
+
+Use this quickstart to start using Azure Cognitive Services. After creating a Cognitive Service resource in the Azure portal, you'll get an endpoint and a key for authenticating your applications.
+
+Azure Cognitive Services are cloud-based services with REST APIs and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
++
+## Prerequisites
+
+* A valid Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+## Create a new Azure Cognitive Services resource
+
+1. Create a resource.
+
+### [Multi-service resource](#tab/multiservice)
+
+The multi-service resource is named **Cognitive Services** in the portal. [Create a Cognitive Services resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne).
+
+At this time, the multi-service resource enables access to the following Cognitive Services:
+
+* Computer Vision
+* Content Moderator
+* Face
+* Language Understanding (LUIS)
+* Text Analytics
+* Translator
+
+### [Single-service resource](#tab/singleservice)
+
+Use the links below to create a resource for the available Cognitive Services:
+
+| Vision | Speech | Language | Decision |
+|--|-|--|-|
+| [Computer vision](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) | [Speech Services](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) | [Immersive reader](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesImmersiveReader) | [Anomaly Detector](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) |
+| [Custom vision service](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesCustomVision) | [Speaker Recognition](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeakerRecognition) | [Language Understanding (LUIS)](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne) | [Content Moderator](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) |
+| [Face](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFace) | | [QnA Maker](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker) | [Personalizer](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer) |
+| [Ink Recognizer](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesInkRecognizer) | | [Text Analytics](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) | [Metrics Advisor](https://go.microsoft.com/fwlink/?linkid=2142156) |
+++
+2. On the **Create** page, provide the following information:
+<!-- markdownlint-disable MD024 -->
+
+### [Multi-service resource](#tab/multiservice)
+
+|Project details| Description |
+|--|--|
+| **Subscription** | Select one of your available Azure subscriptions. |
+| **Resource group** | The Azure resource group that will contain your Cognitive Services resource. You can create a new group or add it to a pre-existing group. |
+| **Region** | The location of your cognitive service instance. Different locations may introduce latency, but have no impact on the runtime availability of your resource. |
+| **Name** | A descriptive name for your cognitive services resource. For example, *MyCognitiveServicesResource*. |
+| **Pricing tier** | The cost of your Cognitive Services account depends on the options you choose and your usage. For more information, see the API [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/).
+
+![Multi-service resource resource creation screen](media/cognitive-services-apis-create-account/resource_create_screen-multi.png)
+
+Select **Create**.
+
+### [Single-service resource](#tab/singleservice)
+
+|Project details| Description |
+|--|--|
+| **Subscription** | Select one of your available Azure subscriptions. |
+| **Resource group** | The Azure resource group that will contain your Cognitive Services resource. You can create a new group or add it to a pre-existing group. |
+| **Region** | The location of your cognitive service instance. Different locations may introduce latency, but have no impact on the runtime availability of your resource. |
+| **Name** | A descriptive name for your cognitive services resource. For example, *MyCognitiveServicesResource*. |
+| **Pricing tier** | The cost of your Cognitive Services account depends on the options you choose and your usage. For more information, see the API [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/).
+
+![Single-service resource creation screen](media/cognitive-services-apis-create-account/resource_create_screen.png)
+
+Select **Create**.
++++
+## Get the keys for your resource
+
+1. After your resource is successfully deployed, click on **Go to resource** under **Next Steps**.
+
+ ![Search for Cognitive Services](media/cognitive-services-apis-create-account/resource-next-steps.png)
+
+2. From the quickstart pane that opens, you can access your key and endpoint.
+
+ ![Get key and endpoint](media/cognitive-services-apis-create-account/get-cog-serv-keys.png)
++
+## Clean up resources
+
+If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources contained in the group.
+
+1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Resource Groups** to display the list of your resource groups.
+2. Locate the resource group containing the resource to be deleted.
+3. Right-click on the resource group listing. Select **Delete resource group**, and confirm.
+
+## See also
+
+* [Authenticate requests to Azure Cognitive Services](authentication.md)
+* [What is Azure Cognitive Services?](./what-are-cognitive-services.md)
+* [Create a new resource using the Azure Management client library](./cognitive-services-apis-create-account-client-library.md)
+* [Natural language support](language-support.md)
+* [Docker container support](cognitive-services-container-support.md)
cognitive-services Build Training Data Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/build-training-data-set.md
Title: "How to build a training data set for a custom model - Form Recognizer" description: Learn how to ensure your training data set is optimized for training a Form Recognizer model.-+ Last updated 06/19/2019-+ #Customer intent: As a user of the Form Recognizer custom model service, I want to ensure I'm training my model in the best way. # Build a training data set for a custom model
-When you use the Form Recognizer custom model, you provide your own training data to the [Train Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/TrainCustomModelAsync) operation, so that the model can train to your industry-specific forms. Follow this guide to learn how to collect and prepare data to train the model effectively.
+When you use the Form Recognizer custom model, you provide your own training data to the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/TrainCustomModelAsync) operation, so that the model can train to your industry-specific forms. Follow this guide to learn how to collect and prepare data to train the model effectively.
You need at least five filled-in forms of the same type.
If you want to use manually labeled data, you'll also have to upload the *.label
### Organize your data in subfolders (optional)
-By default, the [Train Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/TrainCustomModelAsync) API will only use form documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. Normally, the body of the [Train Custom Model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/TrainCustomModelAsync) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container:
+By default, the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/TrainCustomModelAsync) API will only use form documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. Normally, the body of the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/TrainCustomModelAsync) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container:
```json
{
    "source": "<SAS URL>"
}
```
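
To pick up documents in subfolders, the same body can carry a `sourceFilter` object. The property names below follow the Train Custom Model API reference for v2.x, but treat this as a sketch and verify the names against your API version:

```json
{
    "source": "<SAS URL>",
    "sourceFilter": {
        "prefix": "",
        "includeSubFolders": true
    }
}
```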
cognitive-services Concept Business Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-business-cards.md
Title: Business cards - Form Recognizer
description: Learn concepts related to business card analysis with the Form Recognizer API - usage and limits. -+ - Previously updated : 08/17/2019- Last updated : 03/15/2021+ # Form Recognizer prebuilt business cards model
The Business Card API can also return all recognized text from the Business Card
### Input Requirements

## The Analyze Business Card operation
-The [Analyze Business Card](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeBusinessCardAsync) takes an image or PDF of a business card as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
+The [Analyze Business Card](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeBusinessCardAsync) takes an image or PDF of a business card as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
|Response header| Result URL |
|:--|:-|
-|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1-preview.2/prebuilt/businessCard/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
+|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1-preview.3/prebuilt/businessCard/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
## The Get Analyze Business Card Result operation
-The second step is to call the [Get Analyze Business Card Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/GetAnalyzeBusinessCardResult) operation. This operation takes as input the Result ID that was created by the Analyze Business Card operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
+The second step is to call the [Get Analyze Business Card Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/GetAnalyzeBusinessCardResult) operation. This operation takes as input the Result ID that was created by the Analyze Business Card operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
|Field| Type | Possible values |
|:--|:-:|:-|
|status | string | notStarted: The analysis operation has not started.<br /><br />running: The analysis operation is in progress.<br /><br />failed: The analysis operation has failed.<br /><br />succeeded: The analysis operation has succeeded.|
-When the **status** field has the **succeeded** value, the JSON response will include the business card understanding and optional text recognition results, if requested. The business card understanding result is organized as a dictionary of named field values, where each value contains the extracted text, normalized value, bounding box, confidence and corresponding word elements. The text recognition result is organized as a hierarchy of lines and words, with text, bounding box and confidence information.
+When the **status** field has the **succeeded** value, the JSON response will include the business card understanding and optional text recognition results, if requested. The business card understanding result is organized as a dictionary of named field values, where each value contains the extracted text, normalized value, bounding box, confidence, and corresponding word elements. The text recognition result is organized as a hierarchy of lines and words, with text, bounding box and confidence information.
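
The two calls above can be strung together in a short Python sketch. The endpoint host, key, and content type are placeholders you'd replace with your own resource's values; this is a minimal illustration of the POST-then-poll pattern, not a full client:

```python
import json
import time
import urllib.request

def result_id_from_operation_location(url: str) -> str:
    """The Operation-Location header ends with the Result ID (a GUID)."""
    return url.rstrip("/").rsplit("/", 1)[-1]

def analyze_business_card(endpoint: str, key: str, image_bytes: bytes) -> dict:
    """POST the image, then poll the Operation-Location URL until done."""
    request = urllib.request.Request(
        f"{endpoint}/formrecognizer/v2.1-preview.3/prebuilt/businessCard/analyze",
        data=image_bytes,
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "image/jpeg"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        operation_location = response.headers["Operation-Location"]
    while True:
        poll = urllib.request.Request(
            operation_location, headers={"Ocp-Apim-Subscription-Key": key}
        )
        with urllib.request.urlopen(poll) as response:
            result = json.load(response)
        if result["status"] in ("succeeded", "failed"):
            return result
        time.sleep(3)  # 3-5 second interval to stay under the RPS limit
```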
![sample business card output](./media/business-card-results.png)
Follow the [quickstart](./QuickStarts/client-library.md) to implement
## Customer Scenarios
-The data extracted with the Business Card API can be used to perform a variety of tasks. Extracting this contact info automatically saves time for those in client-facing roles. The following are a few examples of what our customers have accomplished with the Business Card API:
+The data extracted with the Business Card API can be used to perform various tasks. Extracting this contact info automatically saves time for users in client-facing roles. The following are a few examples of what our customers have accomplished with the Business Card API:
* Extract contact info from business cards and quickly create phone contacts.
* Integrate with CRM to automatically create a contact using business card images.
The Business Card API also powers the [AI Builder Business Card Processing featu
## See also * [What is Form Recognizer?](./overview.md)
-* [REST API reference docs](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeBusinessCardAsync)
+* [REST API reference docs](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeBusinessCardAsync)
cognitive-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-custom.md
+
+ Title: Custom models - Form Recognizer
+
+description: Learn concepts related to Form Recognizer API custom models- usage and limits.
+++++++ Last updated : 03/15/2021+++
+# Form Recognizer custom models
+
+Form Recognizer uses advanced machine learning technology to analyze and extract data from your forms and documents. A Form Recognizer model is a representation of extracted data that is used as a reference for analyzing your specific content. There are two types of Form recognizer models:
+
+* **Custom models**. Form Recognizer custom models represent extracted data from _forms_ specific to your business. Custom models must be trained to analyze your distinct form data.
+
+* **Prebuilt models**. Form Recognizer currently supports prebuilt models for _receipts, business cards, identification cards_, and _invoices_. Prebuilt models detect and extract information from document images and return the extracted data in a structured JSON output.
+
+## What does a custom model do?
+
+With Form Recognizer, you can train a model that will extract information from forms that are relevant for your use case. You only need five examples of the same form type to get started. Your custom model can be trained with or without labeled datasets.
+
+## Create, use, and manage your custom model
+
+At a high level, the steps for building, training, and using your custom model are as follows:
+
+> [!div class="nextstepaction"]
+> [1. Assemble your training dataset](build-training-data-set.md#custom-model-input-requirements)
+
+Building a custom model begins with establishing your training dataset. You'll need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types and contain both text and handwriting. Your forms must be of the same type of document and follow the [input requirements](build-training-data-set.md#custom-model-input-requirements) for Form Recognizer.
+&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&#129155;
+
+> [!div class="nextstepaction"]
+> [2. Upload your training dataset](build-training-data-set.md#upload-your-training-data)
+
+You'll need to upload your training data to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, *see* [Azure Storage quickstart for Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md). Use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&#129155;
+> [!div class="nextstepaction"]
+> [3. Train your custom model](quickstarts/client-library.md#train-a-custom-model)
+
+You can train your model [without](quickstarts/client-library.md#train-a-model-without-labels) or [with](quickstarts/client-library.md#train-a-model-with-labels) labeled data sets. Unlabeled datasets rely solely on the Layout API to detect and identify key information without added human input. Labeled datasets also rely on the Layout API, but supplementary human input is included such as your specific labels and field locations. To use both labeled and unlabeled data, start with at least five completed forms of the same type for the labeled training data and then add unlabeled data to the required data set.
+&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&#129155;
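
For reference, labeled training is requested through the same Train Custom Model request body — a sketch assuming the v2.x `useLabelFile` property (verify against your API version's reference):

```json
{
    "source": "<SAS URL>",
    "useLabelFile": true
}
```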
+
+>[!div class="nextstepaction"]
+> [4. Analyze documents with your custom model](quickstarts/client-library.md#analyze-forms-with-a-custom-model)
+
+Test your newly trained model by using a form that wasn't part of the training dataset. You can continue to do further training to improve the performance of your custom model.
+&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&#129155;
+
+> [!div class="nextstepaction"]
+> [5. Manage your custom models](quickstarts/client-library.md#manage-custom-models)
+
+At any time, you can view a list of all the custom models under your subscription, retrieve information about a specific custom model, or delete a custom model from your account.
+
+## Next steps
+
+View **[Form Recognizer API reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5ed8c9843c2794cbb1a96291)** documentation to learn more.
cognitive-services Concept Identification Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-identification-cards.md
+
+ Title: IDs - Form Recognizer
+
+description: Learn concepts related to data extraction from identity documents with the Form Recognizer Pre-built IDs API.
+++++++ Last updated : 03/15/2021+++
+# Form Recognizer prebuilt identification card (ID) model
+
+Azure Form Recognizer can analyze and extract information from government identification cards (IDs) using its prebuilt IDs model. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with ID recognition capabilities to extract key information from Worldwide Passports and U.S. Driver's Licenses (all 50 states and D.C.). The IDs API extracts key information from these identity documents, such as first name, last name, date of birth, document number, and more. This API is available in the Form Recognizer v2.1 preview as a cloud service and as an on-premises container.
+
+## What does the ID service do?
+
+The prebuilt IDs service extracts the key values from worldwide passports and U.S. Driver's Licenses and returns them in an organized structured JSON response.
+
+![Sample Driver's License](./media/id-example-drivers-license.JPG)
+
+![Sample Passport](./media/id-example-passport-result.JPG)
+
+### Fields extracted
+
+|Name| Type | Description | Value |
+|:--|:-|:-|:-|
+| Country | country | Country code compliant with ISO 3166 standard | "USA" |
+| DateOfBirth | date | DOB in YYYY-MM-DD format | "1980-01-01" |
+| DateOfExpiration | date | Expiration date in YYYY-MM-DD format | "2019-05-05" |
+| DocumentNumber | string | Relevant passport number, driver's license number, etc. | "340020013" |
+| FirstName | string | Extracted given name and middle initial if applicable | "JENNIFER" |
+| LastName | string | Extracted surname | "BROOKS" |
+| Nationality | country | Country code compliant with ISO 3166 standard | "USA" |
+| Sex | gender | Possible extracted values include "M", "F" and "X" | "F" |
+| MachineReadableZone | object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
+| DocumentType | string | Document type, for example, Passport, Driver's License | "passport" |
+| Address | string | Extracted address (Driver's License only) | "123 STREET ADDRESS YOUR CITY WA 99999-1234"|
+| Region | string | Extracted region, state, province, etc. (Driver's License only) | "Washington" |
+
+### Additional features
+
+The IDs API also returns the following information:
+
+* Field confidence level (each field returns an associated confidence value)
+* OCR raw text (OCR-extracted text output for the entire receipt)
+* Bounding box of each extracted field in U.S. Driver's Licenses
+* Bounding box for Machine Readable Zone (MRZ) on Passports
+
+ > [!NOTE]
+ > Pre-built IDs does not detect ID authenticity
+ >
+ > Form Recognizer Pre-built IDs extracts key data from ID data. However, it does not detect the validity or authenticity of the original identity document.
+
+## Try it out
+
+To try out the Form Recognizer IDs service, go to the online Sample UI Tool:
+
+> [!div class="nextstepaction"]
+> [Try Prebuilt Models](https://fott-preview.azurewebsites.net/)
+
+## Input requirements
++
+## Supported ID types
+
+* **Pre-built IDs v2.1-preview.3** extracts key values from worldwide passports and U.S. Driver's Licenses.
+
+ > [!NOTE]
+ > ID type support
+ >
+ > Currently supported ID types include worldwide passport and U.S. Driver's Licenses. We are actively seeking to expand our ID support to other identity documents around the world.
+
+## POST Analyze Id Document
+
+The [Analyze ID](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5f74a7daad1f2612c46f5822) operation takes an image or PDF of an ID as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
+
+|Response header| Result URL |
+|:--|:-|
+|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1-preview.3/prebuilt/idDocument/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
+
+## GET Analyze Id Document Result
+
+<!--
+Need to update this with updated APIM links when available
+-->
+
+The second step is to call the [**Get Analyze idDocument Result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/GetAnalyzeFormResult) operation. This operation takes as input the Result ID that was created by the Analyze ID operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
+
+|Field| Type | Possible values |
+|:--|:-:|:-|
+|status | string | notStarted: The analysis operation has not started. |
+| | | running: The analysis operation is in progress. |
+| | | failed: The analysis operation has failed. |
+| | | succeeded: The analysis operation has succeeded. |
+
+When the **status** field has the **succeeded** value, the JSON response will include the ID document understanding and text recognition results. The ID results are organized as a dictionary of named field values, where each value contains the extracted text, normalized value, bounding box, confidence, and corresponding word elements. The text recognition result is organized as a hierarchy of lines and words, with text, bounding box, and confidence information.
+
+![Sample passport result](./media/id-example-passport-result.JPG)
+
+### Sample JSON output
+
+The `readResults` node contains all of the recognized text. Text is organized by page, then by line, then by individual words. The `documentResults` node contains the ID values that the model discovered. This node is also where you'll find useful key/value pairs like the first name, last name, document number, and more.
+
+See the following example of a successful JSON response:
+
+```json
+{
+ "status": "succeeded",
+ "createdDateTime": "2021-03-04T22:29:33Z",
+ "lastUpdatedDateTime": "2021-03-04T22:29:36Z",
+ "analyzeResult": {
+ "version": "2.1.0",
+ "readResults": [
+ {
+ "page": 1,
+ "angle": 0.3183,
+ "width": 549,
+ "height": 387,
+ "unit": "pixel",
+ "lines": [
+ {
+ "text": "PASSPORT",
+ "boundingBox": [
+ 57,
+ 10,
+ 120,
+ 11,
+ 119,
+ 22,
+ 57,
+ 22
+ ],
+ "words": [
+ {
+ "text": "PASSPORT",
+ "boundingBox": [
+ 57,
+ 11,
+ 119,
+ 11,
+ 118,
+ 23,
+ 57,
+ 22
+ ],
+ "confidence": 0.994
+ }
+ ],
+ ...
+ }
+ ],
+
+ "documentResults": [
+ {
+ "docType": "prebuilt:idDocument:passport",
+ "docTypeConfidence": 0.995,
+ "pageRange": [
+ 1,
+ 1
+ ],
+ "fields": {
+ "Country": {
+ "type": "country",
+ "valueCountry": "USA",
+ "text": "USA"
+ },
+ "DateOfBirth": {
+ "type": "date",
+ "valueDate": "1980-01-01",
+ "text": "800101"
+ },
+ "DateOfExpiration": {
+ "type": "date",
+ "valueDate": "2019-05-05",
+ "text": "190505"
+ },
+ "DocumentNumber": {
+ "type": "string",
+ "valueString": "340020013",
+ "text": "340020013"
+ },
+ "FirstName": {
+ "type": "string",
+ "valueString": "JENNIFER",
+ "text": "JENNIFER"
+ },
+ "LastName": {
+ "type": "string",
+ "valueString": "BROOKS",
+ "text": "BROOKS"
+ },
+ "Nationality": {
+ "type": "country",
+ "valueCountry": "USA",
+ "text": "USA"
+ },
+ "Sex": {
+ "type": "gender",
+ "valueGender": "F",
+ "text": "F"
+ },
+ "MachineReadableZone": {
+ "type": "object",
+ "text": "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816",
+ "boundingBox": [
+ 16,
+ 314.1,
+ 504.2,
+ 317,
+ 503.9,
+ 363,
+ 15.7,
+ 360.1
+ ],
+ "page": 1,
+ "confidence": 0.384,
+ "elements": [
+ "#/readResults/0/lines/33/words/0",
+ "#/readResults/0/lines/33/words/1",
+ "#/readResults/0/lines/33/words/2",
+ "#/readResults/0/lines/33/words/3",
+ "#/readResults/0/lines/33/words/4",
+ "#/readResults/0/lines/34/words/0"
+ ]
+ },
+ "DocumentType": {
+ "type": "string",
+ "text": "passport",
+ "confidence": 0.995
+ }
+ }
+ }
+ ]
+ }
+}
+```
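
Once the response arrives, the named fields are easy to flatten into plain key/value pairs — a minimal sketch against the `documentResults` shape shown in the sample above:

```python
def id_fields(response: dict) -> dict:
    """Flatten the documentResults fields of an Analyze ID response
    into simple {field name: extracted text} pairs."""
    fields = response["analyzeResult"]["documentResults"][0]["fields"]
    return {name: value.get("text") for name, value in fields.items()}

# Trimmed-down response shaped like the sample JSON above.
sample = {
    "analyzeResult": {
        "documentResults": [
            {"fields": {
                "FirstName": {"type": "string", "valueString": "JENNIFER", "text": "JENNIFER"},
                "LastName": {"type": "string", "valueString": "BROOKS", "text": "BROOKS"},
                "DocumentNumber": {"type": "string", "valueString": "340020013", "text": "340020013"},
            }}
        ]
    }
}

print(id_fields(sample))  # {'FirstName': 'JENNIFER', 'LastName': 'BROOKS', 'DocumentNumber': '340020013'}
```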
++
+## Next steps
+
+- Try your own IDs and samples in the [Form Recognizer Sample UI](https://fott-preview.azurewebsites.net/).
+- Complete a [Form Recognizer quickstart](quickstarts/client-library.md) to get started writing an ID processing app with Form Recognizer in the development language of your choice.
+
+## See also
+
+* [**What is Form Recognizer?**](./overview.md)
+* [**REST API reference docs**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeWithCustomForm)
cognitive-services Concept Invoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-invoices.md
Title: Invoices - Form Recognizer
description: Learn concepts related to invoice analysis with the Form Recognizer API - usage and limits. -+ Previously updated : 11/18/2020- Last updated : 03/15/2021+ # Form Recognizer prebuilt invoice model
-Azure Form Recognizer can analyze and extract information from sales invoices using its prebuilt invoice models. The Invoice API enables customers to take invoices in a variety of formats and return structured data to automate the invoice processing. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts the text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, invoice amount due, tax amount, ship to, bill to, and more. The prebuilt Invoice API is publicly available in the Form Recognizer v2.1 preview.
+Azure Form Recognizer can analyze and extract information from sales invoices using its prebuilt invoice models. The Invoice API enables customers to take invoices in a variety of formats and return structured data to automate the invoice processing. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts the text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, invoice amount due, tax amount, ship to, bill to, line items and more. The prebuilt Invoice API is publicly available in the Form Recognizer v2.1 preview.
## What does the Invoice service do?
-The Invoice API extracts key fields from invoices and returns them in an organized structured JSON response. Invoices can be from a variety of formats and quality, including phone-captured images, scanned documents, and digital PDFs. The invoice API will extract the structured output from all of these invoices.
+The Invoice API extracts key fields and line items from invoices and returns them in an organized structured JSON response. Invoices can be from a variety of formats and quality, including phone-captured images, scanned documents, and digital PDFs. The invoice API will extract the structured output from all of these invoices.
-![Contoso invoice example](./media/invoice-example.jpg)
+![Contoso invoice example](./media/invoice-example-new.jpg)
## Try it out
To try out the Form Recognizer Invoice Service, go to the online Sample UI Tool:
You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer Invoice service.
-![Analyzed invoice example](./media/analyze-invoice.png)
-
-### Input requirements
+### Input requirements
[!INCLUDE [input requirements](./includes/input-requirements-receipts.md)]

## The Analyze Invoice operation
-The [Analyze Invoice](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/5ed8c9843c2794cbb1a96291) operation takes an image or PDF of an invoice as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
+The [Analyze Invoice](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5ed8c9843c2794cbb1a96291) operation takes an image or PDF of an invoice as the input and extracts the values of interest. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
|Response header| Result URL |
|:--|:-|
-|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1-preview.2/prebuilt/invoice/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
+|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1-preview.3/prebuilt/invoice/analyzeResults/49a36324-fc4b-4387-aa06-090cfbf0064f` |
## The Get Analyze Invoice Result operation
-The second step is to call the [Get Analyze Invoice Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/5ed8c9acb78c40a2533aee83) operation. This operation takes as input the Result ID that was created by the Analyze Invoice operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
+The second step is to call the [Get Analyze Invoice Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5ed8c9acb78c40a2533aee83) operation. This operation takes as input the Result ID that was created by the Analyze Invoice operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
|Field| Type | Possible values |
|:--|:-:|:-|
|status | string | notStarted: The analysis operation has not started.<br /><br />running: The analysis operation is in progress.<br /><br />failed: The analysis operation has failed.<br /><br />succeeded: The analysis operation has succeeded.|
-When the **status** field has the **succeeded** value, the JSON response will include the invoice understanding results, tables extracted and optional text recognition results, if requested. The invoice understanding result is organized as a dictionary of named field values, where each value contains the extracted text, normalized value, bounding box, confidence and corresponding word elements. The text recognition result is organized as a hierarchy of lines and words, with text, bounding box and confidence information.
+When the **status** field has the **succeeded** value, the JSON response will include the invoice understanding results, extracted tables, and optional text recognition results, if requested. The invoice understanding result is organized as a dictionary of named field values, where each value contains the extracted text, normalized value, bounding box, confidence, and corresponding word elements. It also includes the extracted line items, where each line item contains the amount, description, unit price, quantity, and so on. The text recognition result is organized as a hierarchy of lines and words, with text, bounding box, and confidence information.
### Sample JSON output

The response to the Get Analyze Invoice Result operation will be the structured representation of the invoice with all the information extracted.
-See here for a [sample invoice file](./media/sample-invoice.jpg) and its structured output [sample invoice output](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-invoice-output.json).
+See here for a [sample invoice file](media/sample-invoice.jpg) and its structured output [sample invoice output](media/invoice-example-new.jpg).
The JSON output has 3 parts:

* `"readResults"` node contains all of the recognized text and selection marks. Text is organized by page, then by line, then by individual words.
* `"pageResults"` node contains the tables and cells extracted with their bounding boxes, confidence and a reference to the lines and words in "readResults".
-* `"documentResults"` node contains the invoice specific values that the model discovered. This is where you'll find all the fields from the invoice such as invoice ID, ship to, bill to, customer, total and lots more.
+* `"documentResults"` node contains the invoice specific values and line items that the model discovered. This is where you'll find all the fields from the invoice such as invoice ID, ship to, bill to, customer, total, line items and lots more.
## Example output
-The Invoice service will extract the text, tables and 26 invoice fields. Following are the fields extracted from an invoice in the JSON output response (the output below uses this [sample invoice](./media/sample-invoice.jpg))
+The Invoice service will extract the text, tables and 26 invoice fields. Following are the fields extracted from an invoice in the JSON output response (the output below uses this [sample invoice](media/sample-invoice.jpg)).
|Name| Type | Description | Text | Value (standardized output) |
|:--|:-|:-|:-|:-|
| CustomerName | string | Customer being invoiced | Microsoft Corp | |
| CustomerId | string | Reference ID for the customer | CID-12345 | |
-| PurchaseOrder | string | A purchase order reference number | PO-3333 | | |
-| InvoiceId | string | ID for this specific invoice (often "Invoice Number") | INV-100 | | |
-| InvoiceDate | date | Date the invoice was issued | 11/15/2019 | 2019-11-15 |
+| PurchaseOrder | string | A purchase order reference number | PO-3333 | |
+| InvoiceId | string | ID for this specific invoice (often "Invoice Number") | INV-100 | |
+| InvoiceDate | date | Date the invoice was issued | 11/15/2019 | 2019-11-15 |
| DueDate | date | Date payment for this invoice is due | 12/15/2019 | 2019-12-15 |
| VendorName | string | Vendor who has created this invoice | CONTOSO LTD. | |
| VendorAddress | string | Mailing address for the Vendor | 123 456th St New York, NY, 10001 | |
| ServiceEndDate | date | End date for the service period (for example, a utility bill service period) | 11/14/2019 | 2019-11-14 |
| PreviousUnpaidBalance | number | Explicit previously unpaid balance | $500.00 | 500 |
+Following are the line items extracted from an invoice in the JSON output response (the output below uses this [sample invoice](./media/sample-invoice.jpg)).
+
+|Name| Type | Description | Text (line item #1) | Value (standardized output) |
+|:--|:-|:-|:-| :-|
+| Items | string | Full string text line of the line item | 3/4/2021 A123 Consulting Services 2 hours $30.00 10% $60.00 | |
+| Amount | number | The amount of the line item | $60.00 | 60 |
+| Description | string | The text description for the invoice line item | Consulting service | Consulting service |
+| Quantity | number | The quantity for this invoice line item | 2 | 2 |
+| UnitPrice | number | The net or gross price (depending on the gross invoice setting of the invoice) of one unit of this item | $30.00 | 30 |
+| ProductCode | string| Product code, product number, or SKU associated with the specific line item | A123 | |
+| Unit | string| The unit of the line item, e.g. kg, lb, etc. | hours | |
+| Date | date| Date corresponding to each line item. Often this is a date the line item was shipped | 3/4/2021| 2021-03-04 |
+| Tax | number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |
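The line items above arrive as an `Items` array field whose entries are object fields. The following is a minimal sketch for flattening them into plain dictionaries, assuming the `valueArray`/`valueObject` nesting shown in the v2.1 sample output; the helper name is illustrative.

```python
def invoice_line_items(response):
    """Flatten the `Items` array of an Analyze Invoice result into dicts.

    Assumes each line item is an object field (valueObject) inside the
    `Items` array field (valueArray), per the v2.1 sample output.
    """
    items = []
    for doc in response.get("analyzeResult", {}).get("documentResults", []):
        array = doc.get("fields", {}).get("Items", {}).get("valueArray", [])
        for entry in array:
            row = {}
            for name, field in entry.get("valueObject", {}).items():
                # Prefer the normalized value (valueNumber, valueDate, ...)
                # over the raw text when one is present.
                value_keys = [k for k in field if k.startswith("value")]
                row[name] = field[value_keys[0]] if value_keys else field.get("text")
            items.append(row)
    return items
```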
+ ## Next steps
## See also

* [What is Form Recognizer?](./overview.md)
-* [REST API reference docs](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/5ed8c9843c2794cbb1a96291)
+* [REST API reference docs](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5ed8c9843c2794cbb1a96291)
cognitive-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-layout.md
Title: Layouts - Form Recognizer
description: Learn concepts related to layout analysis with the Form Recognizer API - usage and limits.
Previously updated : 11/18/2020
Last updated : 03/15/2021
# Form Recognizer Layout service
-Azure Form Recognizer can extract text, tables, selection marks, and structure information from documents using its Layout service. The Layout API enables customers to take documents in a variety of formats and return structured data and representation of the document. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with document understanding deep learning models to extract text, tables, selection marks, and structure of documents.
+Azure Form Recognizer can extract text, tables, selection marks, and structure information from documents using its Layout service. The Layout API enables customers to take documents in a variety of formats and return structured data representations of the documents. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with deep learning models to extract text, tables, selection marks, and document structure.
## What does the Layout service do?
-The Layout API extracts text, tables, selection marks, and structure information from documents with exceptional accuracy and returns them in an organized structured JSON response. Documents can be from a variety of formats and quality, including phone-captured images, scanned documents, and digital PDFs. The Layout API will extract the structured output from all of these documents.
+The Layout API extracts text, tables, selection marks, and structure information from documents with exceptional accuracy and returns an organized, structured, JSON response. Documents can be of a variety of formats and quality, including phone-captured images, scanned documents, and digital PDFs. The Layout API will accurately extract the structured output from all of these documents.
![Layout example](./media/layout-tool-example.JPG)
To try out the Form Recognizer Layout Service, go to the online sample UI tool:

> [!div class="nextstepaction"]
-> [Sample UI](https://fott-preview.azurewebsites.net/)
+> [Form OCR Test Tool (FOTT)](https://fott-preview.azurewebsites.net)
You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer Layout API.
## The Analyze Layout operation
-The [Analyze Layout](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeLayoutAsync) operation takes a document (image, TIFF, or PDF file) as the input and extracts the text, tables, selection marks, and structure of the document. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
+First, call the [Analyze Layout](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeLayoutAsync) operation. Analyze Layout takes a document (image, TIFF, or PDF file) as the input and extracts the text, tables, selection marks, and structure of the document. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
|Response header| Result URL |
|:--|:-|
-|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1-preview.2/prebuilt/layout/analyzeResults/44a436324-fc4b-4387-aa06-090cfbf0064f` |
+|Operation-Location | `https://cognitiveservice/formrecognizer/v2.1-preview.3/prebuilt/layout/analyzeResults/44a436324-fc4b-4387-aa06-090cfbf0064f` |
+
+### Natural reading order output (Latin only)
+
+You can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
+### Select page numbers or ranges for text extraction
+
+For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction. The following example shows a document with 10 pages, with text extracted for both cases - all pages (1-10) and selected pages (3-6).
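Both query parameters can be appended to the Analyze Layout request URL. The following is a small, hypothetical helper (not part of any SDK); the route follows the v2.1-preview.3 URL pattern used elsewhere on this page.

```python
from urllib.parse import urlencode

def analyze_layout_url(endpoint, reading_order=None, pages=None):
    # Hypothetical helper: builds the Analyze Layout request URL with the
    # optional `readingOrder` and `pages` query parameters described above.
    base = endpoint.rstrip("/") + "/formrecognizer/v2.1-preview.3/layout/analyze"
    params = {}
    if reading_order:
        params["readingOrder"] = reading_order  # "basic" (default) or "natural"
    if pages:
        params["pages"] = pages                 # e.g. "3-6" or "1,3,5-9"
    return base + ("?" + urlencode(params) if params else "")
```

For example, requesting natural reading order for pages 3 through 6 appends `?readingOrder=natural&pages=3-6`.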
+ ## The Get Analyze Layout Result operation
-The second step is to call the [Get Analyze Layout Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/GetAnalyzeLayoutResult) operation. This operation takes as input the Result ID that was created by the Analyze Layout operation. It returns a JSON response that contains a **status** field with the following possible values.
+The second step is to call the [Get Analyze Layout Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/GetAnalyzeLayoutResult) operation. This operation takes as input the Result ID that was created by the Analyze Layout operation. It returns a JSON response that contains a **status** field with the following possible values.
|Field| Type | Possible values |
|:--|:-:|:-|
|status | string | `notStarted`: The analysis operation has not started.<br /><br />`running`: The analysis operation is in progress.<br /><br />`failed`: The analysis operation has failed.<br /><br />`succeeded`: The analysis operation has succeeded.|
-You call this operation iteratively until it returns with the `succeeded` value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
+Call this operation iteratively until it returns the `succeeded` value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
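As a sketch of the polling loop just described: `get_status` below stands in for whatever issues the GET request with the Result ID (a hypothetical callable, not part of the API), and the interval defaults to the 3-second floor recommended above.

```python
import time

def poll_result(get_status, interval=3.0, max_attempts=40):
    """Poll a Get Analyze ... Result operation until it finishes.

    `get_status` is any callable returning the parsed JSON response;
    a real client would issue the GET with the Result ID inside it.
    """
    for _ in range(max_attempts):
        result = get_status()
        # Stop on either terminal status; "notStarted" and "running" keep polling.
        if result.get("status") in ("succeeded", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError("analysis did not finish in time")
```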
+
+When the **status** field has the `succeeded` value, the JSON response will include the extracted layout, text, tables, and selection marks. The extracted data includes extracted text lines and words, bounding boxes, text appearance with handwritten indication, tables, and selection marks with selected/unselected indicated.
+
+### Handwritten classification for text lines (Latin only)
-When the **status** field has the `succeeded` value, the JSON response will include the layout extraction results, text, tables, and selection marks extracted. The extracted data contains the extracted text lines and words, bounding box, text appearance handwritten indication, tables, and selection marks with an indication of selected/unselected.
+The response classifies whether each text line is handwritten, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
+ ### Sample JSON output
-The response to the Get Analyze Layout Result operation will be the structured representation of the document with all the information extracted.
+The response to the *Get Analyze Layout Result* operation is a structured representation of the document with all the information extracted.
See here for a [sample document file](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-layout.pdf) and its structured output [sample layout output](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/tree/master/curl/form-recognizer/sample-layout-output.json).
-The JSON output has two parts:
-* `"readResults"` node contains all of the recognized text and selection marks. Text is organized by page, then by line, then by individual words.
-* `"pageResults"` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in "readResults".
+The JSON output has two parts:
+
+* `readResults` node contains all of the recognized text and selection marks. Text is organized by page, then by line, then by individual words.
+* `pageResults` node contains the tables and cells extracted with their bounding boxes, confidence, and a reference to the lines and words in "readResults".
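To illustrate how the table data in `pageResults` fits together, here is a minimal sketch that rebuilds each table as a rows-by-columns grid of cell text, assuming the cell shape (`rowIndex`, `columnIndex`, `text`) shown in the sample layout output; the helper name is illustrative.

```python
def extract_tables(response):
    """Rebuild each extracted table as a rows x columns grid of cell text.

    Assumes the v2.1 layout shape: analyzeResult.pageResults[].tables[],
    where every cell carries rowIndex, columnIndex, and text.
    """
    grids = []
    for page in response.get("analyzeResult", {}).get("pageResults", []):
        for table in page.get("tables", []):
            # Start from an empty grid so sparse cells leave blanks.
            grid = [[""] * table["columns"] for _ in range(table["rows"])]
            for cell in table.get("cells", []):
                grid[cell["rowIndex"]][cell["columnIndex"]] = cell.get("text", "")
            grids.append(grid)
    return grids
```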
## Example Output

### Text
-Layout extracts text from documents (PDF, TIFF) and images (jpg, png, bmp) with different text angles, colors, angles, photos of documents, faxes, printed, handwritten (English only) and mixed modes. Text is extracted with information on lines, words, bounding boxes, confidence scores, and style (handwritten or other). All the text information is included in the `"readResults"` section of the JSON output.
+Layout API extracts text from documents (PDF, TIFF) and images (JPG, PNG, BMP) with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted with information provided on lines, words, bounding boxes, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
### Tables
-Layout extracts tables from documents (PDF, TIFF) and images (jpg, png, bmp). Documents can be scanned, photographed, or digitized. Tables can be complex tables with merged cells or columns, with or without borders, and with odd angles. Extracted tables include the number of columns and rows, row span, and column span. Each cell is extracted with its bounding box and reference to the text extracted in the `"readResults"` section. Table information is located in the `"pageResults"` section of the JSON output.
+Layout API extracts tables from documents (PDF, TIFF) and images (JPG, PNG, BMP). Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell is extracted with its bounding box and reference to the text extracted in the `readResults` section. Table information is located in the `pageResults` section of the JSON output.
![Tables example](./media/tables-example.jpg)

### Selection marks
-Layout also extracts selection marks from documents. Selection marks extracted include the bounding box, confidence, and state (selected/unselected). Selection mark information is extracted in the `"readResults"` section of the JSON output.
+Layout API also extracts selection marks from documents. Extracted selection marks include the bounding box, confidence, and state (selected/unselected). Selection mark information is extracted in the `readResults` section of the JSON output.
## Next steps

-- Try your own layout extraction using the [Form Recognizer Sample UI](https://fott-preview.azurewebsites.net/)
-- Complete a [Form Recognizer quickstart](quickstarts/client-library.md) to get started extracting layouts in the development language of your choice.
+* Try your own layout extraction using the [Form Recognizer Sample UI tool](https://fott-preview.azurewebsites.net/)
+* Complete a [Form Recognizer quickstart](quickstarts/client-library.md) to get started extracting layouts in the development language of your choice.
## See also

* [What is Form Recognizer?](./overview.md)
-* [REST API reference docs](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeLayoutAsync)
+* [REST API reference docs](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeLayoutAsync)
cognitive-services Concept Receipts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-receipts.md
Title: Receipts - Form Recognizer
description: Learn concepts related to receipt analysis with the Form Recognizer API - usage and limits.
Previously updated : 08/17/2019
Last updated : 03/15/2021
# Form Recognizer prebuilt receipt model
-Azure Form Recognizer can analyze and extract information from sales receipts using its prebuilt receipt model. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with receipt understanding deep learning models to extract key information from receipts in English. The Receipt API extracts key information from sales receipts in English, such as merchant name, transaction date, transaction total, line items, and more.
+Azure Form Recognizer can analyze and extract information from sales receipts using its prebuilt receipt model. It combines our powerful [Optical Character Recognition (OCR)](../computer-vision/concept-recognizing-text.md) capabilities with deep learning models to extract key information from receipts written in English.
-## Understanding Receipts
+## Understanding Receipts
-Many businesses and individuals still rely on manually extracting data from their sales receipts, whether for business expense reports, reimbursements, auditing, tax purposes, budgeting, marketing or other purposes. Often in these scenarios, images of the physical receipt are required for validation purposes.
+Many businesses and individuals still rely on manually extracted data from sales receipts. Automatically extracting data from these receipts can be complicated. Receipts may be crumpled, hard to read, have handwritten parts and contain low-quality smartphone images. Also, receipt templates and fields can vary greatly by market, region, and merchant. These data extraction and field detection challenges make receipt processing a unique problem.
-Automatically extracting data from these Receipts can be complicated. Receipts may be crumpled and hard to read, printed or handwritten parts and smartphone images of receipts may be low quality. Also, receipt templates and fields can vary greatly by market, region, and merchant. These challenges in both data extraction and field detection make receipt processing a unique problem.
-
-Using Optical Character Recognition (OCR) and our prebuilt receipt model, the Receipt API enables these receipt processing scenarios and extract data from the receipts e.g merchant name, tip, total, line items and more. With this API there is no need to train a model, just send the receipt image to the Analyze Receipt API and the data is extracted.
+The Receipt API uses Optical Character Recognition (OCR) and our prebuilt model to enable a wide range of receipt processing scenarios. With the Receipt API there is no need to train a model. Send the receipt image to the Analyze Receipt API and the data is extracted.
![sample receipt](./media/receipts-example.jpg)
To try out the Form Recognizer receipt service, go to the online Sample UI Tool:
## Input requirements

## Supported locales

* **Pre-built Receipt v2.0** (GA) supports sales receipts in the EN-US locale
-* **Pre-built Receipt v2.1-preview.2** (Public Preview) adds additional support for the following EN receipt locales:
+* **Pre-built Receipt v2.1-preview.3** (Public Preview) adds additional support for the following EN receipt locales:
* EN-AU
* EN-CA
* EN-GB
> [!NOTE]
> Language input
>
- > Prebuilt Receipt v2.1-preview.2 has an optional request parameter to specify a receipt locale from additional English markets. For sales receipts in English from Australia (EN-AU), Canada (EN-CA), Great Britain (EN-GB), and India (EN-IN), you can specify the locale to get improved results. If no locale is specified in v2.1-preview.2, the model will default to the EN-US model.
+ > Prebuilt Receipt v2.1-preview.3 has an optional request parameter to specify a receipt locale from additional English markets. For sales receipts in English from Australia (EN-AU), Canada (EN-CA), Great Britain (EN-GB), and India (EN-IN), you can specify the locale to get improved results. If no locale is specified in v2.1-preview.3, the model will default to the EN-US model.
## The Analyze Receipt operation
-The [Analyze Receipt](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeReceiptAsync) takes an image or PDF of a receipt as the input and extracts the values of interest and text. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
+The [Analyze Receipt](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeReceiptAsync) takes an image or PDF of a receipt as the input and extracts the values of interest and text. The call returns a response header field called `Operation-Location`. The `Operation-Location` value is a URL that contains the Result ID to be used in the next step.
|Response header| Result URL |
|:--|:-|
## The Get Analyze Receipt Result operation
-The second step is to call the [Get Analyze Receipt Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/GetAnalyzeReceiptResult) operation. This operation takes as input the Result ID that was created by the Analyze Receipt operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
+The second step is to call the [Get Analyze Receipt Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/GetAnalyzeReceiptResult) operation. This operation takes as input the Result ID that was created by the Analyze Receipt operation. It returns a JSON response that contains a **status** field with the following possible values. You call this operation iteratively until it returns with the **succeeded** value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
|Field| Type | Possible values |
|:--|:-:|:-|
-|status | string | notStarted: The analysis operation has not started. |
+|status | string | notStarted: The operation hasn't started. |
| | | running: The analysis operation is in progress. |
| | | failed: The analysis operation has failed. |
| | | succeeded: The analysis operation has succeeded. |
-When the **status** field has the **succeeded** value, the JSON response will include the receipt understanding and text recognition results. The receipt understanding result is organized as a dictionary of named field values, where each value contains the extracted text, normalized value, bounding box, confidence and corresponding word elements. The text recognition result is organized as a hierarchy of lines and words, with text, bounding box and confidence information.
+When the **status** field has the **succeeded** value, the JSON response will include the receipt understanding and text recognition results. The receipt understanding result is organized as a dictionary of named field values. Each value contains the extracted text, normalized value, bounding box, confidence and corresponding word elements. The text recognition result is organized as a hierarchy of lines and words, with text, bounding box and confidence information.
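Since each receipt field carries its own confidence, one common pattern is to split the extracted fields at a confidence threshold, keeping high-confidence values and routing the rest to manual review. A sketch, assuming the field shape described above; the threshold and helper name are illustrative, not part of the API.

```python
def confident_fields(response, min_confidence=0.8):
    """Split receipt fields by whether their confidence meets a threshold.

    Assumes the documentResults shape, where each named field carries
    `text` and `confidence`. Returns (kept, needs_review) dicts.
    """
    kept, review = {}, {}
    for doc in response.get("analyzeResult", {}).get("documentResults", []):
        for name, field in doc.get("fields", {}).items():
            bucket = kept if field.get("confidence", 0.0) >= min_confidence else review
            bucket[name] = field.get("text")
    return kept, review
```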
![sample receipt results](./media/contoso-receipt-2-information.png)
See the following example of a successful JSON response:
}
```

## Customer scenarios
-The data extracted with the Receipt API can be used to perform a variety of tasks. The following are a few examples of what our customers have accomplished with the Receipt API.
+The data extracted with the Receipt API can be used to perform a variety of tasks. Below are a few examples of what customers have accomplished with the Receipt API.
### Business expense reporting

Often filing business expenses involves spending time manually entering data from images of receipts. With the Receipt API, you can use the extracted fields to partially automate this process and analyze your receipts quickly.
-Because the Receipt API has a simple JSON output, you can use the extracted field values in multiple ways. Integrate with internal expense applications to pre-populate expense reports. For more on this scenario, read about how Acumatica is utilizing Receipt API to [make expense reporting a less painful process](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure).
+The Receipt API returns a simple JSON output, allowing you to use the extracted field values in multiple ways. Integrate with internal expense applications to pre-populate expense reports. For more on this scenario, read about how Acumatica is utilizing Receipt API to [make expense reporting a less painful process](https://customers.microsoft.com/story/762684-acumatica-partner-professional-services-azure).
-### Auditing and accounting
+### Auditing and accounting
The Receipt API output can also be used to perform analysis on a large number of expenses at various points in the expense reporting and reimbursement process. You can process receipts to triage them for manual audit or quick approvals.
The Receipt API also powers the [AI Builder Receipt Processing feature](/ai-buil
## Next steps

-- Complete a [Form Recognizer quickstart](quickstarts/client-library.md) to get started writing a receipt processing app with Form Recognizer in the development language of your choice.
+Get started writing a receipt processing app with Form Recognizer in the development language of your choice.
+
+> [!div class="nextstepaction"]
+> [Complete a Form Recognizer quickstart](quickstarts/client-library.md)
## See also
-* [What is Form Recognizer?](./overview.md)
-* [REST API reference docs](./index.yml)
+* [What is Form Recognizer?](overview.md)
+* [Form Recognizer API reference](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeReceiptAsync)
cognitive-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/disaster-recovery.md
Title: Disaster recovery guidance for Azure Form Recognizer
description: Learn how to use the copy model API to back up your Form Recognizer resources.
Previously updated : 05/27/2020
Last updated : 03/15/2021
# Back up and recover your Form Recognizer models
The process for copying a custom model consists of the following steps:
1. Next you send the copy request to the source resource&mdash;the resource that contains the model to be copied. You'll get back a URL that you can query to track the progress of the operation.
1. You'll use your source resource credentials to query the progress URL until the operation is a success. You can also query the new model ID in the target resource to get the status of the new model.
-> [!CAUTION]
-> The Copy API currently does not support model IDs for [composed custom models](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/Compose). Model compose is a preview feature in v2.1-preview.2 preview.
## Generate Copy authorization request

The following HTTP request gets copy authorization from your target resource. You'll need to enter the endpoint and key of your target resource as headers.
curl -i GET "https://<SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT>/formrecognizer/v
## Next steps

In this guide, you learned how to use the Copy API to back up your custom models to a secondary Form Recognizer resource. Next, explore the API reference docs to see what else you can do with Form Recognizer.
-* [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm)
+* [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeWithCustomForm)
cognitive-services Form Recognizer Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/form-recognizer-container-howto.md
formrecognizer_config =
### Form Recognizer
-The container provides REST endpoint APIs, which you can find on the [Form Recognizer API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api/operations/AnalyzeWithCustomModel) page.
+The container provides REST endpoint APIs, which you can find on the [Form Recognizer API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeWithCustomForm) reference page.
[!INCLUDE [Validate container is running - Container's API documentation](../../../includes/cognitive-services-containers-api-documentation.md)]
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/language-support.md
Title: Language support - Form Recognizer (Preview)
description: Learn more about the human languages that are available with Form Recognizer.
Previously updated : 11/23/2020
Last updated : 03/15/2021
# Language support for Form Recognizer

This table lists the human languages supported by the Form Recognizer service.
-|Language| Language code | Form Recognizer v2.0 | Form Recognizer v2.1 preview|
+|Language| Language code | Form Recognizer v2.1 preview.3 |Form Recognizer v2.0 |
|:--|:-:|:--:|::|
-|Chinese (Simplified) | `zh-Hans`| | ✔ |
-|Dutch | `nl` | | ✔ |
-|English (printed & handwritten) | `en` | ✔ | ✔ |
-|French | `fr` | | ✔ |
-|German | `de` | | ✔ |
-|Italian | `it` | | ✔ |
-|Japanese | `ja` | | ✔ |
-|Portuguese | `pt` | | ✔ |
-|Spanish | `es` | | ✔ |
+|Afrikaans|`af`| ✔ | |
+|Albanian |`sq`| ✔ | |
+|Asturian |`ast`| ✔ | |
+|Basque |`eu`| ✔ | |
+|Bislama |`bi`| ✔ | |
+|Breton |`br`| ✔ | |
+|Catalan |`ca`| ✔ | |
+|Cebuano |`ceb`| ✔ | |
+|Chamorro |`ch`| ✔ | |
+|Chinese (Simplified) | `zh-Hans`| ✔ | ✔ |
+|Chinese (Traditional) | `zh-Hant`| ✔ | |
+|Cornish |`kw`| ✔ | |
+|Corsican |`co`| ✔ | |
+|Crimean Tatar (Latin) |`crh`| ✔ | |
+|Czech | `cs` | ✔ | |
+|Danish | `da` | ✔ | |
+|Dutch | `nl` | ✔ | ✔ |
+|English (printed and handwritten) | `en` | ✔ | ✔ |
+|Estonian |`et`| ✔ | |
+|Fijian |`fj`| ✔ | |
+|Filipino |`fil`| ✔ | |
+|Finnish | `fi` | ✔ | |
+|French | `fr` | ✔ | ✔ |
+|Friulian | `fur` | ✔ | |
+|Galician | `gl` | ✔ | |
+|German | `de` | ✔ | ✔ |
+|Gilbertese | `gil` | ✔ | |
+|Greenlandic | `kl` | ✔ | |
+|Haitian Creole | `ht` | ✔ | |
+|Hani | `hni` | ✔ | |
+|Hmong Daw (Latin) | `mww` | ✔ | |
+|Hungarian | `hu` | ✔ | |
+|Indonesian | `id` | ✔ | |
+|Interlingua | `ia` | ✔ | |
+|Inuktitut (Latin) | `iu` | ✔ | |
+|Irish | `ga` | ✔ | |
+|Italian | `it` | ✔ | ✔ |
+|Japanese | `ja` | ✔ | ✔ |
+|Javanese | `jv` | ✔ | |
+|K’iche’ | `quc` | ✔ | |
+|Kabuverdianu | `kea` | ✔ | |
+|Kachin (Latin) | `kac` | ✔ | |
+|Kara-Kalpak | `kaa` | ✔ | |
+|Kashubian | `csb` | ✔ | |
+|Khasi | `kha` | ✔ | |
+|Korean | `ko` | ✔ | |
+|Kurdish (Latin) | `kur` | ✔ | |
+|Luxembourgish | `lb` | ✔ | |
+|Malay (Latin) | `ms` | ✔ | |
+|Manx | `gv` | ✔ | |
+|Neapolitan | `nap` | ✔ | |
+|Norwegian | `no` | ✔ | |
+|Occitan | `oc` | ✔ | |
+|Polish | `pl` | ✔ | |
+|Portuguese | `pt` | ✔ | ✔ |
+|Romansh | `rm` | ✔ | |
+|Scots | `sco` | ✔ | |
+|Scottish Gaelic | `gd` | ✔ | |
+|Slovenian | `slv` | ✔ | |
+|Spanish | `es` | ✔ | ✔ |
+|Swahili (Latin) | `sw` | ✔ | |
+|Swedish | `sv` | ✔ | |
+|Tatar (Latin) | `tat` | ✔ | |
+|Tetum | `tet` | ✔ | |
+|Turkish | `tr` | ✔ | |
+|Upper Sorbian | `hsb` | ✔ | |
+|Uzbek (Latin) | `uz` | ✔ | |
+|Volapük | `vo` | ✔ | |
+|Walser | `wae` | ✔ | |
+|Western Frisian | `fy` | ✔ | |
+|Yucatec Maya | `yua` | ✔ | |
+|Zhuang | `za` | ✔ | |
+|Zulu | `zu` | ✔ | |
+||||
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/overview.md
Title: What is Form Recognizer?
description: The Azure Form Recognizer service allows you to identify and extract key/value pairs and table data from your form documents, as well as extract major information from sales receipts and business cards.
Previously updated : 11/23/2020
Last updated : 03/15/2021
keywords: automated data processing, document processing, automated data entry, forms processing
#Customer intent: As a developer of form-processing software, I want to learn what the Form Recognizer service does so I can determine if I should use it.
keywords: automated data processing, document processing, automated data entry,
Azure Form Recognizer is a cognitive service that lets you build automated data processing software using machine learning technology. Identify and extract text, key/value pairs, selection marks, tables, and structure from your documents&mdash;the service outputs structured data that includes the relationships in the original file, bounding boxes, confidence and more. You quickly get accurate results that are tailored to your specific content without heavy manual intervention or extensive data science expertise. Use Form Recognizer to automate data entry in your applications and enrich your documents search capabilities.
-Form Recognizer is composed of custom document processing models, prebuilt models for invoices, receipts and business cards, and the layout model. You can call Form Recognizer models by using a REST API or client library SDKs to reduce complexity and integrate it into your workflow or application.
+Form Recognizer is composed of custom document processing models, prebuilt models for invoices, receipts, IDs and business cards, and the layout model. You can call Form Recognizer models by using a REST API or client library SDKs to reduce complexity and integrate it into your workflow or application.
Form Recognizer is composed of the following * **[Layout API](#layout-api)** - Extract text, selection marks, and tables structures, along with their bounding box coordinates, from documents. * **[Custom models](#custom-models)** - Extract text, key/value pairs, selection marks, and table data from forms. These models are trained with your own data, so they're tailored to your forms.
-* **[Prebuilt models](#prebuilt-models)** - Extract data from unique form types using prebuilt models. Currently available are the following prebuilt models
+
+* **[Prebuilt models](#prebuilt-models)** - Extract data from unique document types using prebuilt models. Currently available are the following prebuilt models:
+ * [Invoices](./concept-invoices.md) * [Sales receipts](./concept-receipts.md) * [Business cards](./concept-business-cards.md)
+ * [Identification (ID) cards](./concept-identification-cards.md)
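Each of these models is exposed as a REST route under the resource endpoint. The sketch below shows how such a request might be assembled with only the standard library; the v2.1 preview version string and the example endpoint/key values are assumptions for illustration, so confirm them against the REST reference before use.

```python
import json
import urllib.request

API_VERSION = "v2.1-preview.3"  # assumed preview version; check the REST reference

def build_analyze_request(endpoint: str, key: str, model_route: str, document_url: str):
    """Build the POST request that starts an analysis.

    model_route is e.g. "prebuilt/receipt", "prebuilt/invoice",
    "prebuilt/businessCard", or "layout".
    """
    url = f"{endpoint}/formrecognizer/{API_VERSION}/{model_route}/analyze"
    body = json.dumps({"source": document_url}).encode("utf-8")
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# Build (but do not send) a request against a hypothetical resource;
# sending it returns 202 Accepted plus an Operation-Location header to poll.
req = build_analyze_request(
    "https://example.cognitiveservices.azure.com",  # placeholder endpoint
    "<your-key>",
    "prebuilt/receipt",
    "https://example.com/receipt.jpg",
)
```

The client library SDKs wrap this same request/poll cycle, so the raw route is mainly useful for understanding what the SDKs do under the hood.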
## Try it out To try out the Form Recognizer Service, go to the online Sample UI Tool: <!-- markdownlint-disable MD025 -->
-# [v2.1 preview](#tab/v2-1)
+<!-- markdownlint-disable MD024 -->
+
+### [v2.1 preview](#tab/v2-1)
> [!div class="nextstepaction"] > [Try Form Recognizer](https://fott-preview.azurewebsites.net/)
-# [v2.0](#tab/v2-0)
+### [v2.0](#tab/v2-0)
> [!div class="nextstepaction"] > [Try Form Recognizer](https://fott.azurewebsites.net/)
You have the following options when you train custom models: training with label
### Train without labels
-By default, Form Recognizer uses unsupervised learning to understand the layout and relationships between fields and entries in your forms. When you submit your input forms, the algorithm clusters the forms by type, discovers what keys and tables are present, and associates values to keys and entries to tables. This doesn't require manual data labeling or intensive coding and maintenance, and we recommend you try this method first.
+Form Recognizer uses unsupervised learning to understand the layout and relationships between fields and entries in your forms. When you submit your input forms, the algorithm clusters the forms by type, discovers what keys and tables are present, and associates values to keys and entries to tables. Training without labels doesn't require manual data labeling or intensive coding and maintenance, and we recommend you try this method first.
See [Build a training data set](./build-training-data-set.md) for tips on how to collect your training documents. ### Train with labels
-When you train with labeled data, the model does supervised learning to extract values of interest, using the labeled forms you provide. This results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-Form Recognizer uses the [Layout API](#layout-api) to learn the expected sizes and positions of printed and handwritten text elements. Then it uses user-specified labels to learn the key/value associations in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model and add more labeled data as needed to improve the model accuracy.
+Form Recognizer uses the [Layout API](#layout-api) to learn the expected sizes and positions of printed and handwritten text elements and to extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model, and add more labeled data as needed to improve the model accuracy. With these supervised learning capabilities, the trained model extracts both key/value pairs and tables.
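The choice between the two training modes comes down to one flag in the training request body. A minimal sketch of building that body (the SAS URL is a placeholder, and `sourceFilter` usage here is illustrative):

```python
import json

def build_train_request_body(source_sas_url: str, use_labels: bool, prefix: str = "") -> bytes:
    """Body for the Train Custom Model POST request.

    useLabelFile=True selects supervised training with your label files;
    False trains without labels (unsupervised clustering).
    """
    body = {
        "source": source_sas_url,
        "useLabelFile": use_labels,
        "sourceFilter": {"prefix": prefix, "includeSubFolders": False},
    }
    return json.dumps(body).encode("utf-8")

# Placeholder SAS URL for a blob container holding training documents:
payload = build_train_request_body("https://example.blob.core.windows.net/training?sv=<sas>", True)
```

The `source` value is a shared access signature (SAS) URL to the blob container that holds your training documents.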
[Get started with Train with labels](./quickstarts/label-tool.md) - > [!VIDEO https://channel9.msdn.com/Shows/Docs-Azure/Azure-Form-Recognizer/player] - ## Prebuilt models Form Recognizer also includes Prebuilt models for automated data processing of unique form types. ### Prebuilt Invoice model
-The Prebuilt Invoice model extracts data from invoices in a variety of formats and returns structured data. This model extracts key information such as the invoice ID, customer details, vendor details, ship to, bill to, total, tax, subtotal and more. In addition, the prebuilt invoice model is trained to analyze and return all of the text and tables on the invoice. See the [Invoices](./concept-invoices.md) conceptual guide for more info.
+
+The Prebuilt Invoice model extracts data from invoices in various formats and returns structured data. This model extracts key information such as the invoice ID, customer details, vendor details, ship to, bill to, total, tax, subtotal, line items and more. In addition, the prebuilt invoice model is trained to analyze and return all of the text and tables on the invoice. See the [Invoices](./concept-invoices.md) conceptual guide for more info.
:::image type="content" source="./media/overview-invoices.jpg" alt-text="sample invoice" lightbox="./media/overview-invoices.jpg":::
The Prebuilt Receipt model is used for reading English sales receipts from Austr
:::image type="content" source="./media/overview-receipt.jpg" alt-text="sample receipt" lightbox="./media/overview-receipt.jpg":::
+### Prebuilt Identification (ID) cards model
+
+The Identification (ID) cards model enables you to extract key information from worldwide passports and US driver licenses. It extracts data such as the document ID, date of birth, date of expiration, name, country, region, machine-readable zone and more. See the [Identification (ID) cards](./concept-identification-cards.md) conceptual guide for more info.
++ ### Prebuilt Business Cards model The Business Cards model enables you to extract information such as the person's name, job title, address, email, company, and phone numbers from business cards in English. See the [Business cards](./concept-business-cards.md) conceptual guide for more info. :::image type="content" source="./media/overview-business-card.jpg" alt-text="sample business card" lightbox="./media/overview-business-card.jpg"::: - ## Get started
-Use the [Sample Form Recognizer tool](https://fott.azurewebsites.net/) or follow a quickstart to get started extracting data from your forms. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-* [Client library / REST API quickstart](./quickstarts/client-library.md) (all languages, multiple scenarios)
-* Web UI quickstarts
- * [Train with labels - sample labeling tool](quickstarts/label-tool.md)
-* REST samples (GitHub)
- * Extract text, selection marks and table structure from documents
- * [Extract layout data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-layout.md)
- * Train custom models and extract form data
- * [Train without labels - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-train-extract.md)
- * [Train with labels - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md)
- * Extract data from invoices
- * [Extract invoice data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-invoices.md)
- * Extract data from sales receipts
- * [Extract receipt data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-receipts.md)
- * Extract data from business cards
- * [Extract business card data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-business-cards.md)
+Use the Sample Form Recognizer tool to try out the Layout API and prebuilt models, and to train a custom model for your documents:
+
+### [v2.1 preview](#tab/v2-1)
+
+> [!div class="nextstepaction"]
+> [Try Form Recognizer](https://fott-preview.azurewebsites.net/)
+
+### [v2.0](#tab/v2-0)
+
+> [!div class="nextstepaction"]
+> [Try Form Recognizer](https://fott.azurewebsites.net/)
++
+Follow the [Client library / REST API quickstart](./quickstarts/client-library.md) to get started extracting data from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+You can also use the following REST samples (GitHub) to get started:
+
+* Extract text, selection marks, and table structure from documents
+ * [Extract layout data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-layout.md)
+* Train custom models and extract form data
+ * [Train without labels - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-train-extract.md)
+ * [Train with labels - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md)
+* Extract data from invoices
+ * [Extract invoice data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-invoices.md)
+* Extract data from sales receipts
+ * [Extract receipt data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-receipts.md)
+* Extract data from business cards
+ * [Extract business card data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-business-cards.md)
### Review the REST APIs
You'll use the following APIs to train models and extract structured data from f
|Name |Description | |||
-| **Analyze Layout** | Analyze a document passed in as a stream to extract text, selection marks, tables and structure from the document |
+| **Analyze Layout** | Analyze a document passed in as a stream to extract text, selection marks, tables, and structure from the document |
| **Train Custom Model**| Train a new model to analyze your forms by using five forms of the same type. Set the _useLabelFile_ parameter to `true` to train with manually labeled data. | | **Analyze Form** |Analyze a form passed in as a stream to extract text, key/value pairs, and tables from the form with your custom model. |
-| **Analyze Invoice** | Analyze a invoice to extract key information, tables, and other invoice text.|
+| **Analyze Invoice** | Analyze an invoice to extract key information, tables, and other invoice text.|
| **Analyze Receipt** | Analyze a receipt document to extract key information and other receipt text.|
+| **Analyze ID** | Analyze an ID card document to extract key information and other identification card text.|
| **Analyze Business Card** | Analyze a business card to extract key information and text.|
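All of the analyze operations are asynchronous: the POST returns `202 Accepted` with an `Operation-Location` header, which you then GET until `status` becomes `succeeded` or `failed`. As a sketch of consuming a completed custom-model result (the JSON shape below is abbreviated from the v2 response schema, and the field name is illustrative):

```python
def extract_fields(analyze_json: dict) -> dict:
    """Map field name -> (text, confidence) from a completed Analyze Form result."""
    if analyze_json.get("status") != "succeeded":
        raise ValueError(f"analysis not finished: {analyze_json.get('status')}")
    fields = {}
    for doc in analyze_json["analyzeResult"].get("documentResults", []):
        for name, field in (doc.get("fields") or {}).items():
            if field:  # fields the model could not find can be null
                fields[name] = (field.get("text"), field.get("confidence"))
    return fields

# Abbreviated sample of a succeeded response:
sample = {
    "status": "succeeded",
    "analyzeResult": {
        "documentResults": [
            {"fields": {"Total": {"text": "$30.00", "confidence": 0.97}}}
        ]
    },
}
print(extract_fields(sample))  # {'Total': ('$30.00', 0.97)}
```

The full response also carries `readResults` (text lines with bounding boxes) and `pageResults` (tables), which this sketch ignores.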
-# [v2.1 preview](#tab/v2-1)
-Explore the [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
+### [v2.1 preview](#tab/v2-1)
+
+Explore the [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
-# [v2.0](#tab/v2-0)
-Explore the [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
+### [v2.0](#tab/v2-0)
+
+Explore the [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
Explore the [REST API reference documentation](https://westus2.dev.cognitive.mic
## Deploy on premises using Docker containers
-[Use Form Recognizer containers (preview)](form-recognizer-container-howto.md) to deploy API features on-premises. This Docker container enables you to bring the service closer to your data for compliance, security or other operational reasons.
+[Use Form Recognizer containers (preview)](form-recognizer-container-howto.md) to deploy API features on-premises. This Docker container enables you to bring the service closer to your data for compliance, security, or other operational reasons.
## Service availability and redundancy
Yes. The Form Recognizer service is zone-resilient by default.
No customer configuration is necessary to enable zone-resiliency. Zone-resiliency for Form Recognizer resources is available by default and managed by the service itself. - ## Data privacy and security As with all the cognitive services, developers using the Form Recognizer service should be aware of Microsoft policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more. ## Next steps
-Complete a [quickstart](quickstarts/client-library.md) to get started writing a forms processing app with Form Recognizer in the development language of your choice.
+Try our online tool and quickstart to learn more about the Form Recognizer service.
+
+* [**Form Recognizer tool**](https://fott-preview.azurewebsites.net/)
+* [**Client library and REST API quickstart**](quickstarts/client-library.md)
cognitive-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/client-library.md
Title: "Quickstart: Form Recognizer client library or REST API"
description: Use the Form Recognizer client library or REST API to create a forms processing app that extracts key/value pairs and table data from your custom documents. -+ Last updated 01/29/2021-+ zone_pivot_groups: programming-languages-set-formre
cognitive-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/label-tool.md
Previously updated : 01/29/2021 Last updated : 03/15/2021 keywords: document processing
In this quickstart, you'll use the Form Recognizer REST API with the sample labe
To complete this quickstart, you must have: * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* Once you have your Azure subscription, <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource </a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
+* Once you have your Azure subscription, <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
* You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart. * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production. * A set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*) for this quickstart. Upload the training files to the root of a blob storage container in a standard-performance-tier Azure Storage account.
First, make sure all the training documents are of the same format. If you have
### Configure cross-domain resource sharing (CORS)
-Enable CORS on your storage account. Select your storage account in the Azure portal and click the **CORS** tab on the left pane. On the bottom line, fill in the following values. Then click **Save** at the top.
+Enable CORS on your storage account. Select your storage account in the Azure portal and then choose the **CORS** tab on the left pane. On the bottom line, fill in the following values. Select **Save** at the top.
* Allowed origins = * * Allowed methods = \[select all\]
Enable CORS on your storage account. Select your storage account in the Azure po
## Connect to the sample labeling tool
-The sample labeling tool connects to a source (where your original forms are) and a target (where it exports the created labels and output data).
+The sample labeling tool connects to a source (your original uploaded forms) and a target (created labels and output data).
Connections can be set up and shared across projects. They use an extensible provider model, so you can easily add new source/target providers.
-To create a new connection, click the **New Connections** (plug) icon, in the left navigation bar.
+To create a new connection, select the **New Connections** (plug) icon in the left navigation bar.
Fill in the fields with the following values:
Fill in the fields with the following values:
:::image type="content" source="../media/label-tool/connections.png" alt-text="Connection settings of sample labeling tool."::: - ## Create a new project In the sample labeling tool, projects store your configurations and settings. Create a new project and fill in the fields with the following values: * **Display Name** - the project display name
-* **Security Token** - Some project settings can include sensitive values, such as API keys or other shared secrets. Each project will generate a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by clicking the gear icon at the bottom of the left navigation bar.
+* **Security Token** - Some project settings can include sensitive values, such as API keys or other shared secrets. Each project will generate a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by selecting the gear icon at the bottom of the left navigation bar.
* **Source Connection** - The Azure Blob Storage connection you created in the previous step that you would like to use for this project. * **Folder Path** - Optional - If your source forms are located in a folder on the blob container, specify the folder name here * **Form Recognizer Service Uri** - Your Form Recognizer endpoint URL.
When you create or open a project, the main tag editor window opens. The tag edi
* The main editor pane that allows you to apply tags. * The tags editor pane that allows users to modify, lock, reorder, and delete tags.
-### Identify text elements
+### Identify text and tables
-Click **Run OCR on all files** on the left pane to get the text layout information for each document. The labeling tool will draw bounding boxes around each text element.
+Select **Run OCR on all files** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
-It will also show which tables have been automatically extracted. Click on the table/grid icon on the left hand of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we will not be labeling the table content, but rather rely on the automated extraction.
+The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left-hand side of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we will not label the table content but rather rely on the automated extraction.
:::image type="content" source="../media/label-tool/table-extraction.png" alt-text="Table visualization in sample labeling tool.":::
+In v2.1, if your training document does not have a value filled in, you can draw a box where the value should be. Use **Draw region** in the upper left corner of the window to make the region taggable.
+ ### Apply labels to text Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze.
-### [v2.1 preview](#tab/v2-1)
+### [v2.1 preview](#tab/v2-1)
-1. First, use the tags editor pane to create the tags you'd like to identify:
- * Click **+** to create a new tag.
- * Enter the tag name.
- * Press Enter to save the tag.
-1. In the main editor, click to select words from the highlighted text elements. In the _v2.1 preview.2_ API, you can also click to select _Selection Marks_ like radio buttons and checkboxes as key value pairs. Form Recognizer will identify whether the selection mark is "selected" or "unselected" as the value.
-1. Click on the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane.
+1. First, use the tags editor pane to create the tags you'd like to identify.
+ 1. Select **+** to create a new tag.
+ 1. Enter the tag name.
+ 1. Press Enter to save the tag.
+1. In the main editor, select words from the highlighted text elements or a region you drew in.
+1. Select the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane.
> [!Tip] > Keep the following tips in mind when you're labeling your forms: >
Next, you'll create tags (labels) and apply them to the text elements that you w
### [v2.0](#tab/v2-0) 1. First, use the tags editor pane to create the tags you'd like to identify.
- 1. Click **+** to create a new tag.
+ 1. Select **+** to create a new tag.
1. Enter the tag name. 1. Press Enter to save the tag.
-1. In the main editor, click to select words from the highlighted text elements.
-1. Click on the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane.
+1. In the main editor, select words from the highlighted text elements.
+1. Select the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane.
> [!Tip] > Keep the following tips in mind when you're labeling your forms: >
Next, you'll create tags (labels) and apply them to the text elements that you w
> * To remove an applied tag without deleting the tag itself, select the tagged rectangle on the document view and press the delete key. > + :::image type="content" source="../media/label-tool/main-editor-2-1.png" alt-text="Main editor window of sample labeling tool.":::
Follow the steps above to label at least five of your forms.
### Specify tag value types
-Optionally, you can set the expected data type for each tag. Open the context menu to the right of a tag and select a type from the menu. This feature allows the detection algorithm to make certain assumptions that will improve the text-detection accuracy. It also ensures that the detected values will be returned in a standardized format in the final JSON output. Value type information is saved in the *fields.json* file in the same path as your label files.
+You can set the expected data type for each tag. Open the context menu to the right of a tag and select a type from the menu. This feature allows the detection algorithm to make assumptions that will improve the text-detection accuracy. It also ensures that the detected values will be returned in a standardized format in the final JSON output. Value type information is saved in the **fields.json** file in the same path as your label files.
> [!div class="mx-imgBorder"] > ![Value type selection with sample labeling tool](../media/whats-new/value-type.png)
The following value types and variations are currently supported:
> * 01Jan2020 > * 01 Jan 2020
+### Label tables (v2.1 only)
+
+At times, your data might lend itself better to being labeled as a table rather than as key-value pairs. In this case, you can create a table tag by selecting **Add a new table tag**, specify whether the table has a fixed or variable number of rows depending on the document, and define the schema.
++
+Once you have defined your table tag, tag the cell values.
++ ## Train a custom model
-Click the Train icon on the left pane to open the Training page. Then click the **Train** button to begin training the model. Once the training process completes, you'll see the following information:
+Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you'll see the following information:
* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./client-library.md?pivots=programming-language-rest-api) or [client library](./client-library.md).
-* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling additional forms and training again to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
+* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling additional forms and retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
* The list of tags, and the estimated accuracy per tag.
After training finishes, examine the **Average Accuracy** value. If it's low, yo
### [v2.1 preview](#tab/v2-1)
-With Model Compose, you can compose up to 100 models to a single model ID. When you call Analyze with this composed model ID, Form Recognizer will first classify the form you submitted, matching it to the best matching model, and then return results for that model. This is useful when incoming forms may belong to one of several templates.
+With Model Compose, you can compose up to 100 models into a single model ID. When you call Analyze with the composed `modelID`, Form Recognizer will first classify the form you submitted, choose the best matching model, and then return results for that model. This operation is useful when incoming forms may belong to one of several templates.
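Under the hood, composing is one more REST call that takes the list of trained model IDs. A minimal sketch of building its body (the route and the exact body shape are drawn from the v2.1 preview compose operation; the IDs and name below are placeholders):

```python
import json

def build_compose_body(model_ids: list, model_name: str) -> bytes:
    """Body for the Compose Custom Models POST request (v2.1 preview).

    Up to 100 trained model IDs can be combined under one composed model.
    """
    assert len(model_ids) <= 100, "compose accepts at most 100 models"
    return json.dumps({"modelIds": model_ids, "modelName": model_name}).encode("utf-8")

payload = build_compose_body(["<model-id-1>", "<model-id-2>"], "my-composed-model")
```

The response returns a new model ID that you use in Analyze calls exactly like a single trained model's ID.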
-To compose models in the sample labeling tool, click on the Model Compose (merging arrow) icon on the left. On the left, select the models you wish to compose together. Models with the arrows icon are already composed models.
-Click on the "Compose" button. In the pop-up, name your new composed model and click "Compose". When the operation completes, your new composed model should appear in the list.
+To compose models in the sample labeling tool, select the Model Compose (merging arrow) icon on the left. On the left, select the models you wish to compose together. Models with the arrows icon are already composed models.
+Choose the **Compose** button. In the pop-up, name your new composed model and select **Compose**. When the operation completes, your newly composed model should appear in the list.
:::image type="content" source="../media/label-tool/model-compose.png" alt-text="Model compose UX view.":::
This feature is currently available in the v2.1 preview.
## Analyze a form
-Click on the Predict (light bulb) icon on the left to test your model. Upload a form document that you haven't used in the training process. Then click the **Predict** button on the right to get key/value predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
+Select the Predict (light bulb) icon on the left to test your model. Upload a form document that you haven't used in the training process. Then choose the **Predict** button on the right to get key/value predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
> [!TIP] > You can also run the Analyze API with a REST call. To learn how to do this, see [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md). ## Improve results
-Depending on the reported accuracy, you may want to do further training to improve the model. After you've done a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value was high, but the confidence scores are low (or the results are inaccurate), you should add the file used for prediction into the training set, label it, and train again.
+Depending on the reported accuracy, you may want to do further training to improve the model. After you've done a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value was high, but the confidence scores are low (or the results are inaccurate), you should add the prediction file to the training set, label it, and train again.
-The reported average accuracy, confidence scores, and actual accuracy can be inconsistent when the analyzed documents differ from those used in training. Keep in mind that some documents look similar when viewed by people but can look distinct to the AI model. For example, you might train with a form type that has two variations, where the training set consists of 20% variation A and 80% variation B. During prediction, the confidence scores for documents of variation A are likely to be lower.
+The reported average accuracy, confidence scores, and actual accuracy can be inconsistent when the analyzed documents differ from documents used in training. Keep in mind that some documents look similar when viewed by people but can look distinct to the AI model. For example, you might train with a form type that has two variations, where the training set consists of 20% variation A and 80% variation B. During prediction, the confidence scores for documents of variation A are likely to be lower.
## Save a project and resume later
Go to your project settings page (slider icon) and take note of the security tok
### Restore project credentials
-When you want to resume your project, you first need to create a connection to the same blob storage container. Repeat the steps above to do this. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Then click Save Settings.
+When you want to resume your project, you first need to create a connection to the same blob storage container. To do so, repeat the steps above. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Select **Save** to retain your settings.
### Resume a project
-Finally, go to the main page (house icon) and click Open Cloud Project. Then select the blob storage connection, and select your project's *.fott* file. The application will load all of the project's settings because it has the security token.
+Finally, go to the main page (house icon) and select **Open Cloud Project**. Then select the blob storage connection, and select your project's **.fott** file. The application will load all of the project's settings because it has the security token.
## Next steps
cognitive-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/supervised-table-tags.md
+
+ Title: "How to use table tags to train your custom form model - Form Recognizer"
+
+description: Learn how to effectively use supervised table tag labeling.
++++++ Last updated : 03/15/2021+
+#Customer intent: As a user of the Form Recognizer custom model service, I want to ensure I'm training my model in the best way.
++
+# Use table tags to train your custom form model
+
+In this article, you'll learn how to train your custom form model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures or handling items that aren't automatically detected and extracted by the service. In these cases, you can use table tags to train your custom form model.
+
+## When should I use table tags?
+
+Here are some examples of when using table tags would be appropriate:
+
+- There's data that you wish to extract presented as tables in your forms, and the structure of the tables is meaningful. For instance, each row of the table represents one item, and each column of the row represents a specific feature of that item. In this case, you could use a table tag where the columns represent features and the rows represent information about each item.
+- There's data you wish to extract that isn't presented in specific form fields but that could semantically fit in a two-dimensional grid. For instance, your form has a list of people, each with a first name, a last name, and an email address, and you would like to extract this information. In this case, you could use a table tag with first name, last name, and email address as columns, and populate each row with information about a person from your list.
+
+> [!NOTE]
+> Form Recognizer automatically finds and extracts all tables in your documents whether the tables are tagged or not. Therefore, you don't have to label every table in your form with a table tag, and your table tags don't have to replicate the structure of every table found in your form. Tables extracted automatically by Form Recognizer will be included in the pageResults section of the JSON output.
+
+## Create a table tag with Form OCR Test Tool (FOTT)
+<!-- markdownlint-disable MD004 -->
+* Determine whether you want a **dynamic** or **fixed-size** table tag. If the number of rows varies from document to document, use a dynamic table tag. If the number of rows is consistent across your documents, use a fixed-size table tag.
+* If your table tag is dynamic, define the column names and the data type and format for each column.
+* If your table is fixed-size, define the column name, row name, data type, and format for each tag.
+
+## Label your table tag data
+
+* If your project has a table tag, you can open the labeling panel and populate the tag as you would label key-value fields.
+
+## Next steps
+
+Follow our quickstart to train and use your custom Form Recognizer model:
+
+> [!div class="nextstepaction"]
+> [Train with labels using the sample labeling tool](quickstarts/label-tool.md)
+
+## See also
+
+* [What is Form Recognizer?](overview.md)
cognitive-services Tutorial Ai Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/tutorial-ai-builder.md
Title: "Tutorial: Create a form processing app with AI Builder - Form Recognizer" description: In this tutorial, you'll use AI Builder to create and train a form processing application.-+ Last updated 11/23/2020-+ # Tutorial: Create a form-processing app with AI Builder
cognitive-services Tutorial Bulk Processing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/tutorial-bulk-processing.md
Title: "Tutorial: Extract form data in bulk using Azure Data Factory - Form Reco
description: Set up Azure Data Factory activities to trigger the training and running of Form Recognizer models and digitize a large backlog of documents. -+ Last updated 01/04/2021-+ # Tutorial: Extract form data in bulk by using Azure Data Factory
If you add new forms of a new type, you'll also need to upload a training datase
In this tutorial, you set up Azure Data Factory pipelines to trigger the training and running of Form Recognizer models and digitize a large backlog of files. Next, explore the Form Recognizer API to see what else you can do with it.
-* [Form Recognizer REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeBusinessCardAsync)
+* [Form Recognizer REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeBusinessCardAsync)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/whats-new.md
Title: What's new in Form Recognizer? description: Understand the latest changes to the Form Recognizer API.-+ Previously updated : 05/19/2020- Last updated : 03/15/2021+ -
+<!-- markdownlint-disable MD024 -->
# What's new in Form Recognizer? The Form Recognizer service is updated on an ongoing basis. Use this article to stay up to date with feature enhancements, fixes, and documentation updates.
+## March 2021
+
+**Form Recognizer v2.1 public preview 3 is now available.** v2.1-preview.3 has been released, including the following features:
+
+- **New prebuilt ID model** - The new prebuilt ID model enables customers to take IDs and return structured data to automate processing. It combines our powerful Optical Character Recognition (OCR) capabilities with ID understanding models to extract key information from passports and U.S. driver's licenses, such as name, date of birth, issue date, expiration date, and more.
+
+ [Learn more about the prebuilt ID model](concept-identification-cards.md)
+
+ :::image type="content" source="./media/id-canada-passport-example.png" alt-text="passport example" lightbox="./media/id-canada-passport-example.png":::
+
+- **Line-item extraction for prebuilt invoice model** - The prebuilt Invoice model now supports line-item extraction; it extracts full items and their parts, such as description, amount, quantity, product ID, date, and more. With a simple API/SDK call, you can extract useful data from your invoices: text, tables, key-value pairs, and line items.
+
+ [Learn more about the prebuilt invoice model](concept-invoices.md)
+
+- **Supervised table labeling and training, empty-value labeling** - In addition to Form Recognizer's [state-of-the-art deep learning automatic table extraction capabilities](https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011), it now enables customers to label and train on tables. This new release includes the ability to label and train on line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained, the model will extract line items as part of the JSON output in the documentResults section.
+
+ :::image type="content" source="./media/table-labeling.png" alt-text="Table labeling" lightbox="./media/table-labeling.png":::
+
+  In addition to labeling tables, you can now label empty values and regions; if some documents in your training set don't have values for certain fields, you can label them as empty so that your model will know to extract values properly from analyzed documents.
+
+- **Support for 66 new languages** - Form Recognizer's Layout API and Custom Models now support 73 languages.
+
+ [Learn more about Form Recognizer's language support](language-support.md)
+
+- **Natural reading order, handwriting classification, and page selection** - With this update, you can choose to get the text line outputs in the natural reading order instead of the default left-to-right and top-to-bottom ordering. Use the new readingOrder query parameter and set it to "natural" for a more human-friendly reading order output. In addition, for Latin languages, Form Recognizer will classify text lines as handwritten or not and give a confidence score.
+
+- **Prebuilt receipt model quality improvements** - This update includes a number of quality improvements for the prebuilt Receipt model, especially around line-item extraction.
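As an illustration of the new readingOrder query parameter described above, the request URL for a Layout analyze call might be built as follows. This is a sketch only: the resource endpoint is a placeholder, and the API path follows the v2.1-preview.3 convention used elsewhere in this article.

```python
from urllib.parse import urlencode

endpoint = "https://contoso-fr.cognitiveservices.azure.com"  # placeholder resource endpoint
path = "/formrecognizer/v2.1-preview.3/layout/analyze"

# readingOrder=natural requests the human-friendly ordering described above;
# omitting the parameter keeps the default left-to-right, top-to-bottom order.
params = {"readingOrder": "natural"}
url = f"{endpoint}{path}?{urlencode(params)}"
print(url)
```

The actual call is a POST with your document body and `Ocp-Apim-Subscription-Key` header; only the URL construction is shown here.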
+ ## November 2020 ### New features
-**Form Recognizer v2.1 public preview 2 is now available.** V2.1-preview.2 has been released, including the following features:
+**Form Recognizer v2.1 public preview 2 is now available.** v2.1-preview.2 has been released, including the following features:
- **New prebuilt invoice model** - The new prebuilt Invoice model enables customers to take invoices in a variety of formats and return structured data to automate the invoice processing. It combines our powerful Optical Character Recognition (OCR) capabilities with invoice understanding deep learning models to extract key information from invoices in English. It extracts the text, tables, and information such as customer, vendor, invoice ID, invoice due date, total, amount due, tax amount, ship to, bill to, and more.
The Form Recognizer service is updated on an ongoing basis. Use this article to
- **[New locales for pre-built Receipts](concept-receipts.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, EN-IN - **Quality improvements** for `Layout`, `Train Custom Model` - _Train without Labels_ and _Train with Labels_. - **v2.0** includes the following update: - The [client libraries](quickstarts/client-library.md) for NET, Python, Java, and JavaScript have entered General Availability. - **New samples** are available on GitHub. + - The [Knowledge Extraction Recipes - Forms Playbook](https://github.com/microsoft/knowledge-extraction-recipes-forms) collects best practices from real Form Recognizer customer engagements and provides usable code samples, checklists, and sample pipelines used in developing these projects. - The [sample labeling tool](https://github.com/microsoft/OCR-Form-Tools) has been updated to support the new v2.1 functionality. See this [quickstart](quickstarts/label-tool.md) for getting started with the tool. - The [Intelligent Kiosk](https://github.com/microsoft/Cognitive-Samples-IntelligentKiosk/blob/master/Documentation/FormRecognizer.md) Form Recognizer sample shows how to integrate `Analyze Receipt` and `Train Custom Model` - _Train without Labels_. -- ## July 2020 ### New features-
+<!-- markdownlint-disable MD004 -->
* **v2.0 reference available** - View the [v2.0 API Reference](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) and the updated SDKs for [.NET](/dotnet/api/overview/azure/ai.formrecognizer-readme), [Python](/python/api/overview/azure/), [Java](/java/api/overview/azure/ai-formrecognizer-readme), and [JavaScript](/javascript/api/overview/azure/). * **Table enhancements and Extraction enhancements** - includes accuracy improvements and table extractions enhancements, specifically, the capability to learn tables headers and structures in _custom train without labels_. * **Currency support** - Detection and extraction of global currency symbols. * **Azure Gov** - Form Recognizer is now also available in Azure Gov. * **Enhanced security features**:
- * **Bring your own key** - Form Recognizer automatically encrypts your data when persisted to the cloud to protect it and to help you to meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. You can now also manage your subscription with your own encryption keys. [Customer-managed keys, also known as bring your own key (BYOK)](./encrypt-data-at-rest.md), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
- * **Private endpoints** ΓÇô Enables you on a virtual network (VNet) to [securely access data over a Private Link. ](../../private-link/private-link-overview.md)
-
+ * **Bring your own key** - Form Recognizer automatically encrypts your data when persisted to the cloud to protect it and to help you to meet your organizational security and compliance commitments. By default, your subscription uses Microsoft-managed encryption keys. You can now also manage your subscription with your own encryption keys. [Customer-managed keys, also known as bring your own key (BYOK)](./form-recognizer-encryption-of-data-at-rest.md), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+  * **Private endpoints** - Enables you on a virtual network (VNet) to [securely access data over a Private Link](../../private-link/private-link-overview.md).
## June 2020 ### New features+ * **CopyModel API added to client SDKs** - You can now use the client SDKs to copy models from one subscription to another. See [Back up and recover models](./disaster-recovery.md) for general information on this feature. * **Azure Active Directory integration** - You can now use your Azure AD credentials to authenticate your Form Recognizer client objects in the SDKs. * **SDK-specific changes** - This includes both minor feature additions and breaking changes. See the SDK changelogs for more information.
The Form Recognizer service is updated on an ongoing basis. Use this article to
## April 2020 ### New features+ * **SDK support for Form Recognizer API v2.0 Public Preview** - This month we expanded our service support to include a preview SDK for Form Recognizer v2.0 (preview) release. Use the links below to get started with your language of choice:
- * [.NET SDK](/dotnet/api/overview/azure/ai.formrecognizer-readme)
- * [Java SDK](/java/api/overview/azure/ai-formrecognizer-readme)
- * [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme)
- * [JavaScript SDK](/javascript/api/overview/azure/ai-form-recognizer-readme)
+ * [.NET SDK](/dotnet/api/overview/azure/ai.formrecognizer-readme)
+ * [Java SDK](/java/api/overview/azure/ai-formrecognizer-readme)
+ * [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme)
+ * [JavaScript SDK](/javascript/api/overview/azure/ai-form-recognizer-readme)
The new SDK supports all the features of the v2.0 REST API for Form Recognizer. For example, you can train a model with or without labels and extract text, key value pairs and tables from your forms, extract data from receipts with the pre-built receipts service and extract text and tables with the layout service from your documents. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
-
+ * **Copy Custom Model** You can now copy models between regions and subscriptions using the new Copy Custom Model feature. Before invoking the Copy Custom Model API, you must first obtain authorization to copy into the target resource by calling the Copy Authorization operation against the target resource endpoint.
- * [Generate a copy authorization](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModelAuthorization) REST API
- * [Copy a custom model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModel) REST API
+
+ * [Generate a copy authorization](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModelAuthorization) REST API
+ * [Copy a custom model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModel) REST API
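The two-step copy flow above can be sketched by constructing the two request URLs involved. The resource names and model ID below are placeholders; the paths follow the linked v2.0 REST reference.

```python
from urllib.parse import quote

source_endpoint = "https://source-resource.cognitiveservices.azure.com"  # placeholder
target_endpoint = "https://target-resource.cognitiveservices.azure.com"  # placeholder
model_id = "00000000-0000-0000-0000-000000000000"  # placeholder custom model ID

# Step 1: obtain a copy authorization from the *target* resource.
auth_url = f"{target_endpoint}/formrecognizer/v2.0/custom/models/copyAuthorization"

# Step 2: POST that authorization to the *source* resource's copy operation.
copy_url = f"{source_endpoint}/formrecognizer/v2.0/custom/models/{quote(model_id)}/copy"

print(auth_url)
print(copy_url)
```

Both are POST operations; the authorization JSON returned by step 1 is passed in the body of step 2.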
### Security improvements * Customer-Managed Keys are now available for FormRecognizer. For more information, see [Data encryption at rest for Form Recognizer](./encrypt-data-at-rest.md). * Use Managed Identities for access to Azure resources with Azure Active Directory. For more information, see [Authorize access to managed identities](../authentication.md#authorize-access-to-managed-identities).
-## March 2020
+## March 2020
### New features
Complete a [quickstart](quickstarts/client-library.md) to get started writing a
## See also
-* [What is Form Recognizer?](./overview.md)
+* [What is Form Recognizer?](./overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-support.md
These Cognitive Services are language agnostic and don't have limitations based
* [Speech Service:Text-to-Speech](./speech-service/language-support.md#text-to-speech) * [Speech Service: Speech Translation](./speech-service/language-support.md#speech-translation)
-## Search
-
-* [Bing Custom Search](./bing-custom-search/language-support.md)
-* [Bing Image Search](./bing-image-search/language-support.md)
-* [Bing News Search](./bing-news-search/language-support.md)
-* [Bing Autosuggest](./bing-autosuggest/language-support.md)
-* [Bing Spell Check](./bing-spell-check/language-support.md)
-* [Bing Visual Search](./bing-visual-search/language-support.md)
-* [Bing Web Search](./bing-web-search/language-support.md)
- ## Decision * [Content Moderator](./content-moderator/language-support.md)
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/authentication.md
Title: Authenticate to Azure Communication Services description: Learn about the various ways an app or service can authenticate to Communication Services.-+ - Previously updated : 07/24/2020+ Last updated : 03/10/2021
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/call-flows.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Client And Server Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/client-and-server-architecture.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Detailed Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/detailed-call-flows.md
Previously updated : 12/11/2020 Last updated : 03/10/2021
communication-services Event Handling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/event-handling.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Identity Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/identity-model.md
Previously updated : 10/26/2020 Last updated : 03/10/2021
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
Previously updated : 10/03/2020 Last updated : 03/10/2021
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/logging-and-diagnostics.md
Previously updated : 10/15/2020 Last updated : 03/10/2021
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/metrics.md
Previously updated : 05/19/2020 Last updated : 03/10/2021
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/notifications.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pricing.md
Previously updated : 09/29/2020 Last updated : 03/10/2021
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
Previously updated : 10/03/2020 Last updated : 03/10/2021
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/reference.md
Previously updated : 09/29/2020 Last updated : 03/10/2021
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
Previously updated : 03/18/2020 Last updated : 03/10/2021
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
Previously updated : 10/10/2020 Last updated : 03/10/2021
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/concepts.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Plan Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/plan-solution.md
Previously updated : 10/05/2020 Last updated : 03/10/2021
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sdk-features.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Sip Interface Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sip-interface-infrastructure.md
Previously updated : 02/09/2021 Last updated : 03/10/2021
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/telephony-concept.md
Previously updated : 02/09/2021 Last updated : 03/10/2021
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/troubleshooting-info.md
Previously updated : 10/23/2020 Last updated : 03/10/2021
communication-services Ui Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/ui-sdk-features.md
description: Learn about UI Framework capabilities Previously updated : 11/16/2020 Last updated : 03/10/2021
communication-services Ui Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/ui-sdk-overview.md
description: Learn about Azure Communication Services UI Framework Previously updated : 11/16/2020 Last updated : 03/10/2021
communication-services About Call Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/about-call-types.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Previously updated : 03/04/2021 Last updated : 03/10/2021
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
Previously updated : 07/20/2020 Last updated : 03/10/2021
communication-services Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/access-tokens.md
Previously updated : 08/20/2020 Last updated : 03/10/2021 zone_pivot_groups: acs-js-csharp-java-python
communication-services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/get-started.md
Previously updated : 09/30/2020 Last updated : 03/10/2021 zone_pivot_groups: acs-js-csharp-java-python-swift-android
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Chat client library Previously updated : 12/08/2020 Last updated : 03/10/2021
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/create-communication-resource.md
Previously updated : 09/30/2020 Last updated : 03/10/2021 zone_pivot_groups: acs-plat-azp-net
communication-services Managed Identity From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity-from-cli.md
Previously updated : 02/25/2021 Last updated : 03/10/2021
communication-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity.md
Previously updated : 2/24/2021 Last updated : 03/10/2021 zone_pivot_groups: acs-js-csharp-java-python
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/get-phone-number.md
Previously updated : 10/05/2020 Last updated : 03/10/2021
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/handle-sms-events.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/logic-app.md
Previously updated : 10/06/2020 Last updated : 03/10/2021
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/send.md
Previously updated : 09/30/2020 Last updated : 03/10/2021
communication-services Create Your Own Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/create-your-own-components.md
description: In this quickstart, you'll learn how to build a custom component compatible with the UI Framework Previously updated : 11/16/2020 Last updated : 03/10/2021
communication-services Get Started With Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/get-started-with-components.md
description: In this quickstart, you'll learn how to get started with UI Framework base components Previously updated : 11/16/2020 Last updated : 03/10/2021
communication-services Get Started With Composites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/ui-framework/get-started-with-composites.md
description: In this quickstart, you'll learn how to get started with UI Framework Composite Components Previously updated : 11/16/2020 Last updated : 03/10/2021
communication-services Calling Client Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/calling-client-samples.md
Previously updated : 03/18/2020 Last updated : 03/10/2021 zone_pivot_groups: acs-plat-web-ios-android
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Calling SDK. Previously updated : 10/10/2020 Last updated : 03/10/2021
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
description: In this quickstart, you'll learn how to add video calling capabilities to your app using Azure Communication Services. Previously updated : 07/24/2020 Last updated : 03/10/2021
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
Title: Quickstart - Add voice calling to your app description: In this quickstart, you'll learn how to add calling capabilities to your app using Azure Communication Services.-- Previously updated : 07/24/2020++ Last updated : 03/10/2021
communication-services Pstn Call https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/pstn-call.md
description: In this quickstart, you'll learn how to add PSTN calling capabilities to your app using Azure Communication Services. Previously updated : 09/11/2020 Last updated : 03/10/2021 zone_pivot_groups: acs-plat-web-ios-android
communication-services Calling Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/calling-hero-sample.md
Previously updated : 07/20/2020 Last updated : 03/10/2021 zone_pivot_groups: acs-web-ios
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/chat-hero-sample.md
Previously updated : 07/20/2020 Last updated : 03/10/2021
For more information, see the following articles:
## Additional reading -- [Azure Communication GitHub](https://github.com/Azure/communication) - Find more examples and information on the official GitHub page
+- [Samples](./overview.md) - Find more samples and examples on our samples overview page.
- [Redux](https://redux.js.org/) - Client-side state management
- [FluentUI](https://aka.ms/fluent-ui) - Microsoft powered UI library
- [React](https://reactjs.org/) - Library for building user interfaces
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/overview.md
+
+ Title: Samples overview page
+
+description: Overview of available sample projects for Azure Communication Services.
+++++ Last updated : 03/12/2021+++
+# Samples
+
+Azure Communication Services has many samples available, which you can use to test out ACS services and features before creating your own application or use case.
+
+## Application samples
+
+| Sample Name | Description | Languages/Platforms Available |
+| :-- | :-- | :-- |
+| [Group Calling Hero Sample](./calling-hero-sample.md) | Provides a sample of creating a group calling application. | Web, iOS |
+| [Web Calling Sample](./web-calling-sample.md) | A step-by-step walkthrough of ACS Calling features on the web. | Web |
+| [Chat Hero Sample](./chat-hero-sample.md) | Provides a sample of creating a chat application. | Web & C# .NET |
+| [Contoso Medical App](https://github.com/Azure-Samples/communication-services-contoso-med-app) | Sample app demonstrating a patient-doctor flow. | Web & Node.js |
+| [Contoso Retail App](https://github.com/Azure-Samples/communication-services-contoso-retail-app) | Sample app demonstrating a retail support flow. | ASP.NET, .NET Core, JavaScript/Web |
+| [WPF Calling Sample](https://github.com/Azure-Samples/communication-services-web-calling-wpf-sample) | Sample app for Windows demonstrating calling functionality. | WPF / Node.js |
+
+## Quickstart samples
+Access code samples for quickstarts found in our documentation.
+ - [JavaScript](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/)
+ - [.NET](https://github.com/Azure-Samples/communication-services-dotnet-quickstarts/)
+ - [iOS](https://github.com/Azure-Samples/communication-services-ios-quickstarts/)
+ - [Android](https://github.com/Azure-Samples/communication-services-android-quickstarts/)
+ - [Python](https://github.com/Azure-Samples/communication-services-python-quickstarts/)
++
+## Next Steps
+
+ - [Create a Communication Services resource](../quickstarts/create-communication-resource.md)
communication-services Web Calling Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/web-calling-sample.md
Previously updated : 10/15/2020 Last updated : 03/10/2021
For more information, see the following articles:
## Additional reading

-- [Azure Communication GitHub](https://github.com/Azure/communication) - Find more examples and information on the official GitHub page
+- [Samples](./overview.md) - Find more samples and examples on our samples overview page.
- [Redux](https://redux.js.org/) - Client-side state management
- [FluentUI](https://aka.ms/fluent-ui) - Microsoft powered UI library
- [React](https://reactjs.org/) - Library for building user interfaces
communication-services Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/support.md
Previously updated : 02/23/2021 Last updated : 03/10/2021
communication-services Building App Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/building-app-start.md
description: Learn how to create a baseline web application that supports Azure
Previously updated : 01/03/2012 Last updated : 03/10/2021
communication-services Hmac Header Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/hmac-header-tutorial.md
Previously updated : 01/15/2021 Last updated : 03/10/2021
communication-services Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/postman-tutorial.md
description: Learn how to sign and make requests for ACS with Postman to send a
Previously updated : 03/08/2021 Last updated : 03/10/2021
communication-services Trusted Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/trusted-service-tutorial.md
Previously updated : 07/28/2020 Last updated : 03/10/2021
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-mq.md
Previously updated : 05/14/2020 Last updated : 03/10/2021 tags: connectors # Connect to an IBM MQ server from Azure Logic Apps
-The IBM MQ connector sends and retrieves messages stored in an IBM MQ server on premises or in Azure. This connector includes a Microsoft MQ client that communicates with a remote IBM MQ server across a TCP/IP network. This article provides a starter guide to use the MQ connector. You can start by browsing a single message on a queue and then try other actions.
+The MQ connector sends and retrieves messages stored in an MQ server on premises or in Azure. This connector includes a Microsoft MQ client that communicates with a remote IBM MQ server across a TCP/IP network. This article provides a starter guide to use the MQ connector. You can start by browsing a single message on a queue and then try other actions.
-The IBM MQ connector includes these actions but provides no triggers:
+The MQ connector includes these actions but provides no triggers:
-- Browse a single message without deleting the message from the IBM MQ server.
-- Browse a batch of messages without deleting the messages from the IBM MQ server.
-- Receive a single message and delete the message from the IBM MQ server.
-- Receive a batch of messages and delete the messages from the IBM MQ server.
-- Send a single message to the IBM MQ server.
+- Browse a single message without deleting the message from the MQ server.
+- Browse a batch of messages without deleting the messages from the MQ server.
+- Receive a single message and delete the message from the MQ server.
+- Receive a batch of messages and delete the messages from the MQ server.
+- Send a single message to the MQ server.
Here are the officially supported IBM WebSphere MQ versions:
## Prerequisites
-* If you're using an on-premises MQ server, [install the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a server within your network. The server where the on-premises data gateway is installed must also have .NET Framework 4.6 installed for the MQ connector to work.
+* If you use an on-premises MQ server, you need to [install the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a server within your network.
- After you finish installing the gateway, you must also create a resource in Azure for the on-premises data gateway. For more information, see [Set up the data gateway connection](../logic-apps/logic-apps-gateway-connection.md).
+ > [!NOTE]
+ > If your MQ server is publicly available or available within Azure, you don't have to use the data gateway.
- If your MQ server is publicly available or available within Azure, you don't have to use the data gateway.
+ * For the MQ connector to work, the server where you install the on-premises data gateway also needs to have .NET Framework 4.6 installed.
+
+ * After you install the on-premises data gateway, you also need to [create an Azure gateway resource for the on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md) that the MQ connector uses to access your on-premises MQ server.
-* The logic app where you want to add the MQ action. This logic app must use the same location as your on-premises data gateway connection and must already have a trigger that starts your workflow.
+* The logic app where you want to use the MQ connector. The MQ connector doesn't have any triggers, so you must add a trigger to your logic app first. For example, you can use the [Recurrence trigger](../connectors/connectors-native-recurrence.md). If you're new to logic apps, try this [quickstart to create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
- The MQ connector doesn't have any triggers, so you must add a trigger to your logic app first. For example, you can use the Recurrence trigger. If you're new to logic apps, try this [quickstart to create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+## Limitations
+
+The MQ connector doesn't support or use the message's **Format** field and doesn't perform any character set conversions. The connector only puts whatever data appears in the message field into a JSON message and sends the message along.
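As a minimal sketch of this pass-through behavior (the helper name `wrap_mq_message` is hypothetical, not part of the connector), the message field is simply placed into a JSON envelope without any Format handling or character set conversion:

```python
import json

def wrap_mq_message(message_field: str) -> str:
    # The message field is passed through unchanged -- no **Format**
    # handling and no character set conversion is performed.
    return json.dumps({"message": message_field})

print(wrap_mq_message("hello from MQ"))
```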
<a name="create-connection"></a>
If you don't already have an MQ connection when you add an MQ action, you're pro
* For **Server**, you can enter the MQ server name, or enter the IP address followed by a colon and the port number.
- * To use Secure Sockets Layer (SSL), select **Enable SSL?**.
+ * To use Transport Layer Security (TLS) or Secure Sockets Layer (SSL), select **Enable SSL?**.
The MQ connector currently supports only server authentication, not client authentication. For more information, see [Connection and authentication problems](#connection-problems).

1. In the **gateway** section, follow these steps:
- 1. From the **Subscription** list, select the Azure subscription associated with your Azure gateway resource.
+ 1. From the **Subscription** list, select the Azure subscription that's associated with your Azure gateway resource.
1. From the **Connection Gateway** list, select the Azure gateway resource that you want to use.
The **Receive messages** action has the same inputs and outputs as the **Browse
## Connector reference
-For technical details about actions and limits, which are described by the connector's Swagger description,
-review the connector's [reference page](/connectors/mq/).
+For technical details, such as actions and limits, which are described in the connector's Swagger file, review the [connector's reference page](/connectors/mq/).
## Next steps
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java-spring-v3.md
ms.devlang: java Previously updated : 02/28/2021- Last updated : 03/15/2021+
The Spring Data Azure Cosmos DB version 3 for Core (SQL) allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
-Spring Data Azure Cosmos DB has a dependency on the Spring Data framework. The Azure Cosmos DB SDK team releases Maven artifacts for Spring Data versions 2.2 and 2.3.
+> [!IMPORTANT]
+> Spring Data Azure Cosmos DB has a dependency on the Spring Data framework.
+>
+> azure-spring-data-cosmos versions from 3.0.0 to 3.4.0 support Spring Data versions 2.2 and 2.3.
+>
+> azure-spring-data-cosmos versions 3.5.0 and above support Spring Data versions 2.4.3 and above.
+>
The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. [Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model and framework for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application.
cosmos-db Sql Query Pagination https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-pagination.md
Previously updated : 07/29/2020 Last updated : 03/15/2021 # Pagination in Azure Cosmos DB
cost-management-billing Find Reservation Purchaser From Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/find-reservation-purchaser-from-logs.md
+
+ Title: Find a reservation purchaser from Azure Monitor logs
+description: This article helps you find a reservation purchaser with information from Azure Monitor logs.
+++++ Last updated : 03/13/2021+++
+# Find a reservation purchaser from Azure logs
+
+This article helps you find a reservation purchaser by using information from your directory logs. The directory logs from Azure Monitor show the email IDs of users who made reservation purchases.
+
+## Find the purchaser
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Monitor** > **Activity Log** > **Activity**.
+ :::image type="content" source="./media/find-reservation-purchaser-from-logs/activity-log-activity.png" alt-text="Screenshot showing navigation to Activity log - Activity." lightbox="./media/find-reservation-purchaser-from-logs/activity-log-activity.png" :::
+1. Select **Directory Activity**. If you see a message stating *You need permission to view directory-level logs*, select the [link](../../role-based-access-control/elevate-access-global-admin.md) to learn how to get permissions.
+ :::image type="content" source="./media/find-reservation-purchaser-from-logs/directory-activity-no-permission.png" alt-text="Screenshot showing Directory Activity without permission to view the log." lightbox="./media/find-reservation-purchaser-from-logs/directory-activity-no-permission.png" :::
+1. Once you have permission, filter **Tenant Resource Provider** with **Microsoft.Capacity**. You should see all reservation-related events for the selected time span. If needed, change the time span.
+ :::image type="content" source="./media/find-reservation-purchaser-from-logs/user-that-purchased-reservation.png" alt-text="Screenshot showing the user that purchased the reservation." lightbox="./media/find-reservation-purchaser-from-logs/user-that-purchased-reservation.png" :::
+ You might need to select **Edit columns** to show the **Event initiated by** column.
+ The user who made the reservation purchase is shown under **Event initiated by**.
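If you export the directory activity log and want to filter it programmatically, a sketch like the following mirrors the portal filter above (the field names `resourceProvider` and `eventInitiatedBy` are hypothetical placeholders; the real export schema may differ):

```python
import json

# Hypothetical exported directory activity log entries.
exported = json.loads("""[
  {"resourceProvider": "Microsoft.Capacity", "eventInitiatedBy": "alice@contoso.com"},
  {"resourceProvider": "Microsoft.Compute",  "eventInitiatedBy": "bob@contoso.com"}
]""")

# Keep only reservation-related events (Microsoft.Capacity), as in the portal.
purchasers = [entry["eventInitiatedBy"]
              for entry in exported
              if entry["resourceProvider"] == "Microsoft.Capacity"]
print(purchasers)
```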
+
+## Next steps
+
+- If needed, billing administrators can [take ownership of a reservation](view-reservations.md#how-billing-administrators-view-or-manage-reservations).
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-global-parameters.md
Previously updated : 03/04/2021 Last updated : 03/15/2021 # Global parameters in Azure Data Factory
There are two ways to integrate global parameters in your continuous integration
For most use cases, it is recommended to include global parameters in the ARM template. This will integrate natively with the solution outlined in [the CI/CD doc](continuous-integration-deployment.md). Global parameters will be added as an ARM template parameter by default as they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub. > [!NOTE]
-> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode.
+> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode.
+
+> [!WARNING]
+> You cannot use '-' in the parameter name. If you do, you'll receive the error "{"code":"BadRequest","message":"ErrorCode=InvalidTemplate,ErrorMessage=The expression 'pipeline().globalParameters.myparam-dbtest-url' is not valid: .....}". However, you can use '_' in the parameter name.
![Include in ARM template](media/author-global-parameters/include-arm-template.png)
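A quick pre-check like the following (the helper `is_safe_global_parameter_name` is a hypothetical sketch, not part of Data Factory) can catch the hyphen restriction before deployment; it only encodes the documented rule that '-' fails while '_' works:

```python
def is_safe_global_parameter_name(name: str) -> bool:
    # '-' in a global parameter name breaks pipeline expressions
    # (InvalidTemplate error); '_' is accepted.
    return "-" not in name

print(is_safe_global_parameter_name("myparam_dbtest_url"))   # safe
print(is_safe_global_parameter_name("myparam-dbtest-url"))   # unsafe
```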
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
Previously updated : 09/11/2020 Last updated : 03/15/2021 # Troubleshoot mapping data flows in Azure Data Factory
This article explores common troubleshooting methods for mapping data flows in A
- **Cause**: Undetermined.
- **Recommendation**: Check parameter value assignment in the pipeline. A parameter expression might contain invalid characters.
-### Error code: DF-Excel-InvalidConfiguration
-- **Message**: Excel sheet name or index is required.
-- **Cause**: Undetermined.
-- **Recommendation**: Check the parameter value. Specify the worksheet name or index for reading Excel data.
-
-- **Message**: Excel sheet name and index cannot exist at the same time.
-- **Cause**: Undetermined.
-- **Recommendation**: Check the parameter value. Specify the worksheet name or index for reading Excel data.
-
-- **Message**: Invalid range is provided.
-- **Cause**: Undetermined.
-- **Recommendation**: Check the parameter value. Specify a valid range by reference. For more information, see [Excel properties](./format-excel.md#dataset-properties).
-
-- **Message**: Invalid excel file is provided while only .xlsx and .xls are supported
-- **Cause**: Undetermined.
-- **Recommendation**: Make sure the Excel file extension is either .xlsx or .xls.
-
-
- ### Error code: DF-Excel-InvalidData
-- **Message**: Excel worksheet does not exist.
-- **Cause**: Undetermined.
-- **Recommendation**: Check the parameter value. Specify a valid worksheet name or index for reading Excel data.
-
-- **Message**: Reading excel files with different schema is not supported now.
-- **Cause**: Undetermined.
-- **Recommendation**: Use a supported Excel file.
-
-- **Message**: Data type is not supported.
-- **Cause**: Undetermined.
-- **Recommendation**: Use supported Excel file data types.

### Error code: 4502

- **Message**: There are substantial concurrent MappingDataflow executions that are causing failures due to throttling under Integration Runtime.
This article explores common troubleshooting methods for mapping data flows in A
- **Cause**: Data flow doesn't support linked services on self-hosted integration runtimes. - **Recommendation**: Configure data flow to run on a Managed Virtual Network integration runtime.
+### Error code: DF-Xml-InvalidValidationMode
+- **Message**: Invalid xml validation mode is provided.
+- **Recommendation**: Check the parameter value and specify the right validation mode.
+
+### Error code: DF-Xml-InvalidDataField
+- **Message**: The field for corrupt records must be string type and nullable.
+- **Recommendation**: Make sure that the column `\"_corrupt_record\"` in the source project has a string data type.
+
+### Error code: DF-Xml-MalformedFile
+- **Message**: Malformed xml in 'FailFastMode'.
+- **Recommendation**: Update the content of the XML file to the right format.
+
+### Error code: DF-Xml-InvalidDataType
+- **Message**: XML Element has sub elements or attributes and it can't be converted.
+
+### Error code: DF-Xml-InvalidReferenceResource
+- **Message**: Reference resource in the xml data file cannot be resolved.
+- **Recommendation**: You should check the reference resource in the XML data file.
+
+### Error code: DF-Xml-InvalidSchema
+- **Message**: Schema validation failed.
+
+### Error code: DF-Xml-UnsupportedExternalReferenceResource
+- **Message**: External reference resource in xml data file is not supported.
+- **Recommendation**: External reference resources aren't supported now, so update the XML file content to remove them.
+
+### Error code: DF-GEN2-InvalidAccountConfiguration
+- **Message**: Either one of account key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken should be specified.
+- **Recommendation**: Configure the right account in the related GEN2 linked service.
+
+### Error code: DF-GEN2-InvalidAuthConfiguration
+- **Message**: Only one of the three auth methods (Key, ServicePrincipal and MI) can be specified.
+- **Recommendation**: Choose the right auth type in the related GEN2 linked service.
+
+### Error code: DF-GEN2-InvalidServicePrincipalCredentialType
+- **Message**: ServicePrincipalCredentialType is invalid.
+
+### Error code: DF-GEN2-InvalidDataType
+- **Message**: Cloud type is invalid.
+
+### Error code: DF-Blob-InvalidAccountConfiguration
+- **Message**: Either one of account key or sas_token should be specified.
+
+### Error code: DF-Blob-InvalidAuthConfiguration
+- **Message**: Only one of the two auth methods (Key, SAS) can be specified.
+
+### Error code: DF-Blob-InvalidDataType
+- **Message**: Cloud type is invalid.
+
+### Error code: DF-Cosmos-PartitionKeyMissed
+- **Message**: Partition key path should be specified for update and delete operations.
+- **Recommendation**: Provide the partition key in the Cosmos sink settings.
+
+### Error code: DF-Cosmos-InvalidPartitionKey
+- **Message**: Partition key path cannot be empty for update and delete operations.
+- **Recommendation**: Provide the partition key in the Cosmos sink settings.
+
+### Error code: DF-Cosmos-IdPropertyMissed
+- **Message**: 'id' property should be mapped for delete and update operations.
+- **Recommendation**: Make sure that the input data has an `id` column in Cosmos sink settings. If not, use a **select or derive transformation** to generate this column before the sink.
+
+### Error code: DF-Cosmos-InvalidPartitionKeyContent
+- **Message**: partition key should start with /.
+- **Recommendation**: Make the partition key start with `/` in Cosmos sink settings, for example: `/movieId`.
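A small sketch of this normalization (the helper `normalize_partition_key_path` is hypothetical, not an ADF or Cosmos DB API) shows what a valid partition key path looks like:

```python
def normalize_partition_key_path(path: str) -> str:
    # A Cosmos DB partition key path must start with '/',
    # e.g. 'movieId' -> '/movieId'.
    return path if path.startswith("/") else "/" + path

print(normalize_partition_key_path("movieId"))
```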
+
+### Error code: DF-Cosmos-InvalidPartitionKey
+- **Message**: partitionKey not mapped in sink for delete and update operations.
+- **Recommendation**: In Cosmos sink settings, use the partition key that is the same as your container's partition key.
+
+### Error code: DF-Cosmos-InvalidConnectionMode
+- **Message**: Invalid connectionMode.
+- **Recommendation**: Confirm that the connection mode is one of the supported modes, **Gateway** or **DirectHttps**, in the Cosmos settings.
+
+### Error code: DF-Cosmos-InvalidAccountConfiguration
+- **Message**: Either accountName or accountEndpoint should be specified.
+
+### Error code: DF-Github-WriteNotSupported
+- **Message**: Github store does not allow writes.
+
+### Error code: DF-PGSQL-InvalidCredential
+- **Message**: User/password should be specified.
+- **Recommendation**: Make sure you have the right credential settings in the related PostgreSQL linked service.
+
+### Error code: DF-Snowflake-InvalidStageConfiguration
+- **Message**: Only blob storage type can be used as stage in snowflake read/write operation.
+
+### Error code: DF-Snowflake-InvalidStageConfiguration
+- **Message**: Snowflake stage properties should be specified with azure blob + sas authentication.
+
+### Error code: DF-Snowflake-InvalidDataType
+- **Message**: The spark type is not supported in snowflake.
+- **Recommendation**: Use the **derive transformation** to change the related column of the input data into the string type before the Snowflake sink.
+
+### Error code: DF-Hive-InvalidBlobStagingConfiguration
+- **Message**: Blob storage staging properties should be specified.
+
+### Error code: DF-Hive-InvalidGen2StagingConfiguration
+- **Message**: ADLS Gen2 storage staging only support service principal key credential.
+- **Recommendation**: Confirm that you apply the service principal key credential in the ADLS Gen2 linked service that is used as staging.
+
+### Error code: DF-Hive-InvalidGen2StagingConfiguration
+- **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spnKey or miServiceUri/miServiceToken is required.
+- **Recommendation**: Apply the right credential in the related ADLS Gen2 linked service that's used as Hive staging.
+
+### Error code: DF-Hive-InvalidDataType
+- **Message**: Unsupported Column(s).
+- **Recommendation**: Update the column of input data to match the data type supported by Hive.
+
+### Error code: DF-Hive-InvalidStorageType
+- **Message**: Storage type can either be blob or gen2.
+
+### Error code: DF-Delimited-InvalidConfiguration
+- **Message**: Either one of empty lines or custom header should be specified.
+- **Recommendation**: Specify empty lines or custom headers in CSV settings.
+
+### Error code: DF-Delimited-ColumnDelimiterMissed
+- **Message**: Column delimiter is required for parse.
+- **Recommendation**: Confirm you have the column delimiter in your CSV settings.
+
+### Error code: DF-MSSQL-InvalidCredential
+- **Message**: Either one of user/pwd or tenant/spnId/spnKey or miServiceUri/miServiceToken should be specified.
+- **Recommendation**: Apply the right credentials in the related MSSQL linked service.
+
+### Error code: DF-MSSQL-InvalidDataType
+- **Message**: Unsupported field(s).
+- **Recommendation**: Modify the input data column to match the data type supported by MSSQL.
+
+### Error code: DF-MSSQL-InvalidAuthConfiguration
+- **Message**: Only one of the three auth methods (Key, ServicePrincipal and MI) can be specified.
+- **Recommendation**: You can only specify one of the three auth methods (Key, ServicePrincipal and MI) in the related MSSQL linked service.
+
+### Error code: DF-MSSQL-InvalidCloudType
+- **Message**: Cloud type is invalid.
+- **Recommendation**: Check your cloud type in the related MSSQL linked service.
+
+### Error code: DF-SQLDW-InvalidBlobStagingConfiguration
+- **Message**: Blob storage staging properties should be specified.
+
+### Error code: DF-SQLDW-InvalidStorageType
+- **Message**: Storage type can either be blob or gen2.
+
+### Error code: DF-SQLDW-InvalidGen2StagingConfiguration
+- **Message**: ADLS Gen2 storage staging only support service principal key credential.
+
+### Error code: DF-SQLDW-InvalidConfiguration
+- **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken is required.
+
+### Error code: DF-DELTA-InvalidConfiguration
+- **Message**: Timestamp and version can't be set at the same time.
+
+### Error code: DF-DELTA-KeyColumnMissed
+- **Message**: Key column(s) should be specified for non-insertable operations.
+
+### Error code: DF-DELTA-InvalidTableOperationSettings
+- **Message**: Recreate and truncate options can't be both specified.
+
+### Error code: DF-Excel-WorksheetConfigMissed
+- **Message**: Excel sheet name or index is required.
+- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
+
+### Error code: DF-Excel-InvalidWorksheetConfiguration
+- **Message**: Excel sheet name and index cannot exist at the same time.
+- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
+
+### Error code: DF-Excel-InvalidRange
+- **Message**: Invalid range is provided.
+- **Recommendation**: Check the parameter value and specify the valid range by the following reference: [Excel format in Azure Data Factory-Dataset properties](https://docs.microsoft.com/azure/data-factory/format-excel#dataset-properties).
+
+### Error code: DF-Excel-WorksheetNotExist
+- **Message**: Excel worksheet does not exist.
+- **Recommendation**: Check the parameter value and specify the valid sheet name or index to read the Excel data.
+
+### Error code: DF-Excel-DifferentSchemaNotSupport
+- **Message**: Read excel files with different schema is not supported now.
+
+### Error code: DF-Excel-InvalidDataType
+- **Message**: Data type is not supported.
+
+### Error code: DF-Excel-InvalidFile
+- **Message**: Invalid excel file is provided while only .xlsx and .xls are supported.
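A trivial pre-flight check (the helper `is_supported_excel_file` is a hypothetical sketch, not part of Data Factory) encodes the documented extension rule:

```python
def is_supported_excel_file(filename: str) -> bool:
    # Only .xlsx and .xls files are supported by the Excel format.
    return filename.lower().endswith((".xlsx", ".xls"))

print(is_supported_excel_file("report.xlsx"))  # supported
print(is_supported_excel_file("report.csv"))   # not supported
```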
+
+### Error code: DF-AdobeIntegration-InvalidMapToFilter
+- **Message**: Custom resource can only have one Key/Id mapped to filter.
+
+### Error code: DF-AdobeIntegration-InvalidPartitionConfiguration
+- **Message**: Only single partition is supported. Partition schema may be RoundRobin or Hash.
+- **Recommendation**: In AdobeIntegration settings, confirm that you use only a single partition. The partition schema may be RoundRobin or Hash.
+
+### Error code: DF-AdobeIntegration-KeyColumnMissed
+- **Message**: Key must be specified for non-insertable operations.
+- **Recommendation**: Specify your key columns in AdobeIntegration settings for non-insertable operations.
+
+### Error code: DF-AdobeIntegration-InvalidPartitionType
+- **Message**: Partition type has to be roundRobin.
+- **Recommendation**: Confirm the partition type is roundRobin in AdobeIntegration settings.
+
+### Error code: DF-AdobeIntegration-InvalidPrivacyRegulation
+- **Message**: Only privacy regulation supported currently is gdpr.
+- **Recommendation**: Confirm the privacy regulation in AdobeIntegration settings is **'GDPR'**.
+
## Miscellaneous troubleshooting tips

- **Issue**: Unexpected exception occurred and execution failed.
- **Message**: During Data Flow activity execution: Hit unexpected exception and execution failed.
defender-for-iot Agent Based Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/agent-based-recommendations.md
Device recommendations provide insights and suggestions to improve device securi
| Severity | Name | Data Source | Description |
|--|--|--|--|
-| Medium | Open Ports on device | Classic security module | A listening endpoint was found on the device. |
-| Medium | Permissive firewall policy found in one of the chains. | Classic security module | Allowed firewall policy found (INPUT/OUTPUT). Firewall policy should deny all traffic by default, and define rules to allow necessary communication to/from the device. |
-| Medium | Permissive firewall rule in the input chain was found | Classic security module | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
-| Medium | Permissive firewall rule in the output chain was found | Classic security module | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
-| Medium | Operation system baseline validation has failed | Classic security module | Device doesn't comply with [CIS Linux benchmarks](https://www.cisecurity.org/cis-benchmarks/). |
+| Medium | Open Ports on device | Classic Defender-IoT-micro-agent | A listening endpoint was found on the device. |
+| Medium | Permissive firewall policy found in one of the chains. | Classic Defender-IoT-micro-agent | Allowed firewall policy found (INPUT/OUTPUT). Firewall policy should deny all traffic by default, and define rules to allow necessary communication to/from the device. |
+| Medium | Permissive firewall rule in the input chain was found | Classic Defender-IoT-micro-agent | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
+| Medium | Permissive firewall rule in the output chain was found | Classic Defender-IoT-micro-agent | A rule in the firewall has been found that contains a permissive pattern for a wide range of IP addresses or ports. |
+| Medium | Operating system baseline validation has failed | Classic Defender-IoT-micro-agent | Device doesn't comply with [CIS Linux benchmarks](https://www.cisecurity.org/cis-benchmarks/). |
### Agent based operational recommendations
Operational recommendations provide insights and suggestions to improve security
| Severity | Name | Data Source | Description |
|--|--|--|--|
-| Low | Agent sends unutilized messages | Classic security module | 10% or more of security messages were smaller than 4 KB during the last 24 hours. |
-| Low | Security twin configuration not optimal | Classic security module | Security twin configuration is not optimal. |
-| Low | Security twin configuration conflict | Classic security module | Conflicts were identified in the security twin configuration. | |
+| Low | Agent sends unutilized messages | Classic Defender-IoT-micro-agent | 10% or more of security messages were smaller than 4 KB during the last 24 hours. |
+| Low | Security twin configuration not optimal | Classic Defender-IoT-micro-agent | Security twin configuration is not optimal. |
+| Low | Security twin configuration conflict | Classic Defender-IoT-micro-agent | Conflicts were identified in the security twin configuration. |
## Next steps
defender-for-iot Agent Based Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/agent-based-security-alerts.md
For more information, see [customizable alerts](concept-customizable-security-al
| Name | Severity | Data Source | Description | Suggested remediation steps |
|--|--|--|--|--|
| **High** severity | | | | |
-| Binary Command Line | High | Classic security module | LA Linux binary being called/executed from the command line was detected. This process may be legitimate activity, or an indication that your device is compromised. | Review the command with the user that ran it and check if this is something legitimately expected to run on the device. If not, escalate the alert to your information security team. |
-| Disable firewall | High | Classic security module | Possible manipulation of on-host firewall detected. Malicious actors often disable the on-host firewall in an attempt to exfiltrate data. | Review with the user that ran the command to confirm if this was legitimate expected activity on the device. If not, escalate the alert to your information security team. |
-| Port forwarding detection | High | Classic security module | Initiation of port forwarding to an external IP address detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Possible attempt to disable Auditd logging detected | High | Classic security module | Linux Auditd system provides a way to track security-relevant information on the system. The system records as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine who violated the security policy and the actions they performed. Disabling Auditd logging may prevent your ability to discover violations of security policies used on the system. | Check with the device owner if this was legitimate activity with business reasons. If not, this event may be hiding activity by malicious actors. Immediately escalated the incident to your information security team. |
-| Reverse shells | High | Classic security module | Analysis of host data on a device detected a potential reverse shell. Reverse shells are often used to get a compromised machine to call back into a machine controlled by a malicious actor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Successful Bruteforce attempt | High | Classic security module | Multiple unsuccessful login attempts were identified, followed by a successful login. Attempted Brute force attack may have succeeded on the device. | Review SSH Brute force alert and the activity on the devices. <br>If the activity was malicious:<br> Roll out password reset for compromised accounts.<br> Investigate and remediate (if found) devices for malware. |
-| Successful local login | High | Classic security module | Successful local sign in to the device detected | Make sure the signed in user is an authorized party. |
-| Web shell | High | Classic security module | Possible web shell detected. Malicious actors commonly upload a web shell to a compromised machine to gain persistence or for further exploitation. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Binary Command Line | High | Classic Defender-IoT-micro-agent | A Linux binary being called/executed from the command line was detected. This process may be legitimate activity, or an indication that your device is compromised. | Review the command with the user that ran it and check if this is something legitimately expected to run on the device. If not, escalate the alert to your information security team. |
+| Disable firewall | High | Classic Defender-IoT-micro-agent | Possible manipulation of on-host firewall detected. Malicious actors often disable the on-host firewall in an attempt to exfiltrate data. | Review with the user that ran the command to confirm if this was legitimate expected activity on the device. If not, escalate the alert to your information security team. |
+| Port forwarding detection | High | Classic Defender-IoT-micro-agent | Initiation of port forwarding to an external IP address detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Possible attempt to disable Auditd logging detected | High | Classic Defender-IoT-micro-agent | The Linux Auditd system provides a way to track security-relevant information on the system. The system records as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine who violated the security policy and the actions they performed. Disabling Auditd logging may prevent your ability to discover violations of security policies used on the system. | Check with the device owner if this was legitimate activity with business reasons. If not, this event may be hiding activity by malicious actors. Immediately escalate the incident to your information security team. |
+| Reverse shells | High | Classic Defender-IoT-micro-agent | Analysis of host data on a device detected a potential reverse shell. Reverse shells are often used to get a compromised machine to call back into a machine controlled by a malicious actor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Successful Bruteforce attempt | High | Classic Defender-IoT-micro-agent | Multiple unsuccessful login attempts were identified, followed by a successful login. A brute-force attack attempt may have succeeded on the device. | Review the SSH brute-force alert and the activity on the devices. <br>If the activity was malicious:<br> Roll out a password reset for compromised accounts.<br> Investigate the devices for malware and remediate any found. |
+| Successful local login | High | Classic Defender-IoT-micro-agent | Successful local sign-in to the device detected. | Make sure the signed-in user is an authorized party. |
+| Web shell | High | Classic Defender-IoT-micro-agent | Possible web shell detected. Malicious actors commonly upload a web shell to a compromised machine to gain persistence or for further exploitation. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
| **Medium** severity | | | | |
-| Behavior similar to common Linux bots detected | Medium | Classic security module | Execution of a process normally associated with common Linux botnets detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Behavior similar to Fairware ransomware detected | Medium | Classic security module | Execution of rm -rf commands applied to suspicious locations detected using analysis of host data. Because rm -rf recursively deletes files, it is normally only used on discrete folders. In this case, it is being used in a location that could remove a large amount of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Review with the user that ran the command this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Behavior similar to ransomware detected | Medium | Classic security module | Execution of files similar to known ransomware that may prevent users from accessing their system, or personal files, and may demand ransom payment to regain access. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Crypto coin miner container image detected | Medium | Classic security module | Container detecting running known digital currency mining images. | 1. If this behavior is not intended, delete the relevant container image.<br> 2. Make sure that the Docker daemon is not accessible via an unsafe TCP socket.<br> 3. Escalate the alert to the information security team. |
-| Crypto coin miner image | Medium | Classic security module | Execution of a process normally associated with digital currency mining detected. | Verify with the user that ran the command if this was legitimate activity on the device. If not, escalate the alert to the information security team. |
-| Detected suspicious use of the nohup command | Medium | Classic security module | Suspicious use of the nohup command on host detected. Malicious actors commonly run the nohup command from a temporary directory, effectively allowing their executables to run in the background. Seeing this command run on files located in a temporary directory is not expected or usual behavior. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Detected suspicious use of the useradd command | Medium | Classic security module | Suspicious use of the useradd command detected on the device. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Exposed Docker daemon by TCP socket | Medium | Classic security module | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. Default Docker configuration enables full access to the Docker daemon, by anyone with access to the relevant port. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Failed local login | Medium | Classic security module | A failed local login attempt to the device was detected. | Make sure no unauthorized party has physical access to the device. |
-| File downloads from a known malicious source detected | Medium | Classic security module | Download of a file from a known malware source detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| htaccess file access detected | Medium | Classic security module | Analysis of host data detected possible manipulation of a htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running Apache Web software, including basic redirect functionality, and more advanced functions, such as basic password protection. Malicious actors often modify htaccess files on compromised machines to gain persistence. | Confirm this is legitimate expected activity on the host. If not, escalate the alert to your information security team. |
-| Known attack tool | Medium | Classic security module | A tool often associated with malicious users attacking other machines in some way was detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| IoT agent attempted and failed to parse the module twin configuration | Medium | Classic security module | The Defender for IoT security agent failed to parse the module twin configuration due to type mismatches in the configuration object | Validate your module twin configuration against the IoT agent configuration schema, fix all mismatches. |
-| Local host reconnaissance detected | Medium | Classic security module | Execution of a command normally associated with common Linux bot reconnaissance detected. | Review the suspicious command line to confirm that it was executed by a legitimate user. If not, escalate the alert to your information security team. |
-| Mismatch between script interpreter and file extension | Medium | Classic security module | Mismatch between the script interpreter and the extension of the script file provided as input detected. This type of mismatch is commonly associated with attacker script executions. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Possible backdoor detected | Medium | Classic security module | A suspicious file was downloaded and then run on a host in your subscription. This type of activity is commonly associated with the installation of a backdoor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Potential loss of data detected | Medium | Classic security module | Possible data egress condition detected using analysis of host data. Malicious actors often egress data from compromised machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Potential overriding of common files | Medium | Classic security module | Common executable overwritten on the device. Malicious actors are known to overwrite common files as a way to hide their actions or as a way to gain persistence. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Privileged container detected | Medium | Classic security module | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to host resources. If compromised, a malicious actor can use the privileged container to gain access to the host machine. | If the container doesn't need to run in privileged mode, remove the privileges from the container. |
-| Removal of system logs files detected | Medium | Classic security module | Suspicious removal of log files on the host detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Space after filename | Medium | Classic security module | Execution of a process with a suspicious extension detected using analysis of host data. Suspicious extensions may trick users into thinking files are safe to be opened and can indicate the presence of malware on the system. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspected malicious credentials access tools detected | Medium | Classic security module | Detection usage of a tool commonly associated with malicious attempts to access credentials. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspicious compilation detected | Medium | Classic security module | Suspicious compilation detected. Malicious actors often compile exploits on a compromised machine to escalate privileges. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspicious file download followed by file run activity | Medium | Classic security module | Analysis of host data detected a file that was downloaded and run in the same command. This technique is commonly used by malicious actors to get infected files onto victim machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
-| Suspicious IP address communication | Medium | Classic security module | Communication with a suspicious IP address detected. | Verify if the connection is legitimate. Consider blocking communication with the suspicious IP. |
+| Behavior similar to common Linux bots detected | Medium | Classic Defender-IoT-micro-agent | Execution of a process normally associated with common Linux botnets detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Behavior similar to Fairware ransomware detected | Medium | Classic Defender-IoT-micro-agent | Execution of rm -rf commands applied to suspicious locations detected using analysis of host data. Because rm -rf recursively deletes files, it is normally only used on discrete folders. In this case, it is being used in a location that could remove a large amount of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Behavior similar to ransomware detected | Medium | Classic Defender-IoT-micro-agent | Execution of files similar to known ransomware that may prevent users from accessing their system, or personal files, and may demand ransom payment to regain access. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Crypto coin miner container image detected | Medium | Classic Defender-IoT-micro-agent | A container running known digital currency mining images was detected. | 1. If this behavior is not intended, delete the relevant container image.<br> 2. Make sure that the Docker daemon is not accessible via an unsafe TCP socket.<br> 3. Escalate the alert to the information security team. |
+| Crypto coin miner image | Medium | Classic Defender-IoT-micro-agent | Execution of a process normally associated with digital currency mining detected. | Verify with the user that ran the command if this was legitimate activity on the device. If not, escalate the alert to the information security team. |
+| Detected suspicious use of the nohup command | Medium | Classic Defender-IoT-micro-agent | Suspicious use of the nohup command on host detected. Malicious actors commonly run the nohup command from a temporary directory, effectively allowing their executables to run in the background. Seeing this command run on files located in a temporary directory is not expected or usual behavior. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Detected suspicious use of the useradd command | Medium | Classic Defender-IoT-micro-agent | Suspicious use of the useradd command detected on the device. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Exposed Docker daemon by TCP socket | Medium | Classic Defender-IoT-micro-agent | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, the Docker configuration does not use encryption or authentication when a TCP socket is enabled. The default Docker configuration enables full access to the Docker daemon by anyone with access to the relevant port. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Failed local login | Medium | Classic Defender-IoT-micro-agent | A failed local login attempt to the device was detected. | Make sure no unauthorized party has physical access to the device. |
+| File downloads from a known malicious source detected | Medium | Classic Defender-IoT-micro-agent | Download of a file from a known malware source detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| htaccess file access detected | Medium | Classic Defender-IoT-micro-agent | Analysis of host data detected possible manipulation of an htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running Apache Web software, including basic redirect functionality, and more advanced functions, such as basic password protection. Malicious actors often modify htaccess files on compromised machines to gain persistence. | Confirm this is legitimate expected activity on the host. If not, escalate the alert to your information security team. |
+| Known attack tool | Medium | Classic Defender-IoT-micro-agent | A tool often associated with malicious users attacking other machines in some way was detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| IoT agent attempted and failed to parse the module twin configuration | Medium | Classic Defender-IoT-micro-agent | The Defender for IoT security agent failed to parse the module twin configuration due to type mismatches in the configuration object. | Validate your module twin configuration against the IoT agent configuration schema, and fix all mismatches. |
+| Local host reconnaissance detected | Medium | Classic Defender-IoT-micro-agent | Execution of a command normally associated with common Linux bot reconnaissance detected. | Review the suspicious command line to confirm that it was executed by a legitimate user. If not, escalate the alert to your information security team. |
+| Mismatch between script interpreter and file extension | Medium | Classic Defender-IoT-micro-agent | Mismatch between the script interpreter and the extension of the script file provided as input detected. This type of mismatch is commonly associated with attacker script executions. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Possible backdoor detected | Medium | Classic Defender-IoT-micro-agent | A suspicious file was downloaded and then run on a host in your subscription. This type of activity is commonly associated with the installation of a backdoor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Potential loss of data detected | Medium | Classic Defender-IoT-micro-agent | Possible data egress condition detected using analysis of host data. Malicious actors often egress data from compromised machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Potential overriding of common files | Medium | Classic Defender-IoT-micro-agent | Common executable overwritten on the device. Malicious actors are known to overwrite common files as a way to hide their actions or as a way to gain persistence. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Privileged container detected | Medium | Classic Defender-IoT-micro-agent | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to host resources. If compromised, a malicious actor can use the privileged container to gain access to the host machine. | If the container doesn't need to run in privileged mode, remove the privileges from the container. |
+| Removal of system logs files detected | Medium | Classic Defender-IoT-micro-agent | Suspicious removal of log files on the host detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Space after filename | Medium | Classic Defender-IoT-micro-agent | Execution of a process with a suspicious extension detected using analysis of host data. Suspicious extensions may trick users into thinking files are safe to be opened and can indicate the presence of malware on the system. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspected malicious credentials access tools detected | Medium | Classic Defender-IoT-micro-agent | Usage of a tool commonly associated with malicious attempts to access credentials was detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspicious compilation detected | Medium | Classic Defender-IoT-micro-agent | Suspicious compilation detected. Malicious actors often compile exploits on a compromised machine to escalate privileges. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspicious file download followed by file run activity | Medium | Classic Defender-IoT-micro-agent | Analysis of host data detected a file that was downloaded and run in the same command. This technique is commonly used by malicious actors to get infected files onto victim machines. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. |
+| Suspicious IP address communication | Medium | Classic Defender-IoT-micro-agent | Communication with a suspicious IP address detected. | Verify if the connection is legitimate. Consider blocking communication with the suspicious IP. |
| **Low** severity | | | | |
-| Bash history cleared | Low | Classic security module | Bash history log cleared. Malicious actors commonly erase bash history to hide their own commands from appearing in the logs. | Review with the user that ran the command that the activity in this alert to see if you recognize this as legitimate administrative activity. If not, escalate the alert to the information security team. |
-| Device silent | Low | Classic security module | Device has not sent any telemetry data in the last 72 hours. | Make sure device is online and sending data. Check that the Azure Security Agent is running on the device. |
-| Failed Bruteforce attempt | Low | Classic security module | Multiple unsuccessful login attempts identified. Potential Brute force attack attempt failed on the device. | Review SSH Brute force alerts and the activity on the device. No further action required. |
-| Local user added to one or more groups | Low | Classic security module | New local user added to a group on this device. Changes to user groups are uncommon, and can indicate a malicious actor may be collecting extra permissions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
-| Local user deleted from one or more groups | Low | Classic security module | A local user was deleted from one or more groups. Malicious actors are known to use this method in an attempt to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
-| Local user deletion detected | Low | Classic security module | Deletion of a local user detected. Local user deletion is uncommon, a malicious actor may be trying to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
+| Bash history cleared | Low | Classic Defender-IoT-micro-agent | Bash history log cleared. Malicious actors commonly erase bash history to hide their own commands from appearing in the logs. | Review the activity in this alert with the user that ran the command to see if you recognize it as legitimate administrative activity. If not, escalate the alert to the information security team. |
+| Device silent | Low | Classic Defender-IoT-micro-agent | Device has not sent any telemetry data in the last 72 hours. | Make sure device is online and sending data. Check that the Azure Security Agent is running on the device. |
+| Failed Bruteforce attempt | Low | Classic Defender-IoT-micro-agent | Multiple unsuccessful login attempts identified. A potential brute-force attack attempt failed on the device. | Review the SSH brute-force alerts and the activity on the device. No further action required. |
+| Local user added to one or more groups | Low | Classic Defender-IoT-micro-agent | New local user added to a group on this device. Changes to user groups are uncommon, and can indicate a malicious actor may be collecting extra permissions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
+| Local user deleted from one or more groups | Low | Classic Defender-IoT-micro-agent | A local user was deleted from one or more groups. Malicious actors are known to use this method in an attempt to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
+| Local user deletion detected | Low | Classic Defender-IoT-micro-agent | Deletion of a local user detected. Local user deletion is uncommon; a malicious actor may be trying to deny access to legitimate users or to delete the history of their actions. | Verify if the change is consistent with the permissions required by the affected user. If the change is inconsistent, escalate to your Information Security team. |
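For the "Exposed Docker daemon by TCP socket" alert above, a quick local check is to look for a listener on Docker's conventional TCP API ports (2375 plaintext, 2376 TLS). This is a minimal sketch, assuming `ss` from iproute2 is available on the device; it is not part of the Defender agent itself.

```shell
# Report whether dockerd's conventional TCP API ports have a listener.
check_docker_tcp() {
  if ss -tln 2>/dev/null | grep -qE ':(2375|2376)\b'; then
    echo "exposed"      # a TCP listener exists on 2375/2376
  else
    echo "not-exposed"  # default: dockerd listens only on its Unix socket
  fi
}
check_docker_tcp
```

If the check reports "exposed" and TCP access is genuinely required, the daemon should at minimum be restricted to TLS with client authentication rather than left on the unauthenticated plaintext port.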
## Next steps
defender-for-iot Agent Based Security Custom Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/agent-based-security-custom-alerts.md
The following lists of Defender for IoT alerts are definable by you based on you
| Severity | Alert name | Data source | Description | Suggested remediation | |--|--|--|--|--|
-| Low | Custom alert - The number of active connections is outside the allowed range | Classic security module, Azure RTOS | Number of active connections within a specific time window is outside the currently configured and allowable range. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed connection list. |
-| Low | Custom alert - The outbound connection created to an IP that isn't allowed | Classic security module, Azure RTOS | An outbound connection was created to an IP that is outside your allowed IP list. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed IP list. |
-| Low | Custom alert - The number of failed local logins is outside the allowed range | Classic security module, Azure RTOS | The number of failed local logins within a specific time window is outside the currently configured and allowable range. | |
-| Low | Custom alert - The sign in of a user that is not on the allowed user list | Classic security module, Azure RTOS | A local user outside your allowed user list, logged in to the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
-| Low | Custom alert - A process was executed that is not allowed | Classic security module, Azure RTOS | A process that is not allowed was executed on the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
+| Low | Custom alert - The number of active connections is outside the allowed range | Classic Defender-IoT-micro-agent, Azure RTOS | Number of active connections within a specific time window is outside the currently configured and allowable range. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed connection list. |
+| Low | Custom alert - The outbound connection created to an IP that isn't allowed | Classic Defender-IoT-micro-agent, Azure RTOS | An outbound connection was created to an IP that is outside your allowed IP list. | Investigate the device logs. Learn where the connection originated and determine if it is benign or malicious. If malicious, remove possible malware and understand source. If benign, add the source to the allowed IP list. |
+| Low | Custom alert - The number of failed local logins is outside the allowed range | Classic Defender-IoT-micro-agent, Azure RTOS | The number of failed local logins within a specific time window is outside the currently configured and allowable range. | |
+| Low | Custom alert - The sign in of a user that is not on the allowed user list | Classic Defender-IoT-micro-agent, Azure RTOS | A local user outside your allowed user list, logged in to the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
+| Low | Custom alert - A process was executed that is not allowed | Classic Defender-IoT-micro-agent, Azure RTOS | A process that is not allowed was executed on the device. | If you are saving raw data, navigate to your log analytics account and use the data to investigate the device, identify the source, and then fix the allow/block list for those settings. If you are not currently saving raw data, go to the device and fix the allow/block list for those settings. |
## Next steps
defender-for-iot Azure Iot Security Local Configuration C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/azure-iot-security-local-configuration-c.md
Changes to the configuration file take place when the agent is restarted.
| TriggerdEventsInterval | ISO8601 string | Scheduler interval for triggered events collection |
| ConnectionTimeout | ISO8601 string | Time period before the connection to IoT Hub times out |
| Authentication | JsonObject | Authentication configuration. This object contains all the information needed for authentication against IoT Hub |
-| Identity | "DPS", "SecurityModule", "Device" | Authentication identity - DPS if authentication is made through DPS, SecurityModule if authentication is made via security module credentials or device if authentication is made with Device credentials |
+| Identity | "DPS", "SecurityModule", "Device" | Authentication identity - DPS if authentication is made through DPS, SecurityModule if authentication is made via Defender-IoT-micro-agent credentials, or Device if authentication is made with device credentials |
| AuthenticationMethod | "SasToken", "SelfSignedCertificate" | The user secret for authentication - choose SasToken if the user secret is a symmetric key, or SelfSignedCertificate if the secret is a self-signed certificate |
| FilePath | Path to file (string) | Path to the file that contains the authentication secret |
| HostName | string | The host name of the Azure IoT hub. Usually <my-hub>.azure-devices.net |
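Putting the table together, a minimal sketch of the Authentication section of the agent's local configuration file might look like this. The key names come from the table above; the exact nesting, file layout, and the sample values are assumptions shown for illustration only:

```json
{
  "Authentication": {
    "Identity": "SecurityModule",
    "AuthenticationMethod": "SasToken",
    "FilePath": "/var/defender-iot/symmetric-key",
    "HostName": "my-hub.azure-devices.net"
  }
}
```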
defender-for-iot Azure Iot Security Local Configuration Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/azure-iot-security-local-configuration-csharp.md
For Windows:
| Configuration name | Possible values | Details |
|:--|:--|:--|
-| moduleName | string | Name of the security module identity. This name must correspond to the module identity name in the device. |
+| moduleName | string | Name of the Defender-IoT-micro-agent identity. This name must correspond to the module identity name in the device. |
| deviceId | string | ID of the device (as registered in Azure IoT Hub). |
| schedulerInterval | TimeSpan string | Internal scheduler interval. |
| gatewayHostname | string | Host name of the Azure IoT Hub. Usually <my-hub>.azure-devices.net |
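For illustration only, the settings above might appear in the Windows agent's configuration file along these lines. The XML layout and every sample value here are assumptions; only the key names come from the table above:

```xml
<General>
  <!-- Illustrative values only; use your own module and device identities. -->
  <add key="moduleName" value="myDefenderIotMicroAgent" />
  <add key="deviceId" value="my-device" />
  <add key="schedulerInterval" value="00:10:00" />
  <add key="gatewayHostname" value="my-hub.azure-devices.net" />
</General>
```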
defender-for-iot Azure Rtos Security Module Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/azure-rtos-security-module-api.md
Title: Security Module for Azure RTOS API
-description: Reference API for the Security Module for Azure RTOS.
+ Title: Defender-IoT-micro-agent for Azure RTOS API
+description: Reference API for the Defender-IoT-micro-agent for Azure RTOS.
documentationcenter: na
Last updated 09/07/2020
-# Security Module for Azure RTOS API
-This API is intended for use with the Security Module for Azure RTOS only. For additional resources, see the [Security Module for Azure RTOS GitHub resource](https://github.com/azure-rtos/azure-iot-preview/releases).
+# Defender-IoT-micro-agent for Azure RTOS API (preview)
-## Enable Security Module for Azure RTOS
+This API is intended for use with the Defender-IoT-micro-agent for Azure RTOS only. For additional resources, see the [Defender-IoT-micro-agent for Azure RTOS GitHub resource](https://github.com/azure-rtos/azure-iot-preview/releases).
+
+## Enable Defender-IoT-micro-agent for Azure RTOS
**nx_azure_iot_security_module_enable**
UINT nx_azure_iot_security_module_enable(NX_AZURE_IOT *nx_azure_iot_ptr);
### Description
-This routine enables the Azure IoT Security Module subsystem. An internal state machine manages collection of security events and sends them to Azure IoT Hub. Only one NX_AZURE_IOT_SECURITY_MODULE instance is required and needed to manage data collection.
+This routine enables the Azure IoT Defender-IoT-micro-agent subsystem. An internal state machine manages collection of security events and sends them to Azure IoT Hub. Only one NX_AZURE_IOT_SECURITY_MODULE instance is required to manage data collection.
### Parameters
This routine enables the Azure IoT Security Module subsystem. An internal state
Threads
-## Disable Azure IoT Security Module
+## Disable Azure IoT Defender-IoT-micro-agent
**nx_azure_iot_security_module_disable**
UINT nx_azure_iot_security_module_disable(NX_AZURE_IOT *nx_azure_iot_ptr);
### Description
-This routine disables the Azure IoT Security Module subsystem.
+This routine disables the Azure IoT Defender-IoT-micro-agent subsystem.
### Parameters
Threads
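As a sketch only, the two calls are typically paired as shown below. This assumes the Azure RTOS NetX Duo sources, an already initialized `NX_AZURE_IOT` instance, and the `NX_AZURE_IOT_SUCCESS` return code; it is not standalone-compilable:

```c
#include "nx_azure_iot.h"
#include "nx_azure_iot_security_module.h"

/* nx_azure_iot is assumed to be the NX_AZURE_IOT instance created
   during Azure IoT middleware initialization. */
extern NX_AZURE_IOT nx_azure_iot;

VOID security_module_toggle(VOID)
{
    UINT status;

    /* Start security event collection; only one instance is needed. */
    status = nx_azure_iot_security_module_enable(&nx_azure_iot);
    if (status != NX_AZURE_IOT_SUCCESS)
    {
        /* Handle the error, for example by logging the status code. */
        return;
    }

    /* ... later, stop collection if it is no longer required ... */
    status = nx_azure_iot_security_module_disable(&nx_azure_iot);
}
```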
## Next steps
-To learn more about how to get started with Azure RTOS Security Module, see the following articles:
+To learn more about how to get started with Azure RTOS Defender-IoT-micro-agent, see the following articles:
-- Review the Defender for IoT RTOS security module [overview](iot-security-azure-rtos.md).
+- Review the Defender for IoT RTOS Defender-IoT-micro-agent [overview](iot-security-azure-rtos.md).
defender-for-iot Concept Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-baseline.md
Baseline custom checks establish a custom list of checks for each device baselin
1. Upload the **baseline custom checks** file to the device.
-1. Add baseline properties to the security module and click **Save**.
+1. Add baseline properties to the Defender-IoT-micro-agent and click **Save**.
### Baseline custom check file example
defender-for-iot Concept Rtos Security Alerts Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-rtos-security-alerts-recommendations.md
Title: Security Module for Azure RTOS built-in & customizable alerts and recommendations
-description: Learn about security alerts and recommended remediation using the Azure IoT Security Module -RTOS.
+ Title: Defender-IoT-micro-agent for Azure RTOS built-in & customizable alerts and recommendations
+description: Learn about security alerts and recommended remediation using the Azure IoT Defender-IoT-micro-agent -RTOS.
documentationcenter: na
Last updated 09/07/2020
-# Security Module for Azure RTOS security alerts and recommendations (preview)
+# Defender-IoT-micro-agent for Azure RTOS security alerts and recommendations (preview)
-Security Module for Azure RTOS continuously analyzes your IoT solution using advanced analytics and threat intelligence to alert you to potential malicious activity and suspicious system modifications. You can also create custom alerts based on your knowledge of expected device behavior and baselines.
+Defender-IoT-micro-agent for Azure RTOS continuously analyzes your IoT solution using advanced analytics and threat intelligence to alert you to potential malicious activity and suspicious system modifications. You can also create custom alerts based on your knowledge of expected device behavior and baselines.
-A Security Module for Azure RTOS alert acts as an indicator of potential compromise, and should be investigated and remediated. A Security Module for Azure RTOS recommendation identifies weak security posture to be remediated and updated.
+A Defender-IoT-micro-agent for Azure RTOS alert acts as an indicator of potential compromise, and should be investigated and remediated. A Defender-IoT-micro-agent for Azure RTOS recommendation identifies weak security posture to be remediated and updated.
In this article, you'll find a list of built-in alerts and recommendations that are triggered based on the default ranges, and customizable with your own values, based on expected or baseline behavior.
-For more information on how alert customization works in the Defender for IoT service, see [customizable alerts](concept-customizable-security-alerts.md). The specific alerts and recommendations available for customization when using the Security Module for Azure RTOS are detailed in the following tables.
+For more information on how alert customization works in the Defender for IoT service, see [customizable alerts](concept-customizable-security-alerts.md). The specific alerts and recommendations available for customization when using the Defender-IoT-micro-agent for Azure RTOS are detailed in the following tables.
-## Security Module for Azure RTOS supported security alerts
+## Defender-IoT-micro-agent for Azure RTOS supported security alerts
### Device-related security alerts
For more information on how alert customization works in the Defender for IoT se
|Deleted certificate | Detected deletion of a certificate from an IoT Hub |
|New certificate | Detected addition of new certificate to an IoT Hub |
-## Security Module for Azure RTOS supported customizable alerts
+## Defender-IoT-micro-agent for Azure RTOS supported customizable alerts
### Device related customizable alerts
For more information on how alert customization works in the Defender for IoT se
|Updates to twin modules | Number of updates to twin modules outside the allowed range |
|Unauthorized operations | Number of unauthorized operations outside the allowed range |
-## Security Module for Azure RTOS supported recommendations
+## Defender-IoT-micro-agent for Azure RTOS supported recommendations
### Device-related recommendations
For a complete list of all Defender for IoT service related alerts and recommend
## Next steps
-- [Quickstart: Security Module for Azure RTOS](quickstart-azure-rtos-security-module.md)
-- [Configure and customize Security Module for Azure RTOS](how-to-azure-rtos-security-module.md)
-- Refer to the [Security Module for Azure RTOS API](azure-rtos-security-module-api.md)
+- [Quickstart: Defender-IoT-micro-agent for Azure RTOS](quickstart-azure-rtos-security-module.md)
+- [Configure and customize Defender-IoT-micro-agent for Azure RTOS](how-to-azure-rtos-security-module.md)
+- Refer to the [Defender-IoT-micro-agent for Azure RTOS API](azure-rtos-security-module-api.md)
defender-for-iot Concept Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-rtos-security-module.md
Title: Conceptual explanation of the basics of the Security Module for Azure RTOS
-description: Learn the basics about the Security Module for Azure RTOS concepts and workflow.
+ Title: Conceptual explanation of the basics of the Defender-IoT-micro-agent for Azure RTOS
+description: Learn the basics about the Defender-IoT-micro-agent for Azure RTOS concepts and workflow.
documentationcenter: na
Last updated 09/09/2020
-# Security Module for Azure RTOS (preview)
+# Defender-IoT-micro-agent for Azure RTOS (preview)
-Use this article to get a better understanding of the Security Module for Azure RTOS, including features and benefits as well as links to relevant configuration and reference resources.
+Use this article to get a better understanding of the Defender-IoT-micro-agent for Azure RTOS, including features and benefits as well as links to relevant configuration and reference resources.
-## Azure RTOS IoT security module
+## Azure RTOS IoT Defender-IoT-micro-agent
-Security Module for Azure RTOS provides a comprehensive security solution for Azure RTOS devices as part of the NetX Duo offering. Within the NetX Duo offering, Azure RTOS ships with the Azure IoT Security Module built-in, and provides coverage for common threats on your real-time operating system devices once activated.
+Defender-IoT-micro-agent for Azure RTOS provides a comprehensive security solution for Azure RTOS devices as part of the NetX Duo offering. Within the NetX Duo offering, Azure RTOS ships with the Azure IoT Defender-IoT-micro-agent built-in, and provides coverage for common threats on your real-time operating system devices once activated.
-The Security Module for Azure RTOS runs in the background, and provides a seamless user experience, while sending security messages using each customer's unique connections to their IoT Hub. The Security Module for Azure RTOS is enabled by default.
+The Defender-IoT-micro-agent for Azure RTOS runs in the background, and provides a seamless user experience, while sending security messages using each customer's unique connections to their IoT Hub. The Defender-IoT-micro-agent for Azure RTOS is enabled by default.
## Azure RTOS NetX Duo
The module offers the following features:
- **Device behavior baselines based on custom alerts**
- **Improve device security hygiene**
-## Security Module for Azure RTOS architecture
+## Defender-IoT-micro-agent for Azure RTOS architecture
-The Security Module for Azure RTOS is initialized by the Azure IoT middleware platform and uses IoT Hub clients to send security telemetry to the Hub.
+The Defender-IoT-micro-agent for Azure RTOS is initialized by the Azure IoT middleware platform and uses IoT Hub clients to send security telemetry to the Hub.
-The Security Module for Azure RTOS monitors the following device activity and information using three collectors:
+The Defender-IoT-micro-agent for Azure RTOS monitors the following device activity and information using three collectors:
- Device network activity: **TCP**, **UDP**, and **ICMP**
- System information such as **ThreadX** and **NetX Duo** versions
- Heartbeat events
Each time interval is configurable and the IoT connectors can be enabled and dis
## Supported security alerts and recommendations
-The Security Module for Azure RTOS supports specific security alerts and recommendations. Make sure to [review and customize the relevant alert and recommendation values](concept-rtos-security-alerts-recommendations.md) for your service after completing the initial configuration.
+The Defender-IoT-micro-agent for Azure RTOS supports specific security alerts and recommendations. Make sure to [review and customize the relevant alert and recommendation values](concept-rtos-security-alerts-recommendations.md) for your service after completing the initial configuration.
## Ready to begin?
-Security Module for Azure RTOS is provided as a free download for your IoT devices. The Defender for IoT cloud service is available with a 30-day trial per Azure subscription. [Download the security module now](https://github.com/azure-rtos/azure-iot-preview/releases) and let's get started.
+Defender-IoT-micro-agent for Azure RTOS is provided as a free download for your IoT devices. The Defender for IoT cloud service is available with a 30-day trial per Azure subscription. [Download the Defender-IoT-micro-agent now](https://github.com/azure-rtos/azure-iot-preview/releases) and let's get started.
## Next steps
-- Get started with Security Module for Azure RTOS [prerequisites and setup](quickstart-azure-rtos-security-module.md).
-- Learn more about Security Module for Azure RTOS [security alerts and recommendation support](concept-rtos-security-alerts-recommendations.md).
-- Use the Security Module for Azure RTOS [reference API](azure-rtos-security-module-api.md).
+- Get started with Defender-IoT-micro-agent for Azure RTOS [prerequisites and setup](quickstart-azure-rtos-security-module.md).
+- Learn more about Defender-IoT-micro-agent for Azure RTOS [security alerts and recommendation support](concept-rtos-security-alerts-recommendations.md).
+- Use the Defender-IoT-micro-agent for Azure RTOS [reference API](azure-rtos-security-module-api.md).
defender-for-iot Concept Security Agent Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-security-agent-authentication-methods.md
This article explains the different authentication methods you can use with the AzureIoTSecurity agent to authenticate with the IoT Hub.
-For each device onboarded to Defender for IoT in the IoT Hub, a security module is required. To authenticate the device, Defender for IoT can use one of two methods. Choose the method that works best for your existing IoT solution.
+For each device onboarded to Defender for IoT in the IoT Hub, a Defender-IoT-micro-agent is required. To authenticate the device, Defender for IoT can use one of two methods. Choose the method that works best for your existing IoT solution.
- SecurityModule option
- Device option
For each device onboarded to Defender for IoT in the IoT Hub, a security module
The two methods for the Defender for IoT AzureIoTSecurity agent to perform authentication:
-- **SecurityModule** authentication mode<br>
-The agent is authenticated using the security module identity independently of the device identity.
-Use this authentication type if you would like the security agent to use a dedicated authentication method through security module (symmetric key only).
+- **Defender-IoT-micro-agent** authentication mode<br>
+The agent is authenticated using the Defender-IoT-micro-agent identity independently of the device identity.
+Use this authentication type if you would like the security agent to use a dedicated authentication method through Defender-IoT-micro-agent (symmetric key only).
- **Device** authentication mode<br>
-In this method, the security agent first authenticates with the device identity. After the initial authentication, the Defender for IoT agent performs a **REST** call to the IoT Hub using the REST API with the authentication data of the device. The Defender for IoT agent then requests the security module authentication method and data from the IoT Hub. In the final step, the Defender for IoT agent performs an authentication against the Defender for IoT module.
+In this method, the security agent first authenticates with the device identity. After the initial authentication, the Defender for IoT agent performs a **REST** call to the IoT Hub using the REST API with the authentication data of the device. The Defender for IoT agent then requests the Defender-IoT-micro-agent authentication method and data from the IoT Hub. In the final step, the Defender for IoT agent performs an authentication against the Defender for IoT module.
Use this authentication type if you would like the security agent to reuse an existing device authentication method (self-signed certificate or symmetric key).
defender-for-iot Concept Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/concept-security-module.md
Title: Security module and device twins
-description: Learn about the concept of security module twins and how they are used in Defender for IoT.
+ Title: Defender-IoT-micro-agent and device twins
+description: Learn about the concept of Defender-IoT-micro-agent twins and how they are used in Defender for IoT.
documentationcenter: na
Last updated 07/24/2019
-# Security module
+# Defender-IoT-micro-agent
This article explains how Defender for IoT uses device twins and modules.
Defender for IoT offers full integration with your existing IoT device managemen
Learn more about the concept of [device twins](../iot-hub/iot-hub-devguide-device-twins.md) in Azure IoT Hub.
-## Security module twins
+## Defender-IoT-micro-agent twins
-Defender for IoT maintains a security module twin for each device in the service.
-The security module twin holds all the information relevant to device security for each specific device in your solution.
-Device security properties are maintained in a dedicated security module twin for safer communication and for enabling updates and maintenance that requires fewer resources.
+Defender for IoT maintains a Defender-IoT-micro-agent twin for each device in the service.
+The Defender-IoT-micro-agent twin holds all the information relevant to device security for each specific device in your solution.
+Device security properties are maintained in a dedicated Defender-IoT-micro-agent twin for safer communication and for enabling updates and maintenance that requires fewer resources.
-See [Create security module twin](quickstart-create-security-twin.md) and [Configure security agents](how-to-agent-configuration.md) to learn how to create, customize, and configure the twin. See [Understanding module twins](../iot-hub/iot-hub-devguide-module-twins.md) to learn more about the concept of module twins in IoT Hub.
+See [Create Defender-IoT-micro-agent twin](quickstart-create-security-twin.md) and [Configure security agents](how-to-agent-configuration.md) to learn how to create, customize, and configure the twin. See [Understanding module twins](../iot-hub/iot-hub-devguide-module-twins.md) to learn more about the concept of module twins in IoT Hub.
## See also
defender-for-iot Edge Security Module Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/edge-security-module-deprecation.md
This article describes Azure Defender for IoT features and support for different capabilities within Defender for IoT.
-## Defender for IoT C, C#, and Edge security module deprecation
+## Defender for IoT C, C#, and Edge Defender-IoT-micro-agent deprecation
-The new micro agent will replace the current C, C#, and Edge security module.
+The new micro agent will replace the current C, C#, and Edge Defender-IoT-micro-agent.
-The new micro agent is based on the knowledge, and experience gathered from the exiting security module development, customers, and partners feedback with four important improvements:
+The new micro agent is based on the knowledge and experience gathered from the existing Defender-IoT-micro-agent development, together with customer and partner feedback, and brings four important improvements:
- **Depth security value**: The new agent will run on the host level, which will provide more visibility to the underlying operations of the device, and to allow for better security coverage.
defender-for-iot Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/event-aggregation.md
Title: Security module classic event aggregation
+ Title: Defender-IoT-micro-agent classic event aggregation
description: Learn about Defender for IoT event aggregation.
Last updated 1/20/2021
-# Security module classic event aggregation
+# Defender-IoT-micro-agent classic event aggregation
Defender for IoT security agents collect data and system events from your local device, and send this data to the Azure cloud for processing and analytics. The security agent collects many types of device events, including new process and new connection events. Both new process and new connection events may legitimately occur frequently on a device within a second; while this is important for robust and comprehensive security, the number of messages security agents are forced to send may quickly reach or exceed your IoT Hub quota and cost limits. However, these events contain highly valuable security information that is crucial to protecting your device.
defender-for-iot How To Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-agent-configuration.md
To use a default property value, remove the property from the configuration obje
1. Click on **Module Identity Twin**.
-1. Edit the properties you wish to change in the security module.
+1. Edit the properties you wish to change in the Defender-IoT-micro-agent.
For example, to configure connection events as high priority and collect high priority events every 7 minutes, use the following configuration.
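A hedged sketch of what such a module identity twin configuration might look like. The twin section path and property names here are assumptions based on the agent's twin schema, and the interval uses an ISO 8601 duration (`PT7M` = 7 minutes):

```json
{
  "properties": {
    "desired": {
      "ms_iotn:urn_azureiot_Security_SecurityAgentConfiguration": {
        "eventPriorityConnectionCreate": { "value": "High" },
        "highPriorityMessageFrequency": { "value": "PT7M" }
      }
    }
  }
}
```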
defender-for-iot How To Azure Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-azure-rtos-security-module.md
Title: Configure and customize Security Module for Azure RTOS
-description: Learn about how to configure and customize your Security Module for Azure RTOS.
+ Title: Configure and customize Defender-IoT-micro-agent for Azure RTOS
+description: Learn about how to configure and customize your Defender-IoT-micro-agent for Azure RTOS.
documentationcenter: na
Last updated 03/07/2021
-# Configure and customize Defender-IoT-micro-agent for Azure RTOS GA
+# Configure and customize Defender-IoT-micro-agent for Azure RTOS (preview)
This article describes how to configure the Defender-IoT-micro-agent for your Azure RTOS device, to meet your network, bandwidth, and memory requirements.
You can enable and configure Log Analytics to investigate device events and acti
## Next steps
-- Review and customize Security Module for Azure RTOS [security alerts and recommendations](concept-rtos-security-alerts-recommendations.md)
-- Refer to the [Security Module for Azure RTOS API](azure-rtos-security-module-api.md) as needed.
+- Review and customize Defender-IoT-micro-agent for Azure RTOS [security alerts and recommendations](concept-rtos-security-alerts-recommendations.md)
+- Refer to the [Defender-IoT-micro-agent for Azure RTOS API](azure-rtos-security-module-api.md) as needed.
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-create-and-manage-users.md
description: Create and manage users of sensors and the on-premises management c
Previously updated : 1/3/2021 Last updated : 03/03/2021
Two types of LDAP-based authentication are supported:
### Active Directory and Defender for IoT permissions
-You can associate Active Directory groups defined here with specific permission levels. For example, configure a specific Active Directory group and assign RO permissions to all users in the group. See [Create and manage users](how-to-create-and-manage-users.md) for details.
+You can associate Active Directory groups defined here with specific permission levels. For example, configure a specific Active Directory group and assign Read Only permissions to all users in the group.
To configure Active Directory:
To configure Active Directory:
:::image type="content" source="media/how-to-setup-active-directory/ad-system-settings-v2.png" alt-text="View your Active Directory system settings.":::
-1. On the **System Settings** pane, select **Active Directory**.
+2. On the **System Settings** pane, select **Active Directory**.
:::image type="content" source="media/how-to-setup-active-directory/ad-configurations-v2.png" alt-text="Edit your Active Directory configurations.":::
-1. In the **Edit Active Directory Configuration** dialog box, select **Active Directory Integration Enabled** > **Save**. The **Edit Active Directory Configuration** dialog box expands, and you can now enter the parameters to configure Active Directory.
+3. In the **Edit Active Directory Configuration** dialog box, select **Active Directory Integration Enabled** > **Save**. The **Edit Active Directory Configuration** dialog box expands, and you can now enter the parameters to configure Active Directory.
:::image type="content" source="media/how-to-setup-active-directory/ad-integration-enabled-v2.png" alt-text="Enter the parameters to configure Active Directory.":::
To configure Active Directory:
> - For all the Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Active Directory use uppercase.
> - You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.
-1. Set the Active Directory server parameters, as follows:
+4. Set the Active Directory server parameters, as follows:
| Server parameter | Description | |--|--|
To configure Active Directory:
| Active Directory groups | Enter the group names that are defined in your Active Directory configuration on the LDAP server. |
| Trusted domains | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted domains only for users who were defined under users. |
+#### Active Directory groups for the on-premises management console
+
+If you are creating Active Directory groups for on-premises management console users, you must create an Access Group rule for each Active Directory group. On-premises management console Active Directory credentials will not work if an Access Group rule does not exist for the Active Directory user group. See [Define global access control](how-to-define-global-user-access-control.md).
+ 1. Select **Save**.
-1. To add a trusted server, select **Add Server** and configure another server.
+2. To add a trusted server, select **Add Server** and configure another server.
-## Resetting a user's password for the sensor or on-premises management console
+## Resetting passwords
### CyberX or Support user
defender-for-iot How To Deploy Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-agent.md
The C-based security agent has a lower memory footprint, and is the ideal choice
| **[Authentication](concept-security-agent-authentication-methods.md) to IoT Hub** | Yes | Yes |
| **Security data [collection](how-to-agent-configuration.md#supported-security-events)** | Yes | Yes |
| **Event aggregation** | Yes | Yes |
-| **Remote configuration through [security module twin](concept-security-module.md)** | Yes | Yes |
+| **Remote configuration through [Defender-IoT-micro-agent twin](concept-security-module.md)** | Yes | Yes |
## Security agent installation guidelines
defender-for-iot How To Deploy Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-edge.md
Title: Deploy IoT Edge security module
+ Title: Deploy IoT Edge Defender-IoT-micro-agent
description: Learn about how to deploy a Defender for IoT security agent on IoT Edge.
Last updated 1/30/2020
-# Deploy a security module on your IoT Edge device
+# Deploy a Defender-IoT-micro-agent on your IoT Edge device
**Defender for IoT** module provides a comprehensive security solution for your IoT Edge devices.
-The security module collects, aggregates, and analyzes raw security data from your Operating System and Container system into actionable security recommendations and alerts.
-To learn more, see [Security module for IoT Edge](security-edge-architecture.md).
+The Defender-IoT-micro-agent collects, aggregates, and analyzes raw security data from your Operating System and Container system into actionable security recommendations and alerts.
+To learn more, see [Defender-IoT-micro-agent for IoT Edge](security-edge-architecture.md).
-In this article, you'll learn how to deploy a security module on your IoT Edge device.
+In this article, you'll learn how to deploy a Defender-IoT-micro-agent on your IoT Edge device.
-## Deploy security module
+## Deploy Defender-IoT-micro-agent
-Use the following steps to deploy a Defender for IoT security module for IoT Edge.
+Use the following steps to deploy a Defender for IoT Defender-IoT-micro-agent for IoT Edge.
### Prerequisites
Complete each step to complete your IoT Edge deployment for Defender for IoT.
## Diagnostic steps
-If you encounter an issue, container logs are the best way to learn about the state of an IoT Edge security module device. Use the commands and tools in this section to gather information.
+If you encounter an issue, container logs are the best way to learn about the state of an IoT Edge Defender-IoT-micro-agent device. Use the commands and tools in this section to gather information.
### Verify the required containers are installed and functioning as expected
defender-for-iot How To Deploy Linux C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-linux-c.md
For other platforms and agent flavors, see [Choose the right security agent](how
1. To deploy the security agent, local admin rights are required on the machine you wish to install on (sudo).
-1. [Create a security module](quickstart-create-security-twin.md) for the device.
+1. [Create a Defender-IoT-micro-agent](quickstart-create-security-twin.md) for the device.
## Installation
defender-for-iot How To Deploy Linux Cs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-linux-cs.md
For other platforms and agent flavors, see [Choose the right security agent](how
1. To deploy the security agent, local admin rights are required on the machine you wish to install on.
-1. [Create a security module](quickstart-create-security-twin.md) for the device.
+1. [Create a Defender-IoT-micro-agent](quickstart-create-security-twin.md) for the device.
## Installation
defender-for-iot How To Deploy Windows Cs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-deploy-windows-cs.md
For other platforms and agent flavors, see [Choose the right security agent](how
1. Local admin rights on the machine you wish to install on.
-1. [Create a security module](quickstart-create-security-twin.md) for the device.
+1. [Create a Defender-IoT-micro-agent](quickstart-create-security-twin.md) for the device.
## Installation
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-install-software.md
The Defender for IoT appliance sensor connects to a SPAN port or network TAP and
The following rack mount appliances are available:
-| **Deployment type** | **Corporate** | **Enterprise** | **SMB** | |
+| **Deployment type** | **Corporate** | **Enterprise** | **SMB** | **Line** |
|--|--|--|--|--|
| **Model** | HPE ProLiant DL360 | Dell PowerEdge R340 XL | HPE ProLiant DL20 | HPE ProLiant DL20 |
| **Monitoring ports** | up to 15 RJ45 or 8 OPT | up to 9 RJ45 or 6 OPT | up to 8 RJ45 or 6 OPT | 4 RJ45 |
To access the file:
1. Sign in to your Defender for IoT account.
-2. Go to the **Network sensor** or **On-premises management console** page and select a version to download.
+1. Go to the **Network sensor** or **On-premises management console** page and select a version to download.
### Install from DVD
To prepare a disk on a key:
1. Run Rufus and select **SENSOR ISO**.
-2. Connect the disk on a key to the front panel.
+1. Connect the disk on a key to the front panel.
-3. Set the BIOS of the server to boot from the USB.
+1. Set the BIOS of the server to boot from the USB.
## Dell PowerEdgeR340XL installation
To install the Dell PowerEdge R340XL appliance, you need:
:::image type="content" source="media/tutorial-install-components/view-of-dell-poweredge-r340-front-panel.jpg" alt-text="Dell PowerEdge R340 front panel.":::

1. Left control panel
- 2. Optical drive (optional)
- 3. Right control panel
- 4. Information tag
- 5. Drives
+ 1. Optical drive (optional)
+ 1. Right control panel
+ 1. Information tag
+ 1. Drives
### Dell PowerEdge R340 back panel

:::image type="content" source="media/tutorial-install-components/view-of-dell-poweredge-r340-back-panel.jpg" alt-text="Dell PowerEdge R340 back panel.":::

1. Serial port
-2. NIC port (Gb 1)
-3. NIC port (Gb 1)
-4. Half-height PCIe
-5. Full-height PCIe expansion card slot
-6. Power supply unit 1
-7. Power supply unit 2
-8. System identification
-9. System status indicator cable port (CMA) button
-10. USB 3.0 port (2)
-11. iDRAC9 dedicated network port
-12. VGA port
+1. NIC port (Gb 1)
+1. NIC port (Gb 1)
+1. Half-height PCIe
+1. Full-height PCIe expansion card slot
+1. Power supply unit 1
+1. Power supply unit 2
+1. System identification
+1. System status indicator cable port (CMA) button
+1. USB 3.0 port (2)
+1. iDRAC9 dedicated network port
+1. VGA port
### Dell BIOS configuration
To configure Dell BIOS:
1. [Configure the iDRAC IP address](#configure-idrac-ip-address)
-2. [Import the BIOS configuration file](#import-the-bios-configuration-file)
+1. [Import the BIOS configuration file](#import-the-bios-configuration-file)
#### Configure iDRAC IP address

1. Power up the sensor.
-2. If the OS is already installed, select the F2 key to enter the BIOS configuration.
+1. If the OS is already installed, select the F2 key to enter the BIOS configuration.
-3. Select **iDRAC Settings**.
+1. Select **iDRAC Settings**.
-4. Select **Network**.
+1. Select **Network**.
> [!NOTE]
> During the installation, you must configure the default iDRAC IP address and password mentioned in the following steps. After the installation, you change these definitions.
-5. Change the static IPv4 address to **10.100.100.250**.
+1. Change the static IPv4 address to **10.100.100.250**.
-6. Change the static subnet mask to **255.255.255.0**.
+1. Change the static subnet mask to **255.255.255.0**.
:::image type="content" source="media/tutorial-install-components/idrac-network-settings-screen-v2.png" alt-text="Screenshot that shows the static subnet mask.":::
-7. Select **Back** > **Finish**.
+1. Select **Back** > **Finish**.
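The static address and mask entered above put iDRAC on the 10.100.100.0/24 network; the host you browse from must sit on the same subnet to reach it. A quick sanity check with Python's standard `ipaddress` module:

```python
import ipaddress

# The default iDRAC address/mask used during installation (changed afterward).
iface = ipaddress.ip_interface("10.100.100.250/255.255.255.0")

print(iface.network)               # 10.100.100.0/24
print(iface.ip in iface.network)   # True
```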
#### Import the BIOS configuration file
This article describes how to configure the BIOS by using the configuration file
:::image type="content" source="media/tutorial-install-components/idrac-port.png" alt-text="Screenshot of the preconfigured IP address port.":::
-2. Open a browser and enter **10.100.100.250** to connect to iDRAC web interface.
+1. Open a browser and enter **10.100.100.250** to connect to the iDRAC web interface.
-3. Sign in with Dell default administrator privileges:
+1. Sign in with Dell default administrator privileges:
- Username: **root** - Password: **calvin**
-4. The appliance's credentials are:
+1. The appliance's credentials are:
- Username: **XXX**
This article describes how to configure the BIOS by using the configuration file
> - You're the only user who is currently connected to iDRAC.
> - The system is not in the BIOS menu.
-5. Go to **Configuration** > **Server Configuration Profile**. Set the following parameters:
+1. Go to **Configuration** > **Server Configuration Profile**. Set the following parameters:
:::image type="content" source="media/tutorial-install-components/configuration-screen.png" alt-text="Screenshot that shows the configuration of your server profile.":::
This article describes how to configure the BIOS by using the configuration file
| Import Components | Select **BIOS, NIC, RAID**. |
| Maximum wait time | Select **20 minutes**. |
-6. Select **Import**.
+1. Select **Import**.
-7. To monitor the process, go to **Maintenance** > **Job Queue**.
+1. To monitor the process, go to **Maintenance** > **Job Queue**.
:::image type="content" source="media/tutorial-install-components/view-the-job-queue.png" alt-text="Screenshot that shows Job Queue.":::
To manually configure:
- If the appliance is a Defender for IoT appliance, sign in by using **XXX** for the username and **XXX** for the password.
-2. After you access the BIOS, go to **Device Settings**.
+1. After you access the BIOS, go to **Device Settings**.
-3. Choose the RAID-controlled configuration by selecting **Integrated RAID controller 1: Dell PERC\<PERC H330 Adapter\> Configuration Utility**.
+1. Choose the RAID-controlled configuration by selecting **Integrated RAID controller 1: Dell PERC\<PERC H330 Adapter\> Configuration Utility**.
-4. Select **Configuration Management**.
+1. Select **Configuration Management**.
-5. Select **Create Virtual Disk**.
+1. Select **Create Virtual Disk**.
-6. In the **Select RAID Level** field, select **RAID5**. In the **Virtual Disk Name** field, enter **ROOT** and select **Physical Disks**.
+1. In the **Select RAID Level** field, select **RAID5**. In the **Virtual Disk Name** field, enter **ROOT** and select **Physical Disks**.
-7. Select **Check All** and then select **Apply Changes**
+1. Select **Check All**, and then select **Apply Changes**.
-8. Select **Ok**.
+1. Select **Ok**.
-9. Scroll down and select **Create Virtual Disk**.
+1. Scroll down and select **Create Virtual Disk**.
-10. Select the **Confirm** check box and select **Yes**.
+1. Select the **Confirm** check box and select **Yes**.
-11. Select **OK**.
+1. Select **OK**.
-12. Return to the main screen and select **System BIOS**.
+1. Return to the main screen and select **System BIOS**.
-13. Select **Boot Settings**.
+1. Select **Boot Settings**.
-14. For the **Boot Mode** option, select **BIOS**.
+1. For the **Boot Mode** option, select **BIOS**.
-15. Select **Back**, and then select **Finish** to exit the BIOS settings.
+1. Select **Back**, and then select **Finish** to exit the BIOS settings.
### Software installation (Dell R340)
To install:
- Mount the ISO image by using iDRAC. After signing in to iDRAC, select the virtual console, and then select **Virtual Media**.
-2. In the **Map CD/DVD** section, select **Choose File**.
+1. In the **Map CD/DVD** section, select **Choose File**.
-3. Choose the version ISO image file for this version from the dialog box that opens.
+1. Choose the version ISO image file for this version from the dialog box that opens.
-4. Select the **Map Device** button.
+1. Select the **Map Device** button.
:::image type="content" source="media/tutorial-install-components/mapped-device-on-virtual-media-screen-v2.png" alt-text="Screenshot that shows a mapped device.":::
-5. The media is mounted. Select **Close**.
+1. The media is mounted. Select **Close**.
-6. Start the appliance. When you're using iDRAC, you can restart the servers by selecting the **Consul Control** button. Then, on the **Keyboard Macros**, select the **Apply** button, which will start the Ctrl+Alt+Delete sequence.
+1. Start the appliance. When you're using iDRAC, you can restart the servers by selecting the **Console Control** button. Then, on the **Keyboard Macros**, select the **Apply** button, which starts the Ctrl+Alt+Delete sequence.
-7. Select **English**.
+1. Select **English**.
-8. Select **SENSOR-RELEASE-\<version\> Enterprise**.
+1. Select **SENSOR-RELEASE-\<version\> Enterprise**.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot that shows version selection.":::
-9. Define the appliance profile and network properties:
+1. Define the appliance profile and network properties:
:::image type="content" source="media/tutorial-install-components/appliance-profile-screen-v2.png" alt-text="Screenshot that shows the appliance profile.":::
To install:
| **appliance hostname:** | - |
| **DNS:** | - |
| **default gateway IP address:** | - |
- | **input interfaces:** | The system generates the list of input interfaces for you. To mirror the input interfaces, copy all the items presented in the list with a comma separator. Note that there's no need to configure the bridge interface. This option is used for special use cases only. |
+ | **input interfaces:** | The system generates the list of input interfaces for you. To mirror the input interfaces, copy all the items presented in the list with a comma separator. You do not have to configure the bridge interface. This option is used for special use cases only. |
-10. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
+1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
-11. Save the appliance ID and passwords. You'll need these credentials to access the platform the first time you use it.
+1. Save the appliance ID and passwords. You'll need these credentials to access the platform the first time you use it.
-12. Select **Enter** to continue.
+1. Select **Enter** to continue.
## HPE ProLiant DL20 installation
To enable and update the password:
:::image type="content" source="media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
-2. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
+1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
:::image type="content" source="media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
To enable and update the password:
1. Enter the IP address, subnet mask, and gateway IP address.
-3. Select **F10: Save**.
+1. Select **F10: Save**.
-4. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
+1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
-5. Select **Edit/Remove User**. The administrator is the only default user defined.
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
-6. Change the default password and select **F10: Save**.
+1. Change the default password and select **F10: Save**.
### Configure the HPE BIOS
To configure the HPE BIOS:
1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
-2. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
+1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
-3. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
-4. Select **Esc** twice to close the **System Configuration** form.
+1. Select **Esc** twice to close the **System Configuration** form.
#### For the enterprise appliance

1. Select **Embedded RAID 1: HPE Smart Array P408i-a SR Gen 10** > **Array Configuration** > **Create Array**.
-2. In the **Create Array** form, select all the options. Three options are available for the **Enterprise** appliance.
+1. In the **Create Array** form, select all the options. Three options are available for the **Enterprise** appliance.
#### For the SMB appliance

1. Select **Embedded RAID 1: HPE Smart Array P208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
-2. Select **Proceed to Next Form**.
+1. Select **Proceed to Next Form**.
-3. In the **Set RAID Level** form, set the level to **RAID 5** for enterprise deployments and **RAID 1** for SMB deployments.
+1. In the **Set RAID Level** form, set the level to **RAID 5** for enterprise deployments and **RAID 1** for SMB deployments.
-4. Select **Proceed to Next Form**.
+1. Select **Proceed to Next Form**.
-5. In the **Logical Drive Label** form, enter **Logical Drive 1**.
+1. In the **Logical Drive Label** form, enter **Logical Drive 1**.
-6. Select **Submit Changes**.
+1. Select **Submit Changes**.
-7. In the **Submit** form, select **Back to Main Menu**.
+1. In the **Submit** form, select **Back to Main Menu**.
-8. Select **F10: Save** and then press **Esc** twice.
+1. Select **F10: Save** and then press **Esc** twice.
-9. In the **System Utilities** window, select **One-Time Boot Menu**.
+1. In the **System Utilities** window, select **One-Time Boot Menu**.
-10. In the **One-Time Boot Menu** form, select **Legacy BIOS One-Time Boot Menu**.
+1. In the **One-Time Boot Menu** form, select **Legacy BIOS One-Time Boot Menu**.
-11. The **Booting in Legacy** and **Boot Override** windows appear. Choose a boot override option; for example, to a CD-ROM, USB, HDD, or UEFI shell.
+1. The **Booting in Legacy** and **Boot Override** windows appear. Choose a boot override option; for example, to a CD-ROM, USB, HDD, or UEFI shell.
:::image type="content" source="media/tutorial-install-components/boot-override-window-one-v2.png" alt-text="Screenshot that shows the first Boot Override window.":::
To install the software:
1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
-2. Connect an external CD or disk on the key with the ISO image that you downloaded from the **Updates** page in the Defender for IoT portal.
+1. Connect an external CD or disk on the key with the ISO image that you downloaded from the **Updates** page in the Defender for IoT portal.
-3. Start the appliance.
+1. Start the appliance.
-4. Select **English**.
+1. Select **English**.
:::image type="content" source="media/tutorial-install-components/select-english-screen.png" alt-text="Selection of English in the CLI window.":::
-5. Select **SENSOR-RELEASE-<version> Enterprise**.
+1. Select **SENSOR-RELEASE-\<version\> Enterprise**.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot of the screen for selecting a version.":::
-6. In the Installation Wizard, define the appliance profile and network properties:
+1. In the Installation Wizard, define the appliance profile and network properties:
:::image type="content" source="media/tutorial-install-components/installation-wizard-screen-v2.png" alt-text="Screenshot that shows the Installation Wizard.":::
To install the software:
| **Default network parameters (usually the parameters are provided by the customer)** | **management network IP address:** <br/> <br/>**appliance hostname:** <br/>**DNS:** <br/>**the default gateway IP address:**|
| **input interfaces:** | The system generates the list of input interfaces for you.<br/><br/>To mirror the input interfaces, copy all the items presented in the list with a comma separator: **eno5, eno3, eno1, eno6, eno4**<br/><br/>**For HPE DL20: Do not list eno1, enp1s0f4u4 (iLo interfaces)**<br/><br/>**BRIDGE**: There's no need to configure the bridge interface. This option is used for special use cases only. Press **Enter** to continue. |
-7. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
+1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
-8. Save the appliance's ID and passwords. You'll need the credentials to access the platform for the first time.
+1. Save the appliance's ID and passwords. You'll need the credentials to access the platform for the first time.
-9. Select **Enter** to continue.
+1. Select **Enter** to continue.
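The wizard above expects the mirrored input interfaces as one comma-separated string, with the iLO interfaces (eno1, enp1s0f4u4 on the HPE DL20) left out. A minimal sketch of building that string; the detected interface names vary per appliance and are only examples here:

```python
# Sketch only: interface names are illustrative; on the HPE DL20, eno1 and
# enp1s0f4u4 are the iLO interfaces that must not be mirrored.
def mirror_list(detected, exclude=("eno1", "enp1s0f4u4")):
    """Build the comma-separated input-interface string for the wizard."""
    return ", ".join(i for i in detected if i not in exclude)

print(mirror_list(["eno1", "eno3", "eno4", "eno5", "eno6", "enp1s0f4u4"]))
# eno3, eno4, eno5, eno6
```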
## HPE ProLiant DL360 installation
To install:
1. Sign in to the iLO console, and then right-click the servers' screen.
-2. Select **HTML5 Console**.
+1. Select **HTML5 Console**.
-3. In the console, select the CD icon, and choose the CD/DVD option.
+1. In the console, select the CD icon, and choose the CD/DVD option.
-4. Select **Local ISO file**.
+1. Select **Local ISO file**.
-5. In the dialog box, choose the relevant ISO file.
+1. In the dialog box, choose the relevant ISO file.
-6. Go to the left icon, select **Power**, and the select **Reset**.
+1. Go to the left icon, select **Power**, and then select **Reset**.
-7. The appliance will restart and run the sensor installation process.
+1. The appliance will restart and run the sensor installation process.
### Software installation (HPE DL360)
To install:
1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
-2. Connect an external CD or disk on a key with the ISO image that you downloaded from the **Updates** page in the Defender for IoT portal.
+1. Connect an external CD or disk on a key with the ISO image that you downloaded from the **Updates** page in the Defender for IoT portal.
-3. Start the appliance.
+1. Start the appliance.
-4. Select **English**.
+1. Select **English**.
-5. Select **SENSOR-RELEASE-<version> Enterprise**.
+1. Select **SENSOR-RELEASE-\<version\> Enterprise**.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot that shows selecting the version.":::
-6. In the Installation Wizard, define the appliance profile and network properties.
+1. In the Installation Wizard, define the appliance profile and network properties.
:::image type="content" source="media/tutorial-install-components/installation-wizard-screen-v2.png" alt-text="Screenshot that shows the Installation Wizard.":::
To install:
| **Hardware profile** | Select **corporate**. |
| **Management interface** | **eno2** |
| **Default network parameters (provided by the customer)** | **management network IP address:** <br/>**subnet mask:** <br/>**appliance hostname:** <br/>**DNS:** <br/>**the default gateway IP address:**|
- | **input interfaces:** | The system generates a list of input interfaces for you.<br/><br/>To mirror the input interfaces, copy all the items presented in the list with a comma separator.<br/><br/>Note that there's no need to configure the bridge interface. This option is used for special use cases only. |
+ | **input interfaces:** | The system generates a list of input interfaces for you.<br/><br/>To mirror the input interfaces, copy all the items presented in the list with a comma separator.<br/><br/> You do not need to configure the bridge interface. This option is used for special use cases only. |
-7. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **support** user.
+1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **support** user.
-8. Save the appliance's ID and passwords. You'll need these credentials to access the platform for the first time.
+1. Save the appliance's ID and passwords. You'll need these credentials to access the platform for the first time.
-9. Select **Enter** to continue.
+1. Select **Enter** to continue.
## Sensor installation for the virtual appliance
Make sure the hypervisor is running.
1. Sign in to the ESXi, choose the relevant **datastore**, and select **Datastore Browser**.
-2. **Upload** the image and select **Close**.
+1. **Upload** the image and select **Close**.
-3. Go to **Virtual Machines**, and then select **Create/Register VM**.
+1. Go to **Virtual Machines**, and then select **Create/Register VM**.
-4. Select **Create new virtual machine**, and then select **Next**.
+1. Select **Create new virtual machine**, and then select **Next**.
-5. Add a sensor name and choose:
+1. Add a sensor name and choose:
- Compatibility: **&lt;latest ESXi version&gt;**
Make sure the hypervisor is running.
- Guest OS version: **Ubuntu Linux (64-bit)**
-6. Select **Next**.
+1. Select **Next**.
-7. Choose the relevant datastore and select **Next**.
+1. Choose the relevant datastore and select **Next**.
-8. Change the virtual hardware parameters according to the required architecture.
+1. Change the virtual hardware parameters according to the required architecture.
-9. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
+1. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
-10. Select **Next** > **Finish**.
+1. Select **Next** > **Finish**.
### Create the virtual machine (Hyper-V)
To create a virtual machine:
1. Create a virtual disk in Hyper-V Manager.
-2. Select **format = VHDX**.
+1. Select **format = VHDX**.
-3. Select **type = Dynamic Expanding**.
+1. Select **type = Dynamic Expanding**.
-4. Enter the name and location for the VHD.
+1. Enter the name and location for the VHD.
-5. Enter the required size (according to the architecture).
+1. Enter the required size (according to the architecture).
-6. Review the summary and select **Finish**.
+1. Review the summary and select **Finish**.
-7. On the **Actions** menu, create a new virtual machine.
+1. On the **Actions** menu, create a new virtual machine.
-8. Enter a name for the virtual machine.
+1. Enter a name for the virtual machine.
-9. Select **Specify Generation** > **Generation 1**.
+1. Select **Specify Generation** > **Generation 1**.
-10. Specify the memory allocation (according to the architecture) and select the check box for dynamic memory.
+1. Specify the memory allocation (according to the architecture) and select the check box for dynamic memory.
-11. Configure the network adaptor according to your server network topology.
+1. Configure the network adaptor according to your server network topology.
-12. Connect the VHDX created previously to the virtual machine.
+1. Connect the VHDX created previously to the virtual machine.
-13. Review the summary and select **Finish**.
+1. Review the summary and select **Finish**.
-14. Right-click the new virtual machine and select **Settings**.
+1. Right-click the new virtual machine and select **Settings**.
-15. Select **Add Hardware** and add a new network adapter.
+1. Select **Add Hardware** and add a new network adapter.
-16. Select the virtual switch that will connect to the sensor management network.
+1. Select the virtual switch that will connect to the sensor management network.
-17. Allocate CPU resources (according to the architecture).
+1. Allocate CPU resources (according to the architecture).
-18. Connect the management console's ISO image to a virtual DVD drive.
+1. Connect the management console's ISO image to a virtual DVD drive.
-19. Start the virtual machine.
+1. Start the virtual machine.
-20. On the **Actions** menu, select **Connect** to continue the software installation.
+1. On the **Actions** menu, select **Connect** to continue the software installation.
### Software installation (ESXi and Hyper-V)
To install:
1. Open the virtual machine console.
-2. The VM will start from the ISO image, and the language selection screen will appear. Select **English**.
+1. The VM will start from the ISO image, and the language selection screen will appear. Select **English**.
-3. Select the required architecture.
+1. Select the required architecture.
-4. Define the appliance profile and network properties:
+1. Define the appliance profile and network properties:
| Parameter | Configuration |
| - | - |
To install:
| **Network parameters (provided by the customer)** | **management network IP address:** <br/>**subnet mask:** <br/>**appliance hostname:** <br/>**DNS:** <br/>**default gateway:** <br/>**input interfaces:**|
| **bridge interfaces:** | There's no need to configure the bridge interface. This option is for special use cases only. |
-5. Enter **Y** to accept the settings.
+1. Enter **Y** to accept the settings.
-6. Sign-in credentials are automatically generated and presented. Copy the username and password in a safe place, because they're required for sign-in and administration.
+1. Sign-in credentials are automatically generated and presented. Copy the username and password in a safe place, because they're required for sign-in and administration.
- - **Support**: The administrative user for user management.
+ - **Support**: The administrative user for user management.
- - **CyberX**: The equivalent of root for accessing the appliance.
+ - **CyberX**: The equivalent of root for accessing the appliance.
-7. The appliance restarts.
+1. The appliance restarts.
-8. Access the management console via the IP address previously configured: `https://ip_address`.
+1. Access the management console via the IP address previously configured: `https://ip_address`.
:::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot that shows access to the management console.":::
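Reaching the console at `https://ip_address` only works if the network parameters entered in the wizard are consistent, in particular the default gateway must fall inside the management subnet. A small sketch of that check with the standard `ipaddress` module; the addresses below are hypothetical, so substitute your own management network parameters:

```python
import ipaddress

def gateway_in_subnet(ip, mask, gateway):
    """Check that the default gateway sits inside the management subnet."""
    network = ipaddress.ip_interface(f"{ip}/{mask}").network
    return ipaddress.ip_address(gateway) in network

# Hypothetical values; use your own management network parameters.
print(gateway_in_subnet("192.168.10.20", "255.255.255.0", "192.168.10.1"))  # True
print(gateway_in_subnet("192.168.10.20", "255.255.255.0", "192.168.20.1"))  # False
```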
To create a virtual machine (ESXi):
1. Sign in to the ESXi, choose the relevant **datastore**, and select **Datastore Browser**.
-2. Upload the image and select **Close**.
+1. Upload the image and select **Close**.
-3. Go to **Virtual Machines**.
+1. Go to **Virtual Machines**.
-4. Select **Create/Register VM**.
+1. Select **Create/Register VM**.
-5. Select **Create new virtual machine** and select **Next**.
+1. Select **Create new virtual machine** and select **Next**.
-6. Add a sensor name and choose:
+1. Add a sensor name and choose:
- Compatibility: \<latest ESXi version>
To create a virtual machine (ESXi):
- Guest OS version: Ubuntu Linux (64-bit)
-7. Select **Next**.
+1. Select **Next**.
-8. Choose relevant datastore and select **Next**.
+1. Choose relevant datastore and select **Next**.
-9. Change the virtual hardware parameters according to the required architecture.
+1. Change the virtual hardware parameters according to the required architecture.
-10. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
+1. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
-11. Select **Next** > **Finish**.
+1. Select **Next** > **Finish**.
### Create the virtual machine (Hyper-V)
To create a virtual machine by using Hyper-V:
1. Create a virtual disk in Hyper-V Manager.
-2. Select the format **VHDX**.
+1. Select the format **VHDX**.
-3. Select **Next**.
+1. Select **Next**.
-4. Select the type **Dynamic expanding**.
+1. Select the type **Dynamic expanding**.
-5. Select **Next**.
+1. Select **Next**.
-6. Enter the name and location for the VHD.
+1. Enter the name and location for the VHD.
-7. Select **Next**.
+1. Select **Next**.
-8. Enter the required size (according to the architecture).
+1. Enter the required size (according to the architecture).
-9. Select **Next**.
+1. Select **Next**.
-10. Review the summary and select **Finish**.
+1. Review the summary and select **Finish**.
-11. On the **Actions** menu, create a new virtual machine.
+1. On the **Actions** menu, create a new virtual machine.
-12. Select **Next**.
+1. Select **Next**.
-13. Enter a name for the virtual machine.
+1. Enter a name for the virtual machine.
-14. Select **Next**.
+1. Select **Next**.
-15. Select **Generation** and set it to **Generation 1**.
+1. Select **Generation** and set it to **Generation 1**.
-16. Select **Next**.
+1. Select **Next**.
-17. Specify the memory allocation (according to the architecture) and select the check box for dynamic memory.
+1. Specify the memory allocation (according to the architecture) and select the check box for dynamic memory.
-18. Select **Next**.
+1. Select **Next**.
-19. Configure the network adaptor according to your server network topology.
+1. Configure the network adaptor according to your server network topology.
-20. Select **Next**.
+1. Select **Next**.
-21. Connect the VHDX created previously to the virtual machine.
+1. Connect the VHDX created previously to the virtual machine.
-22. Select **Next**.
+1. Select **Next**.
-23. Review the summary and select **Finish**.
+1. Review the summary and select **Finish**.
-24. Right-click the new virtual machine, and then select **Settings**.
+1. Right-click the new virtual machine, and then select **Settings**.
-25. Select **Add Hardware** and add a new adapter for **Network Adapter**.
+1. Select **Add Hardware** and add a new adapter for **Network Adapter**.
-26. For **Virtual Switch**, select the switch that will connect to the sensor management network.
+1. For **Virtual Switch**, select the switch that will connect to the sensor management network.
-27. Allocate CPU resources (according to the architecture).
+1. Allocate CPU resources (according to the architecture).
-28. Connect the management console's ISO image to a virtual DVD drive.
+1. Connect the management console's ISO image to a virtual DVD drive.
-29. Start the virtual machine.
+1. Start the virtual machine.
-30. On the **Actions** menu, select **Connect** to continue the software installation.
+1. On the **Actions** menu, select **Connect** to continue the software installation.
### Software installation (ESXi and Hyper-V)
-Starting the virtual machine will start the installation process from the ISO image.
+Starting the virtual machine will start the installation process from the ISO image. To enhance security, you can create a second network interface on your on-premises management console. One network interface is dedicated to your users and can support the configuration of a gateway for routed networks. The second network interface is dedicated to all attached sensors within an IP address range.
+
+Both network interfaces have the user interface (UI) enabled, and all of the features that are supported by the UI will be available on the secondary network interface when routing is not needed. High Availability will run on the secondary network interface.
+
+If you choose not to deploy a secondary network interface, all of the features will be available through the primary network interface.
To install the software:
1. Select **English**.
-2. Select the required architecture for your deployment.
+1. Select the required architecture for your deployment.
-3. Define the network interface for the sensor management network: interface, IP, subnet, DNS server, and default gateway.
+1. Define the network interface for the sensor management network: interface, IP, subnet, DNS server, and default gateway.
+
+1. (Optional) Add a second network interface to your on-premises management console.
+
+ 1. `Please type sensor monitoring interface (Optional. Applicable when sensors are on a different network segment. For more information see the Installation instructions): <name of interface>`
+
+ 1. `Please type an IP address for the sensor monitoring interface (accessible by the sensors): <ip address>`
+
+ 1. `Please type a subnet mask for the sensor monitoring interface (accessible by the sensors): <subnet>`
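Before answering these prompts, you can sanity-check the values locally. A small POSIX-shell sketch (not part of the installer) that validates a dotted-quad IPv4 value such as the monitoring address or subnet mask:

```shell
#!/bin/sh
# Validate a dotted-quad IPv4 value: four octets, each 0-255.
valid_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || return 1
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  for octet in "$@"; do
    [ "$octet" -le 255 ] || return 1
  done
}

valid_ipv4 10.100.10.1 && echo "address ok"
valid_ipv4 255.255.255.0 && echo "mask ok"
valid_ipv4 10.100.10.999 || echo "rejected"
```

Catching a malformed address here is cheaper than redoing the installer's network step after a failed sign-in.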
-4. Sign-in credentials are automatically generated and presented. Keep these credentials in a safe place, because they're required for sign-in and administration.
+1. Sign-in credentials are automatically generated and presented. Keep these credentials in a safe place, because they're required for sign-in and administration.
- **Support**: The administrative user for user management.
- **CyberX**: The equivalent of root for accessing the appliance.
-5. The appliance restarts.
+1. The appliance restarts.
-6. Access the management console via the IP address previously configured: `<https://ip_address>`.
+1. Access the management console via the IP address previously configured: `<https://ip_address>`.
:::image type="content" source="media/tutorial-install-components/defender-for-iot-management-console-sign-in-screen.png" alt-text="Screenshot that shows the management console's sign-in screen.":::

## Post-installation validation
-To validate the installation of a physical appliance, you need to perform a number of tests. The same validation process applies to all the appliance types.
+To validate the installation of a physical appliance, you need to perform many tests. The same validation process applies to all the appliance types.
Perform the validation by using the GUI or the CLI. The validation is available to the user **Support** and the user **CyberX**.
Post-installation validation must include the following tests:
- The size of the backup folder
- The limitations of the backup folder
- When the last backup happened
- - How much space there is for the additional backup files
+ - How much space there is for the extra backup files
- **ifconfig**: Displays the parameters for the appliance's physical interfaces.
To access the tool:
1. Sign in to the sensor with the **Support** user credentials.
-2. Select **System Statistics** from the **System Settings** window.
+1. Select **System Statistics** from the **System Settings** window.
:::image type="icon" source="media/tutorial-install-components/system-statistics-icon.png" border="false":::
Verify that the system is up and running:
1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
-2. Enter `system sanity`.
+1. Enter `system sanity`.
-3. Check that all the services are green (running).
+1. Check that all the services are green (running).
:::image type="content" source="media/tutorial-install-components/support-screen.png" alt-text="Screenshot that shows running services.":::
-4. Verify that **System is UP! (prod)** appears at the bottom.
+1. Verify that **System is UP! (prod)** appears at the bottom.
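When you run this check over SSH in a script, you can grep the captured output for the success marker. A hedged sketch; the sample text below is illustrative, and the exact `system sanity` output format may vary between versions:

```shell
#!/bin/sh
# Succeed if captured `system sanity` output contains the success marker.
sanity_ok() {
  grep -q 'System is UP! (prod)'
}

# Illustrative sample only - not a verbatim transcript.
sample='core: running
web-apps: running
System is UP! (prod)'

echo "$sample" | sanity_ok && echo "appliance healthy"
```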
**Test 2: Version check**
Verify that the correct version is used:
1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
-2. Enter `system version`.
+1. Enter `system version`.
-3. Check that the correct version appears.
+1. Check that the correct version appears.
**Test 3: Network validation**
Verify that all the input interfaces configured during the installation process are running:
1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
-2. Enter `network list` (the equivalent of the Linux command `ifconfig`).
+1. Enter `network list` (the equivalent of the Linux command `ifconfig`).
-3. Validate that the required input interfaces appear. For example, if two quad Copper NICs are installed, there should be 10 interfaces in the list.
+1. Validate that the required input interfaces appear. For example, if two quad Copper NICs are installed, there should be 10 interfaces in the list.
:::image type="content" source="media/tutorial-install-components/interface-list-screen.png" alt-text="Screenshot that shows the list of interfaces.":::
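The count can also be checked mechanically. A sketch that verifies a minimum number of lines in `network list`/`ifconfig`-style output (a shortened illustrative sample is used here; on the appliance, pipe the real listing in):

```shell
#!/bin/sh
# Succeed if the piped-in interface listing has at least N lines.
enough_interfaces() {
  required=$1
  actual=$(wc -l)
  [ "$actual" -ge "$required" ]
}

sample='1: lo
2: eth0
3: eth1'

echo "$sample" | enough_interfaces 3 && echo "interface count ok"
```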
Verify that you can access the console web GUI:
1. Connect a laptop with an Ethernet cable to the management port (**Gb1**).
-2. Define the laptop NIC address to be in the same range as the appliance.
+1. Define the laptop NIC address to be in the same range as the appliance.
:::image type="content" source="media/tutorial-install-components/access-to-ui.png" alt-text="Screenshot that shows management access to the UI.":::
-3. Ping the appliance's IP address from the laptop to verify connectivity (default: 10.100.10.1).
+1. Ping the appliance's IP address from the laptop to verify connectivity (default: 10.100.10.1).
-4. Open the Chrome browser in the laptop and enter the appliance's IP address.
+1. Open the Chrome browser in the laptop and enter the appliance's IP address.
-5. In the **Your connection is not private** window, select **Advanced** and proceed.
+1. In the **Your connection is not private** window, select **Advanced** and proceed.
-6. The test is successful when the Defender for IoT sign-in screen appears.
+1. The test is successful when the Defender for IoT sign-in screen appears.
:::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot that shows access to management console.":::
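Steps 3 through 6 can be pre-checked from the laptop without opening a browser. A bash sketch using `/dev/tcp` to confirm the appliance answers on the HTTPS port; the default appliance address from the steps above is assumed, so substitute your own:

```shell
#!/bin/bash
# Succeed if host:port accepts a TCP connection within 2 seconds.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open 10.100.10.1 443; then
  echo "appliance web UI reachable"
else
  echo "no answer on 443 - check cabling and the laptop NIC address"
fi
```

A refused or timed-out connection here usually means a cabling or NIC-address problem rather than a browser issue.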
Verify that you can access the console web GUI:
1. Verify that the computer that you're trying to connect from is on the same network as the appliance.
-2. Verify that the GUI network is connected to the management port.
+1. Verify that the GUI network is connected to the management port.
-3. Ping the appliance's IP address. If there is no ping:
+1. Ping the appliance's IP address. If there is no ping:
1. Connect a monitor and a keyboard to the appliance.
Verify that you can access the console web GUI:
:::image type="content" source="media/tutorial-install-components/network-list.png" alt-text="Screenshot that shows the network list.":::
-4. If the network parameters are misconfigured, use the following procedure to change them:
+1. If the network parameters are misconfigured, use the following procedure to change them:
1. Use the command `network edit-settings`.
Verify that you can access the console web GUI:
1. To apply the settings, select **Y**.
-5. After restart, connect with the support user credentials and use the `network list` command to verify that the parameters were changed.
+1. After restart, connect with the support user credentials and use the `network list` command to verify that the parameters were changed.
-6. Try to ping and connect from the GUI again.
+1. Try to ping and connect from the GUI again.
### The appliance isn't responding
1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-2. Use the **Support** user's credentials to sign in.
+1. Use the **Support** user's credentials to sign in.
-3. Use the `system sanity` command and check that all processes are running.
+1. Use the `system sanity` command and check that all processes are running.
:::image type="content" source="media/tutorial-install-components/system-sanity-screen.png" alt-text="Screenshot that shows the system sanity command.":::
For any other issues, contact [Microsoft Support](https://support.microsoft.com/
### Configure a SPAN port on an existing vSwitch
-A vSwitch does not have mirroring capabilities, but you can use a simple workaround to implement a SPAN port.
+A vSwitch does not have mirroring capabilities, but you can use a workaround to implement a SPAN port.
To configure a SPAN port:
1. Open vSwitch properties.
-2. Select **Add**.
+1. Select **Add**.
-3. Select **Virtual Machine** > **Next**.
+1. Select **Virtual Machine** > **Next**.
-4. Insert a network label **SPAN Network**, select **VLAN ID** > **All**, and then select **Next**.
+1. Insert a network label **SPAN Network**, select **VLAN ID** > **All**, and then select **Next**.
-5. Select **Finish**.
+1. Select **Finish**.
-6. Select **SPAN Network** > **Edit*.
+1. Select **SPAN Network** > **Edit**.
-7. Select **Security**, and verify that the **Promiscuous Mode** policy is set to **Accept** mode.
+1. Select **Security**, and verify that the **Promiscuous Mode** policy is set to **Accept** mode.
-8. Select **OK**, and then select **Close** to close the vSwitch properties.
+1. Select **OK**, and then select **Close** to close the vSwitch properties.
-9. Open the **XSense VM** properties.
+1. Open the **XSense VM** properties.
-10. For **Network Adapter 2**, select the **SPAN** network.
+1. For **Network Adapter 2**, select the **SPAN** network.
-11. Select **OK**.
+1. Select **OK**.
-12. Connect to the sensor and verify that mirroring works.
+1. Connect to the sensor and verify that mirroring works.
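One way to script part of the final verification from the sensor side is to confirm the capture interface is in promiscuous mode. A sketch that checks the `PROMISC` flag in `ip link show`-style output (an illustrative sample line is used; pipe real output in on the sensor):

```shell
#!/bin/sh
# Succeed if the piped-in `ip link show` line carries the PROMISC flag.
is_promiscuous() {
  grep -q 'PROMISC'
}

sample='3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500'

echo "$sample" | is_promiscuous && echo "capture interface is promiscuous"
```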
## Appendix B: Access sensors from the on-premises management console
To enable tunneling:
1. Sign in to the on-premises management console's CLI with **CyberX** or **Support** user credentials.
-2. Enter `sudo cyberx-management-tunnel-enable`.
+1. Enter `sudo cyberx-management-tunnel-enable`.
-3. Select **Enter**.
+1. Select **Enter**.
-4. Enter `--port 10000`.
+1. Enter `--port 10000`.
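After the tunnel is enabled, you can confirm that something is listening on port 10000. A sketch that scans `ss -tln`-style output for the port (illustrative sample shown; on the console, pipe real `ss -tln` output in):

```shell
#!/bin/sh
# Succeed if the piped-in `ss -tln` output shows a listener on the given port.
listening_on() {
  grep -Eq "[:.]$1[[:space:]]"
}

sample='LISTEN 0 128 0.0.0.0:10000 0.0.0.0:*'

echo "$sample" | listening_on 10000 && echo "tunnel port is listening"
```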
### Next steps
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
description: Troubleshoot your sensor and on-premises management console to elim
Previously updated : 1/3/2021 Last updated : 03/14/2021
To recover your password:
1. Select **Next**, and your user and system-generated password for your management console will then appear.
   > [!NOTE]
- > When you sign in to a sensor or on-premise management console for the first time it will be linked to the subscription you connected it to. If you need to reset the password for the CyberX or Support user you will need to select that subscription. For more information on recovering a CyberX or Support user password, see [Resetting a user's password for the sensor or on-premises management console](how-to-create-and-manage-users.md#resetting-a-users-password-for-the-sensor-or-on-premises-management-console)
+ > When you sign in to a sensor or on-premises management console for the first time, it will be linked to the subscription you connected it to. If you need to reset the password for the CyberX or Support user, you will need to select that subscription. For more information on recovering a CyberX or Support user password, see [Resetting passwords](how-to-create-and-manage-users.md#resetting-passwords).
### Investigate a lack of traffic
defender-for-iot Iot Security Azure Rtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/iot-security-azure-rtos.md
Title: Security module for Azure RTOS overview
-description: Learn more about the security module for Azure RTOS support and implementation as part of Azure Defender for IoT.
+ Title: Defender-IoT-micro-agent for Azure RTOS overview
+description: Learn more about the Defender-IoT-micro-agent for Azure RTOS support and implementation as part of Azure Defender for IoT.
documentationcenter: na
Last updated 01/14/2021
-# Overview: Defender for IoT security module for Azure RTOS (preview)
+# Overview: Defender for IoT Defender-IoT-micro-agent for Azure RTOS (preview)
-The Azure Defender for IoT micro module provides a comprehensive security solution for devices that use Azure RTOS. It provides coverage for common threats and potential malicious activities on real-time operating system (RTOS) devices. Azure RTOS now ships with the Azure IoT security module built in.
+The Azure Defender for IoT micro module provides a comprehensive security solution for devices that use Azure RTOS. It provides coverage for common threats and potential malicious activities on real-time operating system (RTOS) devices. Azure RTOS now ships with the Azure IoT Defender-IoT-micro-agent built in.
:::image type="content" source="./media/architecture/azure-rtos-security-monitoring.png" alt-text="Visualization of Defender for IoT Azure RTOS.":::
By using the recommended infrastructure Defender for IoT provides, you can gain
## Get started protecting Azure RTOS devices
-Security Module for Azure RTOS is provided as a free download for your devices. The Defender for IoT cloud service is available with a 30-day trial per Azure subscription. To get started, download the [security module for Azure RTOS](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/defender-for-iot/iot-security-azure-rtos.md).
+Defender-IoT-micro-agent for Azure RTOS is provided as a free download for your devices. The Defender for IoT cloud service is available with a 30-day trial per Azure subscription. To get started, download the [Defender-IoT-micro-agent for Azure RTOS](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/defender-for-iot/iot-security-azure-rtos.md).
## Next steps
-In this article, you learned about the security module for Azure RTOS. To learn more about the security module and get started, see the following articles:
+In this article, you learned about the Defender-IoT-micro-agent for Azure RTOS. To learn more about the Defender-IoT-micro-agent and get started, see the following articles:
-- [Azure RTOS IoT security module concepts](concept-rtos-security-module.md)
-- [Quickstart: Azure RTOS IoT security module](quickstart-azure-rtos-security-module.md)
+- [Azure RTOS IoT Defender-IoT-micro-agent concepts](concept-rtos-security-module.md)
+- [Quickstart: Azure RTOS IoT Defender-IoT-micro-agent](quickstart-azure-rtos-security-module.md)
defender-for-iot Overview Security Agents https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/overview-security-agents.md
Use the following workflow to deploy and test your Defender for IoT security age
## Next steps
- Configure your [solution](quickstart-configure-your-solution.md)
-- [Create security modules](quickstart-create-security-twin.md)
+- [Create Defender-IoT-micro-agents](quickstart-create-security-twin.md)
- Configure [custom alerts](quickstart-create-custom-alerts.md)
- [Deploy a security agent](how-to-deploy-agent.md)
defender-for-iot Quickstart Azure Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-azure-rtos-security-module.md
Title: "Quickstart: Configure and enable the Security Module for Azure RTOS"
-description: In this quickstart you will learn how to onboard and enable the Security Module for Azure RTOS service in your Azure IoT Hub.
+ Title: "Quickstart: Configure and enable the Defender-IoT-micro-agent for Azure RTOS"
+description: Learn how to onboard and enable the Defender-IoT-micro-agent for Azure RTOS service in your Azure IoT Hub.
documentationcenter: na
Last updated 01/24/2021
-# Quickstart: Security Module for Azure RTOS
-This article provides an explanation of the prerequisites before getting started and explains how to enable the Security Module for Azure RTOS service on an IoT Hub. If you don't currently have an IoT Hub, see [Create an IoT Hub using the Azure portal](../iot-hub/iot-hub-create-through-portal.md) to get started.
+# Quickstart: Defender-IoT-micro-agent for Azure RTOS (preview)
+
+This article provides an explanation of the prerequisites before getting started and explains how to enable the Defender-IoT-micro-agent for Azure RTOS service on an IoT Hub. If you don't currently have an IoT Hub, see [Create an IoT Hub using the Azure portal](../iot-hub/iot-hub-create-through-portal.md) to get started.
## Prerequisites
This article provides an explanation of the prerequisites before getting started
- NXP i.MX RT1060 EVK
- Microchip SAM E54 Xplained Pro EVK
-Download, compile, and run one of the .zip files for the specific board and tool (IAR, semi's IDE or PC) of your choice from the [Security Module for Azure RTOS GitHub resource](https://github.com/azure-rtos/azure-iot-preview/releases).
+Download, compile, and run one of the .zip files for the specific board and tool (IAR, semi's IDE or PC) of your choice from the [Defender-IoT-micro-agent for Azure RTOS GitHub resource](https://github.com/azure-rtos/azure-iot-preview/releases).
### Azure resources
An IoT Hub connection is required to get started.
The connection credentials are taken from the user application configuration **HOST_NAME**, **DEVICE_ID**, and **DEVICE_SYMMETRIC_KEY**.
-The Security Module for Azure RTOS uses Azure IoT Middleware connections based on the **MQTT** protocol.
+The Defender-IoT-micro-agent for Azure RTOS uses Azure IoT Middleware connections based on the **MQTT** protocol.
## Next steps
Advance to the next article to finish configuring and customizing your solution.
> [!div class="nextstepaction"]
-> [Configure Security Module for Azure RTOS](how-to-azure-rtos-security-module.md)
+> [Configure and customize Defender-IoT-micro-agent for Azure RTOS (preview)](how-to-azure-rtos-security-module.md)
defender-for-iot Quickstart Configure Your Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-configure-your-solution.md
Defender for IoT now monitors your newly added resource groups, and surfaces r
## Next steps
-Advance to the next article to learn how to create security modules...
+Advance to the next article to learn how to create Defender-IoT-micro-agents...
> [!div class="nextstepaction"]
-> [Create security modules](quickstart-create-security-twin.md)
+> [Create Defender-IoT-micro-agents](quickstart-create-security-twin.md)
defender-for-iot Quickstart Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-micro-agent-module-twin.md
Defender for IoT has the ability to fully integrate with your existing IoT devic
Learn more about the concept of [device twins](../iot-hub/iot-hub-devguide-device-twins.md) in Azure IoT Hub.
-## Security module twins
+## Defender-IoT-micro-agent twins
-Defender for IoT uses a security module twin for each device. The security module twin holds all of the information that is relevant to device security, for each specific device in your solution. Device security properties are configured through a dedicated security module twin for safer communication, to enable updates, and maintenance that requires fewer resources.
+Defender for IoT uses a Defender-IoT-micro-agent twin for each device. The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security, for each specific device in your solution. Device security properties are configured through a dedicated Defender-IoT-micro-agent twin for safer communication, to enable updates, and maintenance that requires fewer resources.
## Understanding DefenderIotMicroAgent module twins
Defender for IoT offers the capability to fully integrate your existing IoT devi
To learn more about the general concept of module twins in Azure IoT Hub, see [IoT Hub module twins](../iot-hub/iot-hub-devguide-module-twins.md).
-Defender for IoT uses the module twin mechanism, and maintains a security module twin named `DefenderIotMicroAgent` for each of your devices.
+Defender for IoT uses the module twin mechanism, and maintains a Defender-IoT-micro-agent twin named `DefenderIotMicroAgent` for each of your devices.
-To take full advantage of all Defender for IoT feature's, you need to create, configure, and use the security module twins for every device in the service.
+To take full advantage of all Defender for IoT features, you need to create, configure, and use the Defender-IoT-micro-agent twins for every device in the service.
## Create DefenderIotMicroAgent module twin
To take full advantage of all Defender for IoT features, you need to create, co
To manually create a new **DefenderIotMicroAgent** module twin for a device:
-1. In your IoT Hub, locate and select the device on which to create a security module twin.
+1. In your IoT Hub, locate and select the device on which to create a Defender-IoT-micro-agent twin.
1. Select **Add module identity**.
To manually create a new **DefenderIotMicroAgent** module twin for a device:
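As an alternative to the portal steps above, the same module identity can be created from the Azure CLI. A hedged sketch assuming the `azure-iot` CLI extension is installed; the hub and device names are placeholders:

```shell
#!/bin/sh
# Sketch: create the DefenderIotMicroAgent module identity via the Azure CLI.
# With DRY_RUN=1 the command is printed instead of calling Azure.
create_micro_agent_twin() {
  hub="$1" device="$2"
  cmd="az iot hub module-identity create --hub-name $hub --device-id $device --module-id DefenderIotMicroAgent"
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "$cmd"; else $cmd; fi
}

DRY_RUN=1
create_micro_agent_twin my-hub my-device  # prints the az command it would run
```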
## Verify the creation of a module twin
-To verify if a security module twin exists for a specific device:
+To verify if a Defender-IoT-micro-agent twin exists for a specific device:
1. In your Azure IoT Hub, select **IoT devices** from the **Explorers** menu.
defender-for-iot Quickstart Create Security Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-create-security-twin.md
See [IoT Hub module twins](../iot-hub/iot-hub-devguide-module-twins.md) to learn
Defender for IoT makes use of the module twin mechanism and maintains a security module twin named _azureiotsecurity_ for each of your devices.
-The security module twin holds all the information relevant to device security for each of your devices.
+The Defender-IoT-micro-agent twin holds all the information relevant to device security for each of your devices.
-To make full use of Defender for IoT features, you'll need to create, configure, and use this security module twins for every device in the service.
+To make full use of Defender for IoT features, you'll need to create, configure, and use these Defender-IoT-micro-agent twins for every device in the service.
## Create azureiotsecurity module twin
defender-for-iot Quickstart System Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-system-prerequisites.md
This article lists the system prerequisites for running Azure Defender for IoT.
- Hardware appliances for NTA sensors.
- The Azure Subscription Contributor role. It's required only during onboarding for defining committed devices and connection to Azure Sentinel.
- Azure IoT Hub (Free or Standard tier) **Contributor** role, for cloud-connected management. Make sure that the **Azure Defender for IoT** feature is enabled.
-- For device-level security module support, Defender for IoT agents support a growing list of devices and platforms. See the [list of supported platforms](how-to-deploy-agent.md).
+- For device-level Defender-IoT-micro-agent support, Defender for IoT agents support a growing list of devices and platforms. See the [list of supported platforms](how-to-deploy-agent.md).
## Supported service regions
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/references-defender-for-iot-glossary.md
This glossary provides a brief description of important terms and concepts for t
| **Device inventory - sensor** | The device inventory displays an extensive range of device attributes detected by Defender for IoT. Options are available to:<br /><br />- Filter displayed information.<br /><br />- Export this information to a CSV file.<br /><br />- Import Windows registry details. | **[Group](#g)** <br /><br />**[Device inventory - on-premises management console](#d)** |
| **Device inventory - on-premises management console** | Device information from connected sensors can be viewed from the on-premises management console in the device inventory. This gives users of the on-premises management console a comprehensive view of all network information. | **[Device inventory - sensor](#d)<br /><br />[Device inventory - data integrator](#d)** |
| **Device inventory - data integrator** | The data integration capabilities of the on-premises management console let you enhance the data in the device inventory with information from other enterprise resources. Example resources are CMDBs, DNS, firewalls, and Web APIs. | **[Device inventory - on-premises management console](#d)** |
-| **Device twins** `(DB)` | Device twins are JSON documents that store device state information including metadata, configurations, and conditions. | [Module Twin](#m) <br /> <br />[Security module twin](#s) |
+| **Device twins** `(DB)` | Device twins are JSON documents that store device state information including metadata, configurations, and conditions. | [Module Twin](#m) <br /> <br />[Defender-IoT-micro-agent twin](#s) |
## E
This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more |
|--|--|--|
| **Micro Agent** `(DB)` | Provides depth security capabilities for IoT devices including security posture and threat detection. | |
-| **Module twin** `(DB)` | Module twins are JSON documents that store module state information including metadata, configurations, and conditions. | [Device twin](#d) <br /> <br />[Security module twin](#s) |
+| **Module twin** `(DB)` | Module twins are JSON documents that store module state information including metadata, configurations, and conditions. | [Device twin](#d) <br /> <br />[Defender-IoT-micro-agent twin](#s) |
| **Mute Alert Event** | Instruct Defender for IoT to continuously ignore activity with identical devices and comparable traffic. | **[Alert](#glossary-a)<br /><br />[Exclusion rule](#e)<br /><br />[Acknowledge alert event](#glossary-a)<br /><br />[Learn alert event](#l)** |

## N
This glossary provides a brief description of important terms and concepts for t
| Term | Description | Learn more |
|--|--|--|
| **Security alert** | Alerts that deal with security issues, such as excessive SMB sign in attempts or malware detections. | **[Alert](#glossary-a)<br /><br />[Operational alert](#o)** |
-| **Security module twin** `(DB)` | The security module twin holds all of the information that is relevant to device security, for each specific device in your solution. | [Device twin](#d) <br /> <br />[Module Twin](#m) |
+| **Defender-IoT-micro-agent twin** `(DB)` | The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security, for each specific device in your solution. | [Device twin](#d) <br /> <br />[Module Twin](#m) |
| **Selective probing** | Defender for IoT passively inspects IT and OT traffic and detects relevant information on devices, their attributes, their behavior, and more. In certain cases, some information might not be visible in passive network analyses.<br /><br />When this happens, you can use the safe, granular probing tools in Defender for IoT to discover important information on previously unreachable devices. | - |
| **Sensor** | The physical or virtual machine on which the Defender for IoT platform is installed. | **[On-premises management console](#o)** |
| **Site** | A location that contains a factory or other entity. The site should contain a zone or several zones in which a sensor is installed. | **[Zone](#z)** |
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/release-notes.md
documentationcenter: na
editor: ''- ms.devlang: na na Previously updated : 02/08/2021 Last updated : 03/14/2021
This article lists new features and feature enhancements for Defender for IoT. Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## February 2021
+## March 2021
-### Sensor - enhanced custom alert rules
+### Sensor - enhanced custom alert rules (Public preview)
You can now create custom alert rules based on the day, group of days, and time period in which network activity was detected. Working with day and time rule conditions is useful, for example