Updates from: 03/11/2021 04:08:43
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-api-connector.md
To use an [API connector](api-connectors-overview.md), you first create the API
5. Provide a display name for the call. For example, **Validate user information**.
6. Provide the **Endpoint URL** for the API call.
-7. Provide the authentication information for the API.
+7. Choose the **Authentication type** and configure the authentication information for calling your API. See the section below for options on securing your API.
- - Only Basic Authentication is currently supported. If you wish to use an API without Basic Authentication for development purposes, simply enter a 'dummy' **Username** and **Password** that your API can ignore. For use with an Azure Function with an API key, you can include the code as a query parameter in the **Endpoint URL** (for example, `https://contoso.azurewebsites.net/api/endpoint?code=0123456789`).
+ ![Configure an API connector](./media/add-api-connector/api-connector-config.png)
- ![Configure a new API connector](./media/add-api-connector/api-connector-config.png)
8. Select **Save**.
+## Securing the API endpoint
+You can protect your API endpoint by using either HTTP basic authentication or HTTPS client certificate authentication (preview). In either case, you provide the credentials that Azure AD B2C uses when calling your API endpoint. Your API endpoint then checks the credentials and makes authorization decisions.
+
+### HTTP basic authentication
+HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617). Azure AD B2C sends an HTTP request with the client credentials (`username` and `password`) in the `Authorization` header. The credentials are formatted as the base64-encoded string `username:password`. Your API then checks these values to determine whether to reject the call.
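For example, with the hypothetical credentials `apiuser` and `secret`, a request to the endpoint from the earlier example would carry a header like the following sketch:

```http
POST /api/endpoint HTTP/1.1
Host: contoso.azurewebsites.net
Authorization: Basic YXBpdXNlcjpzZWNyZXQ=
Content-type: application/json
```

Here `YXBpdXNlcjpzZWNyZXQ=` is the base64 encoding of `apiuser:secret`; your API decodes the header and compares the values against its configured credentials.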
+
+### HTTPS client certificate authentication (preview)
+
+> [!IMPORTANT]
+> This functionality is in preview and is provided without a service-level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Client certificate authentication is a mutual, certificate-based authentication scheme in which the client provides a client certificate to the server to prove its identity. In this case, Azure AD B2C uses the certificate that you upload as part of the API connector configuration. This happens as part of the SSL handshake. Only services that have proper certificates can access your REST API service. The client certificate is an X.509 digital certificate. In production environments, it should be signed by a certificate authority.
+
+To create a certificate, you can use [Azure Key Vault](../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. You can then [export the certificate](../key-vault/certificates/how-to-export-certificate.md) and upload it for use in the API connectors configuration. Note that a password is only required for certificate files protected by a password. You can also use PowerShell's [New-SelfSignedCertificate cmdlet](./secure-rest-api.md#prepare-a-self-signed-certificate-optional) to generate a self-signed certificate.
+
+For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and validate the certificate from your API endpoint.
+
+It's recommended that you set reminder alerts for when your certificate will expire. To upload a new certificate to an existing API connector, select the API connector under **API connectors (preview)** and select **Upload new certificate**. The most recently uploaded certificate that is not expired and whose start date has passed is used automatically by Azure AD B2C.
+
+### API Key
+Some services use an "API key" mechanism to restrict access to your HTTP endpoints during development. For [Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint?code=0123456789`.
+
+This mechanism should not be used alone in production, so configuration for basic or certificate authentication is always required. If you do not wish to implement any authentication method (not recommended) for development purposes, you can choose basic authentication and use temporary values for `username` and `password` that your API can disregard while you implement proper authorization.
## The request sent to your API

An API connector materializes as an **HTTP POST** request, sending user attributes ('claims') as key-value pairs in a JSON body. Attributes are serialized similarly to [Microsoft Graph](/graph/api/resources/user#properties) user properties.
Content-type: application/json
Only user properties and custom attributes listed in the **Azure AD B2C** > **User attributes** experience are available to be sent in the request.
-Custom attributes exist in the **extension_\<extensions-app-id>_CustomAttribute** format in the directory. Your API should expect to receive claims in this same serialized format. For more information on custom attributes, see [Define custom attributes in Azure Active Directory B2C](user-flow-custom-attributes.md).
+Custom attributes exist in the **extension_\<extensions-app-id>_CustomAttribute** format in the directory. Your API should expect to receive claims in this same serialized format. For more information on custom attributes, see [Define custom attributes in Azure AD B2C](user-flow-custom-attributes.md).
Additionally, the **UI Locales ('ui_locales')** claim is sent by default in all requests. It provides a user's locale(s), as configured on their device, which the API can use to return internationalized responses.
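Putting these pieces together, the JSON body of a request might resemble the following sketch (the claim names and values are illustrative and depend on the attributes your user flow collects):

```json
{
  "email": "johnsmith@fabrikam.onmicrosoft.com",
  "displayName": "John Smith",
  "extension_<extensions-app-id>_CustomAttribute": "custom attribute value",
  "ui_locales": "en-US"
}
```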
See an example of a [blocking response](#example-of-a-blocking-response).
An API connector at this step in the sign-up process is invoked after the attribute collection page, if one is included. This step is always invoked before a user account is created.
-<!-- The following are examples of scenarios you might enable at this point during sign-up: -->
-<!--
-- Validate user input data and ask a user to resubmit data.
-- Block a user sign-up based on data entered by the user.
-- Perform identity verification.
-- Query external systems for existing data about the user and overwrite the user-provided value. -->
-
### Example request sent to the API at this step

```http
Content-type: application/json
| Parameter | Type | Required | Description |
| -- | -- | -- | -- |
-| version | String | Yes | The version of the API. |
| action | String | Yes | Value must be `Continue`. |
| \<builtInUserAttribute> | \<attribute-type> | No | Returned values can overwrite values collected from a user. They can also be returned in the token if selected as an **Application claim**. |
| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`. Returned values can overwrite values collected from a user. They can also be returned in the token if selected as an **Application claim**. |
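As a sketch, a continuation response assembled from these parameters might look like the following (the attribute values are illustrative):

```json
{
  "version": "1.0.0",
  "action": "Continue",
  "postalCode": "12349",
  "extension_CustomAttribute": "value"
}
```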
Content-type: application/json
### Example of a validation-error response

```http
HTTP/1.1 400 Bad Request
Content-type: application/json
Content-type: application/json
| Parameter | Type | Required | Description |
| -- | - | -- | -- |
-| version | String | Yes | The version of the API. |
+| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `ValidationError`. |
| status | Integer | Yes | Must be value `400` for a ValidationError response. |
| userMessage | String | Yes | Message to display to the user. |
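For example, a validation-error response built from these parameters might look like this sketch (the message text is illustrative):

```json
{
  "version": "1.0.0",
  "status": 400,
  "action": "ValidationError",
  "userMessage": "Please enter a valid postal code."
}
```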
Ensure that:
* Your API explicitly checks for null values of received claims.
* Your API responds as quickly as possible to ensure a fluid user experience.
* If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use the [Premium plan](../azure-functions/functions-scale.md).
+
### Use logging

In general, it's helpful to use the logging tools enabled by your web API service, like [Application Insights](../azure-functions/functions-monitoring.md), to monitor your API for unexpected error codes, exceptions, and poor performance.
In general, it's helpful to use the logging tools enabled by your web API servic
* Monitor your API for long response times.

## Next steps
-<!-
- Get started with our [samples](code-samples.md#api-connectors).
active-directory-b2c Api Connectors Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/api-connectors-overview.md
> API connectors for sign-up is a public preview feature of Azure AD B2C. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

## Overview
-As a developer or IT administrator, you can use API connectors to integrate your sign-up user flows with web APIs to customize the sign-up experience. For example, with API connectors, you can:
+As a developer or IT administrator, you can use API connectors to integrate your sign-up user flows with web APIs to customize the sign-up experience and integrate with external systems. For example, with API connectors, you can:
- **Validate user input data**. Validate against malformed or invalid user data. For example, you can validate user-provided data against existing data in an external data store or list of permitted values. If invalid, you can ask a user to provide valid data or block the user from continuing the sign-up flow.
- **Integrate with a custom approval workflow**. Connect to a custom approval system for managing and limiting account creation.
As a developer or IT administrator, you can use API connectors to integrate your
- **Perform identity verification**. Use an identity verification service to add an extra level of security to account creation decisions.
- **Run custom business logic**. You can trigger downstream events in your cloud systems to send push notifications, update corporate databases, manage permissions, audit databases, and perform other custom actions.
-An API connector provides Azure Active Directory with the information needed to call an API including an endpoint URL and authentication. Once you configure an API connector, you can enable it for a specific step in a user flow. When a user reaches that step in the sign up flow, the API connector is invoked and materializes as an HTTP POST request to your API, sending user information ("claims") as key-value pairs in a JSON body. The API response can affect the execution of the user flow. For example, the API response can block a user from signing up, ask the user to re-enter information, or overwrite and append user attributes.
+An API connector provides Azure Active Directory with the information needed to call an API endpoint by defining the HTTP endpoint URL and the authentication for the API call. Once you configure an API connector, you can enable it for a specific step in a user flow. When a user reaches that step in the sign-up flow, the API connector is invoked and materializes as an HTTP POST request to your API, sending user information ("claims") as key-value pairs in a JSON body. The API response can affect the execution of the user flow. For example, the API response can block a user from signing up, ask the user to re-enter information, or overwrite and append user attributes.
## Where you can enable an API connector in a user flow
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/authorization-code-flow.md
Previously updated : 02/19/2019 Last updated : 03/10/2021
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
| prompt | Optional | The type of user interaction that is required. Currently, the only valid value is `login`, which forces the user to enter their credentials on that request. Single sign-on will not take effect. |
| code_challenge | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - native apps, SPAs, and confidential clients like web apps. |
| `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if for some reason the client cannot support SHA256. <br/><br/>If excluded, `code_challenge` is assumed to be plaintext if `code_challenge` is included. Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is required for [single page apps using the authorization code flow](tutorial-register-spa.md).|
+| login_hint | No| Can be used to pre-fill the sign-in name field of the sign-in page. For more information, see [Prepopulate the sign-in name](direct-signin.md#prepopulate-the-sign-in-name). |
+| domain_hint | No| Provides a hint to Azure AD B2C about the social identity provider that should be used for sign-in. If a valid value is included, the user goes directly to the identity provider sign-in page. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider). |
+| Custom parameters | No| Custom parameters that can be used with [custom policies](custom-policy-overview.md). For example, [dynamic custom page content URI](customize-ui-with-html.md?pivots=b2c-custom-policy#configure-dynamic-custom-page-content-uri), or [key-value claim resolvers](claim-resolver-overview.md#oauth2-key-value-parameters). |
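For illustration, an authorization request that adds the optional `login_hint` and `domain_hint` parameters might look like the following sketch (the tenant, policy, and parameter values are placeholders):

```http
GET https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_signin/oauth2/v2.0/authorize?
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
&response_type=code
&redirect_uri=https%3A%2F%2Fjwt.ms
&scope=openid
&login_hint=bob%40contoso.com
&domain_hint=facebook.com
```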
At this point, the user is asked to complete the user flow's workflow. This might involve the user entering their username and password, signing in with a social identity, signing up for the directory, or any other number of steps. User actions depend on how the user flow is defined.
active-directory-b2c Azure Ad External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-ad-external-identities-videos.md
Get a deeper view into the features and technical aspects of the Azure AD B2C se
| Video title | | Video title | |
|:|:|:|:|
-|[Azure AD B2C sign-up sign-in](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=6&t=2s) 10:25 | [![image](./media/external-identities-videos/customer-sign-up-sign-in.png)](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=6) | [Azure AD B2C single sign on and self service password reset](https://www.youtube.com/watch?v=kRV-7PSLK38&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=7) 8:40 | [![image](./media/external-identities-videos/single-sign-on.png)](https://www.youtube.com/watch?v=kRV-7PSLK38&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=7) |
-| [Application and identity migration to Azure AD B2C](https://www.youtube.com/watch?v=Xw_YwSJmhIQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9) 10:34 | [![image](./media/external-identities-videos/identity-migration-aad-b2c.png)](https://www.youtube.com/watch?v=Xw_YwSJmhIQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9) | [Build resilient and scalable flows using Azure AD B2C](https://www.youtube.com/watch?v=8f_Ozpw9yTs&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=12) 16:47 | [![image](./media/external-identities-videos/b2c-scalable-flows.png)](https://www.youtube.com/watch?v=8f_Ozpw9yTs&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=12) |
-| [Building a custom CIAM solution with Azure AD B2C and ISV alliances](https://www.youtube.com/watch?v=UZjiGDD0wa8&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=8) 10:01 | [![image](./media/external-identities-videos/build-custom-b2c-solution.png)](https://www.youtube.com/watch?v=UZjiGDD0wa8&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=8) | [Protecting Web APIs with Azure AD B2C](https://www.youtube.com/watch?v=wuUu71RcsIo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=10) 19:03 | [![image](./media/external-identities-videos/protecting-web-apis.png)](https://www.youtube.com/watch?v=wuUu71RcsIo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=10) |
-| [Integration of SAML with Azure AD B2C](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=11) 9:09 | [![image](./media/external-identities-videos/saml-integration.png)](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=11) |
+|[Azure AD B2C sign-up sign-in](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=6&t=2s) 10:25 | [:::image type="icon" source="./media/external-identities-videos/customer-sign-up-sign-in.png" border="false":::](https://www.youtube.com/watch?v=c8rN1ZaR7wk&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=6) | [Azure AD B2C single sign on and self service password reset](https://www.youtube.com/watch?v=kRV-7PSLK38&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=7) 8:40 | [:::image type="icon" source="./media/external-identities-videos/single-sign-on.png" border="false":::](https://www.youtube.com/watch?v=kRV-7PSLK38&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=7) |
+| [Application and identity migration to Azure AD B2C](https://www.youtube.com/watch?v=Xw_YwSJmhIQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9) 10:34 | [:::image type="icon" source="./media/external-identities-videos/identity-migration-aad-b2c.png" border="false":::](https://www.youtube.com/watch?v=Xw_YwSJmhIQ&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=9) | [Build resilient and scalable flows using Azure AD B2C](https://www.youtube.com/watch?v=8f_Ozpw9yTs&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=12) 16:47 | [:::image type="icon" source="./media/external-identities-videos/b2c-scalable-flows.png" border="false":::](https://www.youtube.com/watch?v=8f_Ozpw9yTs&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=12) |
+| [Building a custom CIAM solution with Azure AD B2C and ISV alliances](https://www.youtube.com/watch?v=UZjiGDD0wa8&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=8) 10:01 | [:::image type="icon" source="./media/external-identities-videos/build-custom-b2c-solution.png" border="false":::](https://www.youtube.com/watch?v=UZjiGDD0wa8&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=8) | [Protecting Web APIs with Azure AD B2C](https://www.youtube.com/watch?v=wuUu71RcsIo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=10) 19:03 | [:::image type="icon" source="./media/external-identities-videos/protecting-web-apis.png" border="false":::](https://www.youtube.com/watch?v=wuUu71RcsIo&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=10) |
+| [Integration of SAML with Azure AD B2C](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=11) 9:09 | [:::image type="icon" source="./media/external-identities-videos/saml-integration.png" border="false":::](https://www.youtube.com/watch?v=r2TIVBCm7v4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=11) |
## Azure Active Directory B2C how to series
Learn how to perform various use cases in Azure AD B2C.
| Video title | | Video title | |
|:|:|:|:|
-|[Azure AD: Monitoring and reporting Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) 6:57 | [![image](./media/external-identities-videos/monitoring-reporting-aad-b2c.png)](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) | [Azure AD B2C user migration using Microsoft Graph API](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=5) 7:09 | [![image](./media/external-identities-videos/user-migration-msgraph-api.png)](https://www.youtube.com/watch?v=9BRXBtkBzL4list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=5) |
-| [Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) 8:22 | [![image](./media/external-identities-videos/user-migration-stratagies.png)](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) | [How to localize or customize language using Azure AD B2C](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) 20:41 | [![image](./media/external-identities-videos/language-localization.png)](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) |
+|[Azure AD: Monitoring and reporting Azure AD B2C using Azure Monitor](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) 6:57 | [:::image type="icon" source="./media/external-identities-videos/monitoring-reporting-aad-b2c.png" border="false":::](https://www.youtube.com/watch?v=Mu9GQy-CbXI&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=1) | [Azure AD B2C user migration using Microsoft Graph API](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=5) 7:09 | [:::image type="icon" source="./media/external-identities-videos/user-migration-msgraph-api.png" border="false":::](https://www.youtube.com/watch?v=9BRXBtkBzL4&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=5) |
+| [Azure AD B2C user migration strategies](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) 8:22 | [:::image type="icon" source="./media/external-identities-videos/user-migration-stratagies.png" border="false":::](https://www.youtube.com/watch?v=lCWR6PGUgz0&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=2) | [How to localize or customize language using Azure AD B2C](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) 20:41 | [:::image type="icon" source="./media/external-identities-videos/language-localization.png" border="false":::](https://www.youtube.com/watch?v=yqrX5_tA7Ms&list=PL3ZTgFEc7LyuJ8YRSGXBUVItCPnQz3YX0&index=13) |
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-user-input.md
Previously updated : 03/04/2021 Last updated : 03/10/2021
Open the extensions file of your policy. For example, <em>`SocialAndLocalAccount
1. Add the city claim to the **ClaimsSchema** element.

```xml
-<ClaimType Id="city">
- <DisplayName>City where you work</DisplayName>
- <DataType>string</DataType>
- <UserInputType>DropdownSingleSelect</UserInputType>
- <Restriction>
- <Enumeration Text="Bellevue" Value="bellevue" SelectByDefault="false" />
- <Enumeration Text="Redmond" Value="redmond" SelectByDefault="false" />
- <Enumeration Text="Kirkland" Value="kirkland" SelectByDefault="false" />
- </Restriction>
-</ClaimType>
+<!--
+<BuildingBlocks>
+ <ClaimsSchema> -->
+ <ClaimType Id="city">
+ <DisplayName>City where you work</DisplayName>
+ <DataType>string</DataType>
+ <UserInputType>DropdownSingleSelect</UserInputType>
+ <Restriction>
+ <Enumeration Text="Bellevue" Value="bellevue" SelectByDefault="false" />
+ <Enumeration Text="Redmond" Value="redmond" SelectByDefault="false" />
+ <Enumeration Text="Kirkland" Value="kirkland" SelectByDefault="false" />
+ </Restriction>
+ </ClaimType>
+ <!--
+ </ClaimsSchema>
+</BuildingBlocks>-->
```

## Add a claim to the user interface
Override these technical profiles in the extension file. Find the **ClaimsProvid
      <PersistedClaim ClaimTypeReferenceId="city"/>
    </PersistedClaims>
  </TechnicalProfile>
- <!-- Read data after user authenticates with a local account. -->
+ <!-- Read data after user resets the password. -->
<TechnicalProfile Id="AAD-UserReadUsingEmailAddress"> <OutputClaims> <OutputClaim ClaimTypeReferenceId="city" /> </OutputClaims> </TechnicalProfile>
- <!-- Read data after user authenticates with a federated account. -->
+ <!-- Read data after user authenticates with a local account. -->
<TechnicalProfile Id="AAD-UserReadUsingObjectId"> <OutputClaims> <OutputClaim ClaimTypeReferenceId="city" /> </OutputClaims> </TechnicalProfile>
+ <!-- Read data after user authenticates with a federated account. -->
+ <TechnicalProfile Id="AAD-UserReadUsingAlternativeSecurityId">
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="city" />
+ </OutputClaims>
+ </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-mailjet.md
Previously updated : 10/15/2020 Last updated : 03/10/2021
In your policy, add the following claim types to the `<ClaimsSchema>` element wi
These claim types are necessary to generate and verify the email address using a one-time password (OTP) code.

```XML
-<ClaimType Id="Otp">
- <DisplayName>Secondary One-time password</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="emailRequestBody">
- <DisplayName>Mailjet request body</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="VerificationCode">
- <DisplayName>Secondary Verification Code</DisplayName>
- <DataType>string</DataType>
- <UserHelpText>Enter your email verification code</UserHelpText>
- <UserInputType>TextBox</UserInputType>
-</ClaimType>
+<!--
+<BuildingBlocks>
+ <ClaimsSchema> -->
+ <ClaimType Id="Otp">
+ <DisplayName>Secondary One-time password</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="emailRequestBody">
+ <DisplayName>Mailjet request body</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="VerificationCode">
+ <DisplayName>Secondary Verification Code</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>Enter your email verification code</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+ <!--
+ </ClaimsSchema>
+</BuildingBlocks> -->
```

## Add the claims transformation
Add the following claims transformation to the `<ClaimsTransformations>` element
* Update the value of the `Messages.0.Subject` subject line input parameter with a subject line appropriate for your organization.

```XML
-<ClaimsTransformation Id="GenerateEmailRequestBody" TransformationMethod="GenerateJson">
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="Messages.0.To.0.Email" />
- <InputClaim ClaimTypeReferenceId="otp" TransformationClaimType="Messages.0.Variables.otp" />
- <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="Messages.0.Variables.email" />
- </InputClaims>
- <InputParameters>
- <!-- Update the template_id value with the ID of your Mailjet template. -->
- <InputParameter Id="Messages.0.TemplateID" DataType="int" Value="1234567"/>
- <InputParameter Id="Messages.0.TemplateLanguage" DataType="boolean" Value="true"/>
-
- <!-- Update with an email appropriate for your organization. -->
- <InputParameter Id="Messages.0.From.Email" DataType="string" Value="my_email@mydomain.com"/>
-
- <!-- Update with a subject line appropriate for your organization. -->
- <InputParameter Id="Messages.0.Subject" DataType="string" Value="Contoso account email verification code"/>
- </InputParameters>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="emailRequestBody" TransformationClaimType="outputClaim"/>
- </OutputClaims>
-</ClaimsTransformation>
+<!--
+<BuildingBlocks>
+ <ClaimsTransformations> -->
+ <ClaimsTransformation Id="GenerateEmailRequestBody" TransformationMethod="GenerateJson">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="Messages.0.To.0.Email" />
+ <InputClaim ClaimTypeReferenceId="otp" TransformationClaimType="Messages.0.Variables.otp" />
+ <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="Messages.0.Variables.email" />
+ </InputClaims>
+ <InputParameters>
+ <!-- Update the template_id value with the ID of your Mailjet template. -->
+ <InputParameter Id="Messages.0.TemplateID" DataType="int" Value="1234567"/>
+ <InputParameter Id="Messages.0.TemplateLanguage" DataType="boolean" Value="true"/>
+
+ <!-- Update with an email appropriate for your organization. -->
+ <InputParameter Id="Messages.0.From.Email" DataType="string" Value="my_email@mydomain.com"/>
+
+ <!-- Update with a subject line appropriate for your organization. -->
+ <InputParameter Id="Messages.0.Subject" DataType="string" Value="Contoso account email verification code"/>
+ </InputParameters>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="emailRequestBody" TransformationClaimType="outputClaim"/>
+ </OutputClaims>
+ </ClaimsTransformation>
+ <!--
+ </ClaimsTransformations>
+</BuildingBlocks> -->
```

## Add DataUri content definition
Add the following claims transformation to the `<ClaimsTransformations>` element
Below the claims transformations within `<BuildingBlocks>`, add the following [ContentDefinition](contentdefinitions.md) to reference the version 2.1.0 data URI:

```XML
-<ContentDefinitions>
- <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
- </ContentDefinition>
-</ContentDefinitions>
+<!--
+<BuildingBlocks> -->
+ <ContentDefinitions>
+ <ContentDefinition Id="api.localaccountsignup">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ </ContentDefinition>
+ </ContentDefinitions>
+<!--
+</BuildingBlocks> -->
```

## Create a DisplayControl
This example display control is configured to:
Under content definitions, still within `<BuildingBlocks>`, add the following [DisplayControl](display-controls.md) of type [VerificationControl](display-control-verification.md) to your policy.

```XML
-<DisplayControls>
- <DisplayControl Id="emailVerificationControl" UserInterfaceControlType="VerificationControl">
- <DisplayClaims>
- <DisplayClaim ClaimTypeReferenceId="email" Required="true" />
- <DisplayClaim ClaimTypeReferenceId="verificationCode" ControlClaimType="VerificationCode" Required="true" />
- </DisplayClaims>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="email" />
- </OutputClaims>
- <Actions>
- <Action Id="SendCode">
- <ValidationClaimsExchange>
- <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="GenerateOtp" />
- <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="SendOtp" />
- </ValidationClaimsExchange>
- </Action>
- <Action Id="VerifyCode">
- <ValidationClaimsExchange>
- <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="VerifyOtp" />
- </ValidationClaimsExchange>
- </Action>
- </Actions>
- </DisplayControl>
-</DisplayControls>
+<!--
+<BuildingBlocks> -->
+ <DisplayControls>
+ <DisplayControl Id="emailVerificationControl" UserInterfaceControlType="VerificationControl">
+ <DisplayClaims>
+ <DisplayClaim ClaimTypeReferenceId="email" Required="true" />
+ <DisplayClaim ClaimTypeReferenceId="verificationCode" ControlClaimType="VerificationCode" Required="true" />
+ </DisplayClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="email" />
+ </OutputClaims>
+ <Actions>
+ <Action Id="SendCode">
+ <ValidationClaimsExchange>
+ <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="GenerateOtp" />
+ <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="SendOtp" />
+ </ValidationClaimsExchange>
+ </Action>
+ <Action Id="VerifyCode">
+ <ValidationClaimsExchange>
+ <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="VerifyOtp" />
+ </ValidationClaimsExchange>
+ </Action>
+ </Actions>
+ </DisplayControl>
+ </DisplayControls>
+<!--
+</BuildingBlocks> -->
```

## Add OTP technical profiles
The `GenerateOtp` technical profile generates a code for the email address. The
Add the following technical profiles to the `<ClaimsProviders>` element.

```XML
-<ClaimsProvider>
- <DisplayName>One time password technical profiles</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="GenerateOtp">
- <DisplayName>Generate one time password</DisplayName>
- <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.OneTimePasswordProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
- <Metadata>
- <Item Key="Operation">GenerateCode</Item>
- <Item Key="CodeExpirationInSeconds">1200</Item>
- <Item Key="CodeLength">6</Item>
- <Item Key="CharacterSet">0-9</Item>
- <Item Key="ReuseSameCode">true</Item>
- <Item Key="MaxNumAttempts">5</Item>
- </Metadata>
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="email" PartnerClaimType="identifier" />
- </InputClaims>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="otp" PartnerClaimType="otpGenerated" />
- </OutputClaims>
- </TechnicalProfile>
-
- <TechnicalProfile Id="VerifyOtp">
- <DisplayName>Verify one time password</DisplayName>
- <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.OneTimePasswordProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
- <Metadata>
- <Item Key="Operation">VerifyCode</Item>
- </Metadata>
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="email" PartnerClaimType="identifier" />
- <InputClaim ClaimTypeReferenceId="verificationCode" PartnerClaimType="otpToVerify" />
- </InputClaims>
- </TechnicalProfile>
- </TechnicalProfiles>
-</ClaimsProvider>
+<!--
+<ClaimsProviders> -->
+ <ClaimsProvider>
+ <DisplayName>One time password technical profiles</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="GenerateOtp">
+ <DisplayName>Generate one time password</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.OneTimePasswordProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="Operation">GenerateCode</Item>
+ <Item Key="CodeExpirationInSeconds">1200</Item>
+ <Item Key="CodeLength">6</Item>
+ <Item Key="CharacterSet">0-9</Item>
+ <Item Key="ReuseSameCode">true</Item>
+ <Item Key="NumRetryAttempts">5</Item>
+ </Metadata>
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="email" PartnerClaimType="identifier" />
+ </InputClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="otp" PartnerClaimType="otpGenerated" />
+ </OutputClaims>
+ </TechnicalProfile>
+
+ <TechnicalProfile Id="VerifyOtp">
+ <DisplayName>Verify one time password</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.OneTimePasswordProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="Operation">VerifyCode</Item>
+ </Metadata>
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="email" PartnerClaimType="identifier" />
+ <InputClaim ClaimTypeReferenceId="verificationCode" PartnerClaimType="otpToVerify" />
+ </InputClaims>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+<!--
+</ClaimsProviders> -->
```

## Add a REST API technical profile
To localize the email, you must send localized strings to Mailjet, or your email
1. Add the following [Localization](localization.md) element.

```xml
- <Localization Enabled="true">
- <SupportedLanguages DefaultLanguage="en" MergeBehavior="Append">
- <SupportedLanguage>en</SupportedLanguage>
- <SupportedLanguage>es</SupportedLanguage>
- </SupportedLanguages>
- <LocalizedResources Id="api.custom-email.en">
- <LocalizedStrings>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_subject">Contoso account email verification code</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_message">Thanks for validating the account</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_code">Your code is</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_signature">Sincerely</LocalizedString>
- </LocalizedStrings>
- </LocalizedStrings>
- </LocalizedResources>
- <LocalizedResources Id="api.custom-email.es">
- <LocalizedStrings>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_subject">Código de verificación del correo electrónico de la cuenta de Contoso</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_message">Gracias por comprobar la cuenta de </LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_code">Su código es</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_signature">Sinceramente</LocalizedString>
- </LocalizedStrings>
- </LocalizedResources>
- </Localization>
+ <!--
+ <BuildingBlocks> -->
+ <Localization Enabled="true">
+ <SupportedLanguages DefaultLanguage="en" MergeBehavior="Append">
+ <SupportedLanguage>en</SupportedLanguage>
+ <SupportedLanguage>es</SupportedLanguage>
+ </SupportedLanguages>
+ <LocalizedResources Id="api.custom-email.en">
+ <LocalizedStrings>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_subject">Contoso account email verification code</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_message">Thanks for validating the account</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_code">Your code is</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_signature">Sincerely</LocalizedString>
+ </LocalizedStrings>
+ </LocalizedStrings>
+ </LocalizedResources>
+ <LocalizedResources Id="api.custom-email.es">
+ <LocalizedStrings>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_subject">Código de verificación del correo electrónico de la cuenta de Contoso</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_message">Gracias por comprobar la cuenta de </LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_code">Su código es</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_signature">Sinceramente</LocalizedString>
+ </LocalizedStrings>
+ </LocalizedResources>
+ </Localization>
+ <!--
+ </BuildingBlocks> -->
```

1. Add references to the LocalizedResources elements by updating the [ContentDefinitions](contentdefinitions.md) element.

```xml
- <ContentDefinitions>
- <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
- <LocalizedResourcesReferences MergeBehavior="Prepend">
- <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
- <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
- </LocalizedResourcesReferences>
- </ContentDefinition>
- <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
- <LocalizedResourcesReferences MergeBehavior="Prepend">
- <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
- <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
- </LocalizedResourcesReferences>
- </ContentDefinition>
- </ContentDefinitions>
+ <!--
+ <BuildingBlocks> -->
+ <ContentDefinitions>
+ <ContentDefinition Id="api.localaccountsignup">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <LocalizedResourcesReferences MergeBehavior="Prepend">
+ <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
+ <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
+ </LocalizedResourcesReferences>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <LocalizedResourcesReferences MergeBehavior="Prepend">
+ <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
+ <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
+ </LocalizedResourcesReferences>
+ </ContentDefinition>
+ </ContentDefinitions>
+ <!--
+ </BuildingBlocks> -->
```

1. Finally, add the following input claims transformation to the `LocalAccountSignUpWithLogonEmail` and `LocalAccountDiscoveryUsingEmailAddress` technical profiles.
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-email-sendgrid.md
Previously updated : 10/15/2020 Last updated : 03/10/2021
In your policy, add the following claim types to the `<ClaimsSchema>` element wi
These claim types are necessary to generate and verify the email address using a one-time password (OTP) code.

```xml
-<ClaimType Id="Otp">
- <DisplayName>Secondary One-time password</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="emailRequestBody">
- <DisplayName>SendGrid request body</DisplayName>
- <DataType>string</DataType>
-</ClaimType>
-<ClaimType Id="VerificationCode">
- <DisplayName>Secondary Verification Code</DisplayName>
- <DataType>string</DataType>
- <UserHelpText>Enter your email verification code</UserHelpText>
- <UserInputType>TextBox</UserInputType>
-</ClaimType>
+<!--
+<BuildingBlocks>
+ <ClaimsSchema> -->
+ <ClaimType Id="Otp">
+ <DisplayName>Secondary One-time password</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="emailRequestBody">
+ <DisplayName>SendGrid request body</DisplayName>
+ <DataType>string</DataType>
+ </ClaimType>
+ <ClaimType Id="VerificationCode">
+ <DisplayName>Secondary Verification Code</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>Enter your email verification code</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+ <!--
+ </ClaimsSchema>
+</BuildingBlocks> -->
```

## Add the claims transformation
Add the following claims transformation to the `<ClaimsTransformations>` element
* Update the value of the `personalizations.0.dynamic_template_data.subject` subject line input parameter with a subject line appropriate for your organization.

```xml
-<ClaimsTransformation Id="GenerateEmailRequestBody" TransformationMethod="GenerateJson">
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.to.0.email" />
- <InputClaim ClaimTypeReferenceId="otp" TransformationClaimType="personalizations.0.dynamic_template_data.otp" />
- <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.dynamic_template_data.email" />
- </InputClaims>
- <InputParameters>
- <!-- Update the template_id value with the ID of your SendGrid template. -->
- <InputParameter Id="template_id" DataType="string" Value="d-989077fbba9746e89f3f6411f596fb96"/>
- <InputParameter Id="from.email" DataType="string" Value="my_email@mydomain.com"/>
- <!-- Update with a subject line appropriate for your organization. -->
- <InputParameter Id="personalizations.0.dynamic_template_data.subject" DataType="string" Value="Contoso account email verification code"/>
- </InputParameters>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="emailRequestBody" TransformationClaimType="outputClaim"/>
- </OutputClaims>
-</ClaimsTransformation>
+<!--
+<BuildingBlocks>
+ <ClaimsTransformations> -->
+ <ClaimsTransformation Id="GenerateEmailRequestBody" TransformationMethod="GenerateJson">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.to.0.email" />
+ <InputClaim ClaimTypeReferenceId="otp" TransformationClaimType="personalizations.0.dynamic_template_data.otp" />
+ <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="personalizations.0.dynamic_template_data.email" />
+ </InputClaims>
+ <InputParameters>
+ <!-- Update the template_id value with the ID of your SendGrid template. -->
+ <InputParameter Id="template_id" DataType="string" Value="d-989077fbba9746e89f3f6411f596fb96"/>
+ <InputParameter Id="from.email" DataType="string" Value="my_email@mydomain.com"/>
+ <!-- Update with a subject line appropriate for your organization. -->
+ <InputParameter Id="personalizations.0.dynamic_template_data.subject" DataType="string" Value="Contoso account email verification code"/>
+ </InputParameters>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="emailRequestBody" TransformationClaimType="outputClaim"/>
+ </OutputClaims>
+ </ClaimsTransformation>
+ <!--
+ </ClaimsTransformations>
+</BuildingBlocks> -->
```

## Add DataUri content definition
Add the following claims transformation to the `<ClaimsTransformations>` element
Below the claims transformations within `<BuildingBlocks>`, add the following [ContentDefinition](contentdefinitions.md) to reference the version 2.1.0 data URI:

```xml
-<ContentDefinitions>
- <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
- </ContentDefinition>
- <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
- </ContentDefinition>
-</ContentDefinitions>
+<!--
+<BuildingBlocks> -->
+ <ContentDefinitions>
+ <ContentDefinition Id="api.localaccountsignup">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ </ContentDefinition>
+ </ContentDefinitions>
+<!--
+</BuildingBlocks> -->
```

## Create a DisplayControl
This example display control is configured to:
Under content definitions, still within `<BuildingBlocks>`, add the following [DisplayControl](display-controls.md) of type [VerificationControl](display-control-verification.md) to your policy.

```xml
-<DisplayControls>
- <DisplayControl Id="emailVerificationControl" UserInterfaceControlType="VerificationControl">
- <DisplayClaims>
- <DisplayClaim ClaimTypeReferenceId="email" Required="true" />
- <DisplayClaim ClaimTypeReferenceId="verificationCode" ControlClaimType="VerificationCode" Required="true" />
- </DisplayClaims>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="email" />
- </OutputClaims>
- <Actions>
- <Action Id="SendCode">
- <ValidationClaimsExchange>
- <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="GenerateOtp" />
- <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="SendOtp" />
- </ValidationClaimsExchange>
- </Action>
- <Action Id="VerifyCode">
- <ValidationClaimsExchange>
- <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="VerifyOtp" />
- </ValidationClaimsExchange>
- </Action>
- </Actions>
- </DisplayControl>
-</DisplayControls>
+<!--
+<BuildingBlocks> -->
+ <DisplayControls>
+ <DisplayControl Id="emailVerificationControl" UserInterfaceControlType="VerificationControl">
+ <DisplayClaims>
+ <DisplayClaim ClaimTypeReferenceId="email" Required="true" />
+ <DisplayClaim ClaimTypeReferenceId="verificationCode" ControlClaimType="VerificationCode" Required="true" />
+ </DisplayClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="email" />
+ </OutputClaims>
+ <Actions>
+ <Action Id="SendCode">
+ <ValidationClaimsExchange>
+ <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="GenerateOtp" />
+ <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="SendOtp" />
+ </ValidationClaimsExchange>
+ </Action>
+ <Action Id="VerifyCode">
+ <ValidationClaimsExchange>
+ <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="VerifyOtp" />
+ </ValidationClaimsExchange>
+ </Action>
+ </Actions>
+ </DisplayControl>
+ </DisplayControls>
+<!--
+</BuildingBlocks> -->
```

## Add OTP technical profiles
The `GenerateOtp` technical profile generates a code for the email address. The
Add the following technical profiles to the `<ClaimsProviders>` element.

```xml
-<ClaimsProvider>
- <DisplayName>One time password technical profiles</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="GenerateOtp">
- <DisplayName>Generate one time password</DisplayName>
- <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.OneTimePasswordProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
- <Metadata>
- <Item Key="Operation">GenerateCode</Item>
- <Item Key="CodeExpirationInSeconds">1200</Item>
- <Item Key="CodeLength">6</Item>
- <Item Key="CharacterSet">0-9</Item>
- <Item Key="ReuseSameCode">true</Item>
- <Item Key="MaxNumAttempts">5</Item>
- </Metadata>
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="email" PartnerClaimType="identifier" />
- </InputClaims>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="otp" PartnerClaimType="otpGenerated" />
- </OutputClaims>
- </TechnicalProfile>
-
- <TechnicalProfile Id="VerifyOtp">
- <DisplayName>Verify one time password</DisplayName>
- <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.OneTimePasswordProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
- <Metadata>
- <Item Key="Operation">VerifyCode</Item>
- </Metadata>
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="email" PartnerClaimType="identifier" />
- <InputClaim ClaimTypeReferenceId="verificationCode" PartnerClaimType="otpToVerify" />
- </InputClaims>
- </TechnicalProfile>
- </TechnicalProfiles>
-</ClaimsProvider>
+<!--
+<ClaimsProviders> -->
+ <ClaimsProvider>
+ <DisplayName>One time password technical profiles</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="GenerateOtp">
+ <DisplayName>Generate one time password</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.OneTimePasswordProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="Operation">GenerateCode</Item>
+ <Item Key="CodeExpirationInSeconds">1200</Item>
+ <Item Key="CodeLength">6</Item>
+ <Item Key="CharacterSet">0-9</Item>
+ <Item Key="ReuseSameCode">true</Item>
+ <Item Key="NumRetryAttempts">5</Item>
+ </Metadata>
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="email" PartnerClaimType="identifier" />
+ </InputClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="otp" PartnerClaimType="otpGenerated" />
+ </OutputClaims>
+ </TechnicalProfile>
+
+ <TechnicalProfile Id="VerifyOtp">
+ <DisplayName>Verify one time password</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.OneTimePasswordProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="Operation">VerifyCode</Item>
+ </Metadata>
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="email" PartnerClaimType="identifier" />
+ <InputClaim ClaimTypeReferenceId="verificationCode" PartnerClaimType="otpToVerify" />
+ </InputClaims>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+<!--
+</ClaimsProviders> -->
```

## Add a REST API technical profile
To localize the email, you must send localized strings to SendGrid, or your emai
1. Add the following [Localization](localization.md) element.

```xml
- <Localization Enabled="true">
- <SupportedLanguages DefaultLanguage="en" MergeBehavior="Append">
- <SupportedLanguage>en</SupportedLanguage>
- <SupportedLanguage>es</SupportedLanguage>
- </SupportedLanguages>
- <LocalizedResources Id="api.custom-email.en">
- <LocalizedStrings>
- <!--Email template parameters-->
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_subject">Contoso account email verification code</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_message">Thanks for validating the account</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_code">Your code is</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_signature">Sincerely</LocalizedString>
- </LocalizedStrings>
- </LocalizedResources>
- <LocalizedResources Id="api.custom-email.es">
- <LocalizedStrings>
- <!--Email template parameters-->
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_subject">Código de verificación del correo electrónico de la cuenta de Contoso</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_message">Gracias por comprobar la cuenta de </LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_code">Su código es</LocalizedString>
- <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_signature">Sinceramente</LocalizedString>
- </LocalizedStrings>
- </LocalizedResources>
- </Localization>
+ <!--
+ <BuildingBlocks> -->
+ <Localization Enabled="true">
+ <SupportedLanguages DefaultLanguage="en" MergeBehavior="Append">
+ <SupportedLanguage>en</SupportedLanguage>
+ <SupportedLanguage>es</SupportedLanguage>
+ </SupportedLanguages>
+ <LocalizedResources Id="api.custom-email.en">
+ <LocalizedStrings>
+ <!--Email template parameters-->
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_subject">Contoso account email verification code</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_message">Thanks for validating the account</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_code">Your code is</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_signature">Sincerely</LocalizedString>
+ </LocalizedStrings>
+ </LocalizedResources>
+ <LocalizedResources Id="api.custom-email.es">
+ <LocalizedStrings>
+ <!--Email template parameters-->
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_subject">Código de verificación del correo electrónico de la cuenta de Contoso</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_message">Gracias por comprobar la cuenta de </LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_code">Su código es</LocalizedString>
+ <LocalizedString ElementType="GetLocalizedStringsTransformationClaimType" StringId="email_signature">Sinceramente</LocalizedString>
+ </LocalizedStrings>
+ </LocalizedResources>
+ </Localization>
+ <!--
+ </BuildingBlocks> -->
```

1. Add references to the LocalizedResources elements by updating the [ContentDefinitions](contentdefinitions.md) element.

```XML
- <ContentDefinitions>
- <ContentDefinition Id="api.localaccountsignup">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
- <LocalizedResourcesReferences MergeBehavior="Prepend">
- <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
- <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
- </LocalizedResourcesReferences>
- </ContentDefinition>
- <ContentDefinition Id="api.localaccountpasswordreset">
- <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
- <LocalizedResourcesReferences MergeBehavior="Prepend">
- <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
- <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
- </LocalizedResourcesReferences>
- </ContentDefinition>
- </ContentDefinitions>
+ <!--
+ <BuildingBlocks> -->
+ <ContentDefinitions>
+ <ContentDefinition Id="api.localaccountsignup">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <LocalizedResourcesReferences MergeBehavior="Prepend">
+ <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
+ <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
+ </LocalizedResourcesReferences>
+ </ContentDefinition>
+ <ContentDefinition Id="api.localaccountpasswordreset">
+ <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.0</DataUri>
+ <LocalizedResourcesReferences MergeBehavior="Prepend">
+ <LocalizedResourcesReference Language="en" LocalizedResourcesReferenceId="api.custom-email.en" />
+ <LocalizedResourcesReference Language="es" LocalizedResourcesReferenceId="api.custom-email.es" />
+ </LocalizedResourcesReferences>
+ </ContentDefinition>
+ </ContentDefinitions>
+ <!--
+ </BuildingBlocks> -->
``` 1. Finally, add the following input claims transformation to the `LocalAccountSignUpWithLogonEmail` and `LocalAccountDiscoveryUsingEmailAddress` technical profiles.
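A minimal sketch of that reference, assuming the claims transformation you defined for `GetLocalizedStringsTransformation` is named `GetLocalizedStringsForEmail`:

```xml
<InputClaimsTransformations>
  <!-- Copies the localized email strings into claims before the verification email is sent -->
  <InputClaimsTransformation ReferenceId="GetLocalizedStringsForEmail" />
</InputClaimsTransformations>
```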
active-directory-b2c Direct Signin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/direct-signin.md
The domain hint query string parameter can be set to one of the following domains:
::: zone pivot="b2c-custom-policy"
-To support domain hing parameter, you can configure the domain name using the `<Domain>domain name</Domain>` XML element of any `<ClaimsProvider>`.
+To support the domain hint parameter, you can configure the domain name by using the `<Domain>domain name</Domain>` XML element of any `<ClaimsProvider>`.
```xml <ClaimsProvider>
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/localization-string-ids.md
Previously updated : 03/08/2021 Last updated : 03/10/2021
The following are the IDs for a [Verification display control](display-control-v
| ID | Default value | | -- | - |
-|intro_msg| Verification is necessary. Please click Send button.|
+|intro_msg <sup>*</sup>| Verification is necessary. Please click Send button.|
|success_send_code_msg | Verification code has been sent. Please copy it to the input box below.| |failure_send_code_msg | We are having trouble verifying your email address. Please enter a valid email address and try again.| |success_verify_code_msg | E-mail address verified. You can now continue.|
The following are the IDs for a [Verification display control](display-control-v
|but_send_new_code | Send new code| |but_change_claims | Change e-mail|
+Note: The `intro_msg` element is hidden and not shown on the self-asserted page. To make it visible, use [HTML customization](customize-ui-with-html.md) with Cascading Style Sheets. For example:
+
+```css
+.verificationInfoText div{display: block!important}
+```
+ ### Verification display control example ```xml
active-directory-b2c Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/openid-connect.md
Previously updated : 10/12/2020 Last updated : 03/10/2021
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
| redirect_uri | No | The `redirect_uri` parameter of your application, where authentication responses can be sent and received by your application. It must exactly match one of the `redirect_uri` parameters that you registered in the Azure portal, except that it must be URL encoded. | | response_mode | No | The method that is used to send the resulting authorization code back to your application. It can be either `query`, `form_post`, or `fragment`. The `form_post` response mode is recommended for best security. | | state | No | A value included in the request that's also returned in the token response. It can be a string of any content that you want. A randomly generated unique value is typically used for preventing cross-site request forgery attacks. The state is also used to encode information about the user's state in the application before the authentication request occurred, such as the page they were on. |
+| login_hint | No| Can be used to pre-fill the sign-in name field of the sign-in page. For more information, see [Prepopulate the sign-in name](direct-signin.md#prepopulate-the-sign-in-name). |
+| domain_hint | No| Provides a hint to Azure AD B2C about the social identity provider that should be used for sign-in. If a valid value is included, the user goes directly to the identity provider sign-in page. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider). |
+| Custom parameters | No| Custom parameters that can be used with [custom policies](custom-policy-overview.md). For example, [dynamic custom page content URI](customize-ui-with-html.md?pivots=b2c-custom-policy#configure-dynamic-custom-page-content-uri), or [key-value claim resolvers](claim-resolver-overview.md#oauth2-key-value-parameters). |
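For illustration, a hypothetical authorization request that pre-fills the sign-in name and sends the user straight to the Facebook sign-in page might look like the following; the tenant name, policy name, and redirect URI are placeholders:

```
GET https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_signupsignin/oauth2/v2.0/authorize?
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
&response_type=code
&redirect_uri=https%3A%2F%2Fjwt.ms
&scope=openid
&login_hint=bob%40contoso.com
&domain_hint=facebook.com
```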
At this point, the user is asked to complete the workflow. The user might have to enter their username and password, sign in with a social identity, or sign up for the directory. There could be any other number of steps depending on how the user flow is defined.
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 10/26/2020 Last updated : 03/10/2021
You can also call a REST API technical profile with your business logic, overwri
| setting.enableRememberMe <sup>2</sup>| No| Displays the [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) checkbox. Possible values: `true` , or `false` (default). | | setting.inputVerificationDelayTimeInMilliseconds <sup>3</sup>| No| Improves user experience, by waiting for the user to stop typing, and then validate the value. Default value 2000 milliseconds. | | IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
+|forgotPasswordLinkOverride <sup>4</sup>| No | A password reset claims exchange to be executed. For more information, see [Self-service password reset](add-password-reset-policy.md). |
Notes: 1. Available for content definition [DataUri](contentdefinitions.md#datauri) type of `unifiedssp`, or `unifiedssd`. 1. Available for content definition [DataUri](contentdefinitions.md#datauri) type of `unifiedssp`, or `unifiedssd`. [Page layout version](page-layout.md) 1.1.0 and above. 1. Available for [page layout version](page-layout.md) 1.2.0 and above.
+1. Available for content definition [DataUri](contentdefinitions.md#datauri) type of `unifiedssp`. [Page layout version](page-layout.md) 2.1.2 and above.
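A minimal sketch of how this setting might be applied, assuming a starter-pack-style sign-in technical profile and a claims exchange named `ForgotPasswordExchange` defined in your user journey; the `setting.` prefix on the metadata key is an assumption matching the other self-asserted settings:

```xml
<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
  <Metadata>
    <!-- Claims exchange to run when the user selects the password reset link -->
    <Item Key="setting.forgotPasswordLinkOverride">ForgotPasswordExchange</Item>
  </Metadata>
</TechnicalProfile>
```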
## Cryptographic keys
active-directory-b2c Technicalprofiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/technicalprofiles.md
Title: TechnicalProfiles
+ Title: Technical profiles
description: Specify the TechnicalProfiles element of a custom policy in Azure Active Directory B2C.
-# TechnicalProfiles
+# Technical profiles
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
-A technical profile provides a framework with a built-in mechanism to communicate with different type of parties. Technical profiles are used to communicate with your Azure AD B2C tenant, to create a user, or read a user profile. A technical profile can be self-asserted to enable interaction with the user. For example, collect the user's credential to sign in and then render the sign-up page or password reset page.
+A *technical profile* provides a framework with a built-in mechanism to communicate with different types of parties. Technical profiles are used to communicate with your Azure Active Directory B2C (Azure AD B2C) tenant to create a user or read a user profile. A technical profile can be self-asserted to enable interaction with the user. For example, a technical profile can collect the user's credential to sign in and then render the sign-up page or password reset page.
-## Type of technical profiles
+## Types of technical profiles
A technical profile enables these types of scenarios: -- [Application Insights](analytics-with-application-insights.md) - Sending event data to [Application Insights](../azure-monitor/app/app-insights-overview.md).-- [Azure Active Directory](active-directory-technical-profile.md) - Provides support for the Azure Active Directory B2C user management.-- [Azure AD Multi-Factor Authentication](multi-factor-auth-technical-profile.md) - provides support for verifying a phone number by using Azure AD Multi-Factor Authentication (MFA). -- [Claims transformation](claims-transformation-technical-profile.md) - Call output claims transformations to manipulate claims values, validate claims, or set default values for a set of output claims.-- [ID token hint](id-token-hint.md) - Validates `id_token_hint` JWT token signature, the issuer name, and the token audience and extracts the claim from the inbound token.-- [JWT token issuer](jwt-issuer-technical-profile.md) - Emits a JWT token that is returned back to the relying party application.-- [OAuth1](oauth1-technical-profile.md) - Federation with any OAuth 1.0 protocol identity provider.-- [OAuth2](oauth2-technical-profile.md) - Federation with any OAuth 2.0 protocol identity provider.-- [One time password](one-time-password-technical-profile.md) - Provides support for managing the generation and verification of a one-time password.-- [OpenID Connect](openid-connect-technical-profile.md) - Federation with any OpenID Connect protocol identity provider.-- [Phone factor](phone-factor-technical-profile.md) - Support for enrolling and verifying phone numbers.-- [RESTful provider](restful-technical-profile.md) - Call to REST API services, such as validate user input, enrich user data, or integrate with line-of-business applications.-- [SAML identity provider](identity-provider-generic-saml.md) - Federation with any SAML protocol identity provider.-- [SAML token issuer](saml-service-provider.md) - Emits a SAML token that is returned back to the relying party application.-- [Self-Asserted](self-asserted-technical-profile.md) - Interact with the user. For example, collect the user's credential to sign in, render the sign-up page, or password reset.-- [Session management](custom-policy-reference-sso.md) - Handle different types of sessions.
+- [Application Insights](analytics-with-application-insights.md): Sends event data to [Application Insights](../azure-monitor/app/app-insights-overview.md).
+- [Azure AD](active-directory-technical-profile.md): Provides support for the Azure AD B2C user management.
+- [Azure AD multifactor authentication](multi-factor-auth-technical-profile.md): Provides support for verifying a phone number by using Azure AD multifactor authentication.
+- [Claims transformation](claims-transformation-technical-profile.md): Calls output claims transformations to manipulate claims values, validate claims, or set default values for a set of output claims.
+- [ID token hint](id-token-hint.md): Validates the `id_token_hint` JWT token signature, the issuer name, and the token audience, and extracts the claim from the inbound token.
+- [JWT token issuer](jwt-issuer-technical-profile.md): Emits a JWT token that's returned back to the relying party application.
+- [OAuth1](oauth1-technical-profile.md): Federation with any OAuth 1.0 protocol identity provider.
+- [OAuth2](oauth2-technical-profile.md): Federation with any OAuth 2.0 protocol identity provider.
+- [One-time password](one-time-password-technical-profile.md): Provides support for managing the generation and verification of a one-time password.
+- [OpenID Connect](openid-connect-technical-profile.md): Federation with any OpenID Connect protocol identity provider.
+- [Phone factor](phone-factor-technical-profile.md): Supports enrolling and verifying phone numbers.
+- [RESTful provider](restful-technical-profile.md): Calls REST API services, such as validating user input, enriching user data, or integrating with line-of-business applications.
+- [SAML identity provider](identity-provider-generic-saml.md): Federation with any SAML protocol identity provider.
+- [SAML token issuer](saml-service-provider.md): Emits a SAML token that's returned back to the relying party application.
+- [Self-asserted](self-asserted-technical-profile.md): Interacts with the user. For example, collects the user's credential to sign in, render the sign-up page, or reset password.
+- [Session management](custom-policy-reference-sso.md): Handles different types of sessions.
## Technical profile flow
-All types of technical profiles share the same concept. Start by reading the input claims, run claims transformation. Then communicate with the configured party, such as an identity provider, REST API, or Azure AD directory services. After the process is completed, the technical profile returns the output claims and may run output claims transformation. The following diagram shows how the transformations and mappings referenced in the technical profile are processed. After the claims transformation is executed, the output claims are immediately stored in the claims bag. Regardless of the party the technical profile interacts with.
+All types of technical profiles share the same concept. They start by reading the input claims and run claims transformations. Then they communicate with the configured party, such as an identity provider, REST API, or Azure AD directory services. After the process is completed, the technical profile returns the output claims and might run output claims transformations. The following diagram shows how the transformations and mappings referenced in the technical profile are processed. After the claims transformation is executed, the output claims are immediately stored in the claims bag, regardless of the party the technical profile interacts with.
-![Diagram illustrating the technical profile flow](./media/technical-profiles/technical-profile-flow.png)
+![Diagram that illustrates the technical profile flow.](./media/technical-profiles/technical-profile-flow.png)
-1. **Single sign-on (SSO) session management** - Restores technical profile's session state, using [SSO session management](custom-policy-reference-sso.md).
-1. **Input claims transformation** - Before the technical profile is started, Azure AD B2C runs input [claims transformation](claimstransformations.md).
-1. **Input claims** - Claims are picked up from the claims bag that are used for the technical profile.
-1. **Technical profile execution** - The technical profile exchanges the claims with the configured party. For example:
- - Redirect the user to the identity provider to complete the sign-in. After successful sign-in, the user returns back and the technical profile execution continues.
- - Call a REST API while sending parameters as InputClaims and getting information back as OutputClaims.
- - Create or update the user account.
- - Sends and verifies the MFA text message.
-1. **Validation technical profiles** - A [self-asserted technical profile](self-asserted-technical-profile.md) can call [validation technical profiles](validation-technical-profile.md) to validate the data profiled by the user.
-1. **Output claims** - Claims are returned back to the claims bag. You can use those claims in the next orchestrations step, or output claims transformations.
-1. **Output claims transformations** - After the technical profile is completed, Azure AD B2C runs output [claims transformation](claimstransformations.md).
-1. **Single sign-on (SSO) session management** - Persists technical profile's data to the session, using [SSO session management](custom-policy-reference-sso.md).
+1. **Single sign-on (SSO) session management**: Restores the technical profile's session state by using [SSO session management](custom-policy-reference-sso.md).
+1. **Input claims transformation**: Before the technical profile is started, Azure AD B2C runs input [claims transformation](claimstransformations.md).
+1. **Input claims**: Claims that the technical profile uses are picked up from the claims bag.
+1. **Technical profile execution**: The technical profile exchanges the claims with the configured party. For example:
+ - Redirects the user to the identity provider to complete the sign-in. After successful sign-in, the user returns and the technical profile execution continues.
+ - Calls a REST API while sending parameters as InputClaims and getting information back as OutputClaims.
+ - Creates or updates the user account.
+ - Sends and verifies the multifactor authentication text message.
+1. **Validation technical profiles**: A [self-asserted technical profile](self-asserted-technical-profile.md) can call [validation technical profiles](validation-technical-profile.md) to validate the data provided by the user.
+1. **Output claims**: Claims are returned back to the claims bag. You can use those claims in the next orchestration step or in output claims transformations.
+1. **Output claims transformations**: After the technical profile is completed, Azure AD B2C runs output [claims transformations](claimstransformations.md).
+1. **SSO session management**: Persists the technical profile's data to the session by using [SSO session management](custom-policy-reference-sso.md).
-A **TechnicalProfiles** element contains a set of technical profiles supported by the claim provider. Every claims provider must have at least one technical profile. The technical profile determines the endpoints, and the protocols needed to communicate with the claims provider. A claims provider can have multiple technical profiles.
+A **TechnicalProfiles** element contains a set of technical profiles supported by the claims provider. Every claims provider must have at least one technical profile. The technical profile determines the endpoints and the protocols needed to communicate with the claims provider. A claims provider can have multiple technical profiles.
```xml <ClaimsProvider>
The **TechnicalProfile** element contains the following attribute:
| Attribute | Required | Description | ||||
-| Id | Yes | A unique identifier of the technical profile. The technical profile can be referenced using this identifier from other elements in the policy file. For example, **OrchestrationSteps** and **ValidationTechnicalProfile**. |
+| Id | Yes | A unique identifier of the technical profile. The technical profile can be referenced by using this identifier from other elements in the policy file. Examples are **OrchestrationSteps** and **ValidationTechnicalProfile**. |
-The **TechnicalProfile** contains the following elements:
+The **TechnicalProfile** element contains the following elements:
| Element | Occurrences | Description | | - | -- | -- |
The **TechnicalProfile** contains the following elements:
| DisplayName | 1:1 | The display name of the technical profile. | | Description | 0:1 | The description of the technical profile. | | Protocol | 1:1 | The protocol used for the communication with the other party. |
-| Metadata | 0:1 | A collection of key/value that controls the behavior of the technical profile. |
-| InputTokenFormat | 0:1 | The format of the input token. Possible values: `JSON`, `JWT`, `SAML11`, or `SAML2`. The `JWT` value represents a JSON Web Token as per IETF specification. The `SAML11` value represents a SAML 1.1 security token as per OASIS specification. The `SAML2` value represents a SAML 2.0 security token as per OASIS specification. |
-| OutputTokenFormat | 0:1 | The format of the output token. Possible values: `JSON`, `JWT`, `SAML11`, or `SAML2`. |
+| Metadata | 0:1 | A set of keys and values that controls the behavior of the technical profile. |
+| InputTokenFormat | 0:1 | The format of the input token. Possible values are `JSON`, `JWT`, `SAML11`, or `SAML2`. The `JWT` value represents a JSON Web Token per the IETF specification. The `SAML11` value represents a SAML 1.1 security token per the OASIS specification. The `SAML2` value represents a SAML 2.0 security token per the OASIS specification. |
+| OutputTokenFormat | 0:1 | The format of the output token. Possible values are `JSON`, `JWT`, `SAML11`, or `SAML2`. |
| CryptographicKeys | 0:1 | A list of cryptographic keys that are used in the technical profile. | | InputClaimsTransformations | 0:1 | A list of previously defined references to claims transformations that should be executed before any claims are sent to the claims provider or the relying party. |
-| InputClaims | 0:1 | A list of the previously defined references to claim types that are taken as input in the technical profile. |
-| PersistedClaims | 0:1 | A list of the previously defined references to claim types that will be persisted by the technical profile. |
-| DisplayClaims | 0:1 | A list of the previously defined references to claim types that are presented by the [self-asserted technical profile](self-asserted-technical-profile.md). The DisplayClaims feature is currently in **preview**. |
-| OutputClaims | 0:1 | A list of the previously defined references to claim types that are taken as output in the technical profile. |
+| InputClaims | 0:1 | A list of previously defined references to claim types that are taken as input in the technical profile. |
+| PersistedClaims | 0:1 | A list of previously defined references to claim types that will be persisted by the technical profile. |
+| DisplayClaims | 0:1 | A list of previously defined references to claim types that are presented by the [self-asserted technical profile](self-asserted-technical-profile.md). The DisplayClaims feature is currently in preview. |
+| OutputClaims | 0:1 | A list of previously defined references to claim types that are taken as output in the technical profile. |
| OutputClaimsTransformations | 0:1 | A list of previously defined references to claims transformations that should be executed after the claims are received from the claims provider. |
-| ValidationTechnicalProfiles | 0:n | A list of references to other technical profiles that the technical profile uses for validation purposes. For more information, see [validation technical profile](validation-technical-profile.md)|
-| SubjectNamingInfo | 0:1 | Controls the production of the subject name in tokens where the subject name is specified separately from claims. For example, OAuth or SAML. |
-| IncludeInSso | 0:1 | Whether usage of this technical profile should apply single sign-on (SSO) behavior for the session, or instead require explicit interaction. This element is valid only in SelfAsserted profiles used within a Validation technical profile. Possible values: `true` (default), or `false`. |
+| ValidationTechnicalProfiles | 0:n | A list of references to other technical profiles that the technical profile uses for validation purposes. For more information, see [Validation technical profile](validation-technical-profile.md).|
+| SubjectNamingInfo | 0:1 | Controls the production of the subject name in tokens where the subject name is specified separately from claims. Examples are OAuth or SAML. |
+| IncludeInSso | 0:1 | Whether usage of this technical profile should apply SSO behavior for the session or instead require explicit interaction. This element is valid only in SelfAsserted profiles used within a validation technical profile. Possible values are `true` (default) or `false`. |
| IncludeClaimsFromTechnicalProfile | 0:1 | An identifier of a technical profile from which you want all of the input and output claims to be added to this technical profile. The referenced technical profile must be defined in the same policy file. | | IncludeTechnicalProfile |0:1 | An identifier of a technical profile from which you want all data to be added to this technical profile. | | UseTechnicalProfileForSessionManagement | 0:1 | A different technical profile to be used for session management. |
-|EnabledForUserJourneys| 0:1 |Controls if the technical profile is executed in a user journey. |
+|EnabledForUserJourneys| 0:1 |Controls whether the technical profile is executed in a user journey. |
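To illustrate how these elements fit together, here's a hedged sketch of a REST API technical profile; the endpoint URL, policy key names, and claims are assumptions, not a definitive configuration:

```xml
<TechnicalProfile Id="REST-ValidateUserData">
  <DisplayName>Validate user input with a REST API</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <!-- Assumed endpoint; claims are sent in the request body using basic authentication -->
    <Item Key="ServiceUrl">https://contoso.azurewebsites.net/api/validate</Item>
    <Item Key="SendClaimsIn">Body</Item>
    <Item Key="AuthenticationType">Basic</Item>
  </Metadata>
  <CryptographicKeys>
    <Key Id="BasicAuthenticationUsername" StorageReferenceId="B2C_1A_RestApiUsername" />
    <Key Id="BasicAuthenticationPassword" StorageReferenceId="B2C_1A_RestApiPassword" />
  </CryptographicKeys>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="email" />
  </InputClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="loyaltyNumber" />
  </OutputClaims>
</TechnicalProfile>
```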
## Protocol
-The **Protocol** specifies the protocol to be used for the communication with the other party. The **Protocol** element contains the following attributes:
+The **Protocol** element specifies the protocol to be used for the communication with the other party. The **Protocol** element contains the following attributes:
| Attribute | Required | Description | | | -- | -- |
-| Name | Yes | The name of a valid protocol supported by Azure AD B2C that is used as part of the technical profile. Possible values: `OAuth1`, `OAuth2`, `SAML2`, `OpenIdConnect`, `Proprietary`, or `None`. |
-| Handler | No | When the protocol name is set to `Proprietary`, specify the name of the assembly that is used by Azure AD B2C to determine the protocol handler. |
+| Name | Yes | The name of a valid protocol supported by Azure AD B2C that's used as part of the technical profile. Possible values are `OAuth1`, `OAuth2`, `SAML2`, `OpenIdConnect`, `Proprietary`, or `None`. |
+| Handler | No | When the protocol name is set to `Proprietary`, specifies the name of the assembly that's used by Azure AD B2C to determine the protocol handler. |
## Metadata
-The **Metadata** element contains the relevant configuration options to a specific protocol. The list of supported metadata is documented in the corresponding [technical profile](#type-of-technical-profiles) specification. A **Metadata** element contains the following element:
+The **Metadata** element contains the configuration options relevant to a specific protocol. The list of supported metadata is documented in the corresponding [technical profile](#types-of-technical-profiles) specification. A **Metadata** element contains the following element:
| Element | Occurrences | Description | | - | -- | -- |
-| Item | 0:n | The metadata that relates to the technical profile. Each type of technical profile has a different set of metadata items. For more information, see the technical profile types section. |
+| Item | 0:n | The metadata that relates to the technical profile. Each type of technical profile has a different set of metadata items. For more information, see the technical profile types section. |
### Item
The **Item** element of the **Metadata** element contains the following attribut
| Attribute | Required | Description | | | -- | -- |
-| Key | Yes | The metadata key. See each [technical profile type](#type-of-technical-profiles), for the list of metadata items. |
+| Key | Yes | The metadata key. See each [technical profile type](#types-of-technical-profiles) for the list of metadata items. |
-The following example illustrates the use of metadata relevant to [OAuth2 technical profile](oauth2-technical-profile.md#metadata).
+The following example illustrates the use of metadata relevant to the [OAuth2 technical profile](oauth2-technical-profile.md#metadata).
```xml <TechnicalProfile Id="Facebook-OAUTH">
The following example illustrates the use of metadata relevant to [OAuth2 techni
</TechnicalProfile> ```
-The following example illustrates the use of metadata relevant to [REST API technical profile](restful-technical-profile.md#metadata).
+The following example illustrates the use of metadata relevant to the [REST API technical profile](restful-technical-profile.md#metadata).
```xml <TechnicalProfile Id="REST-Validate-Email">
The following example illustrates the use of metadata relevant to [REST API tech
## Cryptographic keys
-To establish trust with the services it integrates with, Azure AD B2C stores secrets and certificates in the form of [policy keys](policy-keys-overview.md). During the technical profile executing, Azure AD B2C retrieves the cryptographic keys from Azure AD B2C policy keys. Then uses the keys establish trust, encrypt or sign a token. These trusts consist of:
+To establish trust with the services it integrates with, Azure AD B2C stores secrets and certificates in the form of [policy keys](policy-keys-overview.md). During technical profile execution, Azure AD B2C retrieves the cryptographic keys from Azure AD B2C policy keys. Then Azure AD B2C uses the keys to establish trust, or to encrypt or sign a token. These trusts consist of:
-- Federation with [OAuth1](oauth1-technical-profile.md#cryptographic-keys), [OAuth2](oauth2-technical-profile.md#cryptographic-keys), and [SAML](identity-provider-generic-saml.md) identity providers-- Secure the connecting with [REST API services](secure-rest-api.md)-- Signing and encryption the [JWT](jwt-issuer-technical-profile.md#cryptographic-keys) and [SAML](saml-service-provider.md) tokens
+- Federation with [OAuth1](oauth1-technical-profile.md#cryptographic-keys), [OAuth2](oauth2-technical-profile.md#cryptographic-keys), and [SAML](identity-provider-generic-saml.md) identity providers.
+- Securing the connection with [REST API services](secure-rest-api.md).
+- Signing and encrypting the [JWT](jwt-issuer-technical-profile.md#cryptographic-keys) and [SAML](saml-service-provider.md) tokens.
The **CryptographicKeys** element contains the following element:
The **Key** element contains the following attribute:
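For example, an OAuth2 federation technical profile might reference the policy key that stores the identity provider's client secret. A sketch, assuming a policy key named `B2C_1A_FacebookSecret`:

```xml
<CryptographicKeys>
  <!-- client_secret maps the policy key to the key ID the OAuth2 protocol handler expects -->
  <Key Id="client_secret" StorageReferenceId="B2C_1A_FacebookSecret" />
</CryptographicKeys>
```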
## Input claims transformations
-The **InputClaimsTransformations** element may contain a collection of input claims transformation elements that are used to modify input claims or generate new one.
+The **InputClaimsTransformations** element might contain a collection of input claims transformation elements that are used to modify input claims or generate new ones.
-The output claims of a previous claims transformation in the claims transformation collection can be input claims of a subsequent input claims transformation allowing you to have a sequence of claims transformation depending on each other.
+The output claims of a previous claims transformation in the claims transformation collection can be input claims of a subsequent input claims transformation. In this way, you can have a sequence of claims transformations that depend on each other.
The **InputClaimsTransformations** element contains the following element:
The **InputClaimsTransformation** element contains the following attribute:
| | -- | -- | | ReferenceId | Yes | An identifier of a claims transformation already defined in the policy file or parent policy file. |
-The following technical profiles reference to the **CreateOtherMailsFromEmail** claims transformation. The claims transformation adds the value of the `email` claim to the `otherMails` collection, before persisting the data to the directory.
+The following technical profiles reference the **CreateOtherMailsFromEmail** claims transformation. The claims transformation adds the value of the `email` claim to the `otherMails` collection before persisting the data to the directory.
```xml <TechnicalProfile Id="AAD-UserWriteUsingAlternativeSecurityId">
The following technical profiles reference to the **CreateOtherMailsFromEmail**
## Input claims
-The **InputClaims** picks up claims from the claims bag and are used for the technical profile. For example, a [self-asserted technical profile](self-asserted-technical-profile.md) uses the input claims to prepopulate the output claims that the user provides. A REST API technical profile uses the input claims to send input parameters to the REST API endpoint. Azure Active Directory uses input claim as a unique identifier to read, update, or delete an account.
+The **InputClaims** element picks up the claims that the technical profile uses from the claims bag. For example, a [self-asserted technical profile](self-asserted-technical-profile.md) uses the input claims to prepopulate the output claims that the user provides. A REST API technical profile uses the input claims to send input parameters to the REST API endpoint. Azure AD uses an input claim as a unique identifier to read, update, or delete an account.
The **InputClaims** element contains the following element:
The **InputClaim** element contains the following attributes:
| Attribute | Required | Description | | | -- | -- |
-| ClaimTypeReferenceId | Yes | The identifier of a claim type. The claim is already defined in the claims schema section in the policy file, or parent policy file. |
-| DefaultValue | No | A default value to use to create a claim if the claim indicated by ClaimTypeReferenceId does not exist so that the resulting claim can be used as an InputClaim by the technical profile. |
-| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the PartnerClaimType attribute is not specified, then the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. For example, the first claim name is 'givenName', while the partner uses a claim named 'first_name'. |
+| ClaimTypeReferenceId | Yes | The identifier of a claim type. The claim is already defined in the claims schema section in the policy file or parent policy file. |
+| DefaultValue | No | A default value to use to create a claim if the claim indicated by ClaimTypeReferenceId doesn't exist so that the resulting claim can be used as an InputClaim element by the technical profile. |
+| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the PartnerClaimType attribute isn't specified, the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. An example is if the first claim name is *givenName*, while the partner uses a claim named *first_name*. |
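A short sketch using hypothetical claim names:

```xml
<InputClaims>
  <!-- The policy's givenName claim maps to the partner's first_name claim -->
  <InputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="first_name" />
  <!-- If the city claim doesn't exist yet, create it with a default value -->
  <InputClaim ClaimTypeReferenceId="city" DefaultValue="Redmond" />
</InputClaims>
```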
## Display claims
-The **DisplayClaims** element contains a list of claims to be presented on the screen to collect data from the user. In the display claims collection, you can include a reference to a [claim type](claimsschema.md), or a [DisplayControl](display-controls.md) that you've created.
+The **DisplayClaims** element contains a list of claims to be presented on the screen to collect data from the user. In the display claims collection, you can include a reference to a [claim type](claimsschema.md) or a [display control](display-controls.md) that you've created.
-- A claim type is a reference to a claim to be displayed on the screen.
+- A claim type is a reference to a claim to be displayed on the screen.
- To force the user to provide a value for a specific claim, set the **Required** attribute of the **DisplayClaim** element to `true`.
- - To prepopulate the values of display claims, use the input claims that were previously described. The element may also contain a default value.
- - The **ClaimType** element in the **DisplayClaims** collection needs to set the **UserInputType** element to any user input type supported by Azure AD B2C. For example, `TextBox` or `DropdownSingleSelect`.
-- A display control is a user interface element that has special functionality and interacts with the Azure AD B2C back-end service. It allows the user to perform actions on the page that invoke a validation technical profile at the back end. For example, verifying an email address, phone number, or customer loyalty number.
+ - To prepopulate the values of display claims, use the input claims that were previously described. The element might also contain a default value.
+ - The **ClaimType** element in the **DisplayClaims** collection needs to set the **UserInputType** element to any user input type supported by Azure AD B2C. Examples are `TextBox` or `DropdownSingleSelect`.
+- A display control is a user interface element that has special functionality and interacts with the Azure AD B2C back-end service. It allows the user to perform actions on the page that invoke a validation technical profile at the back end. An example is verifying an email address, phone number, or customer loyalty number.
-The order of the elements in **DisplayClaims** specifies the order in which Azure AD B2C renders the claims on the screen.
+The order of the elements in **DisplayClaims** specifies the order in which Azure AD B2C renders the claims on the screen.
The **DisplayClaims** element contains the following element:
The **DisplayClaim** element contains the following attributes:
| DisplayControlReferenceId | No | The identifier of a [display control](display-controls.md) already defined in the ClaimsSchema section in the policy file or parent policy file. | | Required | No | Indicates whether the display claim is required. |
-The following example illustrates the use of display claims and display controls with in a self-asserted technical profile.
+The following example illustrates the use of display claims and display controls in a self-asserted technical profile.
-![A self-asserted technical profile with display claims](./media/technical-profiles/display-claims.png)
+![Screenshot that shows a self-asserted technical profile with display claims.](./media/technical-profiles/display-claims.png)
In the following technical profile: - The first display claim makes a reference to the `emailVerificationControl` display control, which collects and verifies the email address. - The fifth display claim makes a reference to the `phoneVerificationControl` display control, which collects and verifies a phone number.-- The other display claims are ClaimTypes to be collected from the user.
+- The other display claims are ClaimType elements to be collected from the user.
```xml <TechnicalProfile Id="Id">
In the following technical profile:
</TechnicalProfile> ```
-### Persisted claims
+## Persisted claims
-The **PersistedClaims** element contains all of the values that should be persisted by [Azure AD technical profile](active-directory-technical-profile.md) with possible mapping information between a claim type already defined in the [ClaimsSchema](claimsschema.md) section in the policy and the Azure AD attribute name.
+The **PersistedClaims** element contains all of the values that should be persisted by an [Azure AD technical profile](active-directory-technical-profile.md) with possible mapping information between a claim type already defined in the [ClaimsSchema](claimsschema.md) section in the policy and the Azure AD attribute name.
The name of the claim is the name of the [Azure AD attribute](user-profile-attributes.md) unless the **PartnerClaimType** attribute is specified, which contains the Azure AD attribute name.
-The **PersistedClaims** element contains the following elements:
+The **PersistedClaims** element contains the following element:
| Element | Occurrences | Description | | - | -- | -- |
The **PersistedClaim** element contains the following attributes:
| Attribute | Required | Description | | | -- | -- | | ClaimTypeReferenceId | Yes | The identifier of a claim type already defined in the ClaimsSchema section in the policy file or parent policy file. |
-| DefaultValue | No | A default value to use to create a claim if the claim does not exist. |
-| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the PartnerClaimType attribute is not specified, then the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. For example, the first claim name is 'givenName', while the partner uses a claim named 'first_name'. |
+| DefaultValue | No | A default value to use to create a claim if the claim doesn't exist. |
+| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the PartnerClaimType attribute isn't specified, the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. An example is if the first claim name is *givenName*, while the partner uses a claim named *first_name*. |
-In the following example, the **AAD-UserWriteUsingLogonEmail** technical profile or the [starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/SocialAndLocalAccounts), which creates new local account, persists following claims:
+In the following example, the **AAD-UserWriteUsingLogonEmail** technical profile of the [starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/SocialAndLocalAccounts), which creates a new local account, persists the following claims:
```xml <PersistedClaims>
In the following example, the **AAD-UserWriteUsingLogonEmail** technical profile
## Output claims
-The **OutputClaims** are the collection of claims that are returned back to the claims bag after the technical profile is completed. You can use those claims in the next orchestrations step, or output claims transformations. The **OutputClaims** element contains the following element:
+The **OutputClaims** element is a collection of claims that are returned back to the claims bag after the technical profile is completed. You can use those claims in the next orchestration step or in output claims transformations. The **OutputClaims** element contains the following element:
| Element | Occurrences | Description | | - | -- | -- |
The **OutputClaim** element contains the following attributes:
| Attribute | Required | Description | | | -- | -- | | ClaimTypeReferenceId | Yes | The identifier of a claim type already defined in the ClaimsSchema section in the policy file or parent policy file. |
-| DefaultValue | No | A default value to use to create a claim if the claim does not exist. |
-|AlwaysUseDefaultValue |No |Force the use of the default value. |
-| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the partner claim type attribute is not specified, the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. For example, the first claim name is 'givenName', while the partner uses a claim named 'first_name'. |
+| DefaultValue | No | A default value to use to create a claim if the claim doesn't exist. |
+|AlwaysUseDefaultValue |No |Forces the use of the default value. |
+| PartnerClaimType | No | The identifier of the claim type of the external partner that the specified policy claim type maps to. If the partner claim type attribute isn't specified, the specified policy claim type is mapped to the partner claim type of the same name. Use this property when your claim type name is different from the other party. An example is if the first claim name is *givenName*, while the partner uses a claim named *first_name*. |
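A sketch with hypothetical claim names:

```xml
<OutputClaims>
  <OutputClaim ClaimTypeReferenceId="objectId" />
  <!-- Return the partner's signInNames.emailAddress value as the policy's email claim -->
  <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="signInNames.emailAddress" />
  <!-- Always emit false, even if the claim already has a value in the claims bag -->
  <OutputClaim ClaimTypeReferenceId="newUser" DefaultValue="false" AlwaysUseDefaultValue="true" />
</OutputClaims>
```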
## Output claims transformations
-The **OutputClaimsTransformations** element may contain a collection of **OutputClaimsTransformation** elements. The output claims transformations are used to modify the output claims or generate new ones. After execution, the output claims are put back in the claims bag. You can use those claims in the next orchestrations step.
+The **OutputClaimsTransformations** element might contain a collection of **OutputClaimsTransformation** elements. The output claims transformations are used to modify the output claims or generate new ones. After execution, the output claims are put back in the claims bag. You can use those claims in the next orchestration step.
-The output claims of a previous claims transformation in the claims transformation collection can be input claims of a subsequent input claims transformation, allowing you to have a sequence of claims transformation depending on each other.
+The output claims of a previous claims transformation in the claims transformation collection can be input claims of a subsequent input claims transformation. In this way, you can have a sequence of claims transformations that depend on each other.
The **OutputClaimsTransformations** element contains the following element:
The **OutputClaimsTransformation** element contains the following attribute:
| | -- | -- | | ReferenceId | Yes | An identifier of a claims transformation already defined in the policy file or parent policy file. |
-The following technical profile references the AssertAccountEnabledIsTrue claims transformation to evaluate whether the account is enabled or not after reading the `accountEnabled` claim from the directory.
+The following technical profile references the AssertAccountEnabledIsTrue claims transformation to evaluate whether the account is enabled after reading the `accountEnabled` claim from the directory.
```xml <TechnicalProfile Id="AAD-UserReadUsingEmailAddress">
The following technical profile references the AssertAccountEnabledIsTrue claims
## Validation technical profiles
-A validation technical profile is used for validating output claims in a [self-asserted technical profile](self-asserted-technical-profile.md#validation-technical-profiles). A validation technical profile is an ordinary technical profile from any protocol, such as [Azure Active Directory](active-directory-technical-profile.md) or a [REST API](restful-technical-profile.md). The validation technical profile returns output claims, or returns error code. The error message is rendered to the user on screen, allowing the user to retry.
+A validation technical profile is used for validating output claims in a [self-asserted technical profile](self-asserted-technical-profile.md#validation-technical-profiles). A validation technical profile is an ordinary technical profile from any protocol, such as [Azure AD](active-directory-technical-profile.md) or a [REST API](restful-technical-profile.md). The validation technical profile returns output claims, or returns an error code. The error message is rendered to the user on the screen, which allows the user to retry.
-The following diagram illustrates how Azure AD B2C uses a validation technical profile to validate the user credentials
+The following diagram illustrates how Azure AD B2C uses a validation technical profile to validate the user credentials.
-![Diagram validation technical profile flow](./media/technical-profiles/validation-technical-profile.png)
+![Diagram that shows a validation technical profile flow.](./media/technical-profiles/validation-technical-profile.png)
The **ValidationTechnicalProfiles** element contains the following element:
The **ValidationTechnicalProfile** element contains the following attribute:
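As a sketch, a sign-in page might validate the collected credentials like this; the profile IDs are assumptions based on the starter pack:

```xml
<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
  <ValidationTechnicalProfiles>
    <!-- Checks the collected email and password against Azure AD B2C -->
    <ValidationTechnicalProfile ReferenceId="login-NonInteractive" />
  </ValidationTechnicalProfiles>
</TechnicalProfile>
```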
## SubjectNamingInfo
-The **SubjectNamingInfo** defines the subject name used in tokens in a [relying party policy](relyingparty.md#subjectnaminginfo). The **SubjectNamingInfo** contains the following attribute:
+The **SubjectNamingInfo** element defines the subject name used in tokens in a [relying party policy](relyingparty.md#subjectnaminginfo). The **SubjectNamingInfo** element contains the following attribute:
| Attribute | Required | Description | | | -- | -- |
The **SubjectNamingInfo** defines the subject name used in tokens in a [relying
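For example, a relying party technical profile might set the token's subject to the user's object ID. A sketch using starter-pack-style names:

```xml
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignIn" />
  <TechnicalProfile Id="PolicyProfile">
    <DisplayName>PolicyProfile</DisplayName>
    <Protocol Name="OpenIdConnect" />
    <OutputClaims>
      <!-- Emit the directory objectId as the token's sub claim -->
      <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
    </OutputClaims>
    <SubjectNamingInfo ClaimType="sub" />
  </TechnicalProfile>
</RelyingParty>
```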
## Include technical profile
-A technical profile can include another technical profile to change settings or add new functionality. The **IncludeTechnicalProfile** element is a reference to the common technical profile from which a technical profile is derived. To reduce redundancy and complexity of your policy elements, use inclusion when you have multiple technical profiles that share the core elements. Use a common technical profile with the common set of configuration, along with specific task technical profiles that include the common technical profile.
+A technical profile can include another technical profile to change settings or add new functionality. The **IncludeTechnicalProfile** element is a reference to the common technical profile from which a technical profile is derived. To reduce redundancy and complexity of your policy elements, use inclusion when you have multiple technical profiles that share the core elements. Use a common technical profile with the common set of configuration, along with task-specific technical profiles that include the common technical profile.
-Suppose you have a [REST API technical profile](restful-technical-profile.md) with a single endpoint where you need to send different set of claims for different scenarios. Create a common technical profile with the shared functionality, such as, the REST API endpoint URI, metadata, authentication type, and cryptographic keys. Create specific task technical profiles that include the common technical profile. Then add the input claims, output claims, or overwrite the REST API endpoint URI relevant to that technical profile.
+Suppose you have a [REST API technical profile](restful-technical-profile.md) with a single endpoint where you need to send different sets of claims for different scenarios. Create a common technical profile with the shared functionality, such as the REST API endpoint URI, metadata, authentication type, and cryptographic keys. Create specific task technical profiles that include the common technical profile. Then add the input and output claims, or overwrite the REST API endpoint URI relevant to that technical profile.
The **IncludeTechnicalProfile** element contains the following attribute: | Attribute | Required | Description | | | -- | -- |
-| ReferenceId | Yes | An identifier of a technical profile already defined in the policy file, or parent policy file. |
-
+| ReferenceId | Yes | An identifier of a technical profile already defined in the policy file or parent policy file. |
The following example illustrates the use of the inclusion: -- *REST-API-Common* - a common technical profile with the basic configuration.-- *REST-ValidateProfile* - includes the *REST-API-Common* technical profile, and specifies the input and output claims.-- *REST-UpdateProfile* - includes the *REST-API-Common* technical profile, specifies the input claims, and overwrites the `ServiceUrl` metadata.
+- **REST-API-Common**: A common technical profile with the basic configuration.
+- **REST-ValidateProfile**: Includes the **REST-API-Common** technical profile and specifies the input and output claims.
+- **REST-UpdateProfile**: Includes the **REST-API-Common** technical profile, specifies the input claims, and overwrites the `ServiceUrl` metadata.
```xml <ClaimsProvider>
The following example illustrates the use of the inclusion:
</ClaimsProvider> ```
-### Multi level inclusion
+### Multilevel inclusion
-A technical profile can include a single technical profile. There is no limit on the number of levels of inclusion. For example, the **AAD-UserReadUsingAlternativeSecurityId-NoError** technical profile includes the **AAD-UserReadUsingAlternativeSecurityId**. This technical profile sets the `RaiseErrorIfClaimsPrincipalDoesNotExist` metadata item to `true`, and raises an error if a social account does not exist in the directory. **AAD-UserReadUsingAlternativeSecurityId-NoError** overrides this behavior, and disables that error message.
+A technical profile can include a single technical profile. There's no limit on the number of levels of inclusion. For example, the **AAD-UserReadUsingAlternativeSecurityId-NoError** technical profile includes **AAD-UserReadUsingAlternativeSecurityId**. This technical profile sets the `RaiseErrorIfClaimsPrincipalDoesNotExist` metadata item to `true` and raises an error if a social account doesn't exist in the directory. **AAD-UserReadUsingAlternativeSecurityId-NoError** overrides this behavior and disables that error message.
```xml <TechnicalProfile Id="AAD-UserReadUsingAlternativeSecurityId-NoError">
A technical profile can include a single technical profile. There is no limit on
</TechnicalProfile> ```
-Both **AAD-UserReadUsingAlternativeSecurityId-NoError** and **AAD-UserReadUsingAlternativeSecurityId** don't specify the required **Protocol** element, because it's specified in the **AAD-Common** technical profile.
+Both **AAD-UserReadUsingAlternativeSecurityId-NoError** and **AAD-UserReadUsingAlternativeSecurityId** don't specify the required **Protocol** element because it's specified in the **AAD-Common** technical profile.
```xml <TechnicalProfile Id="AAD-Common">
Both **AAD-UserReadUsingAlternativeSecurityId-NoError** and **AAD-UserReadUsing
## Use technical profile for session management
-The **UseTechnicalProfileForSessionManagement** element reference to [Single sign-on session technical profile](custom-policy-reference-sso.md). The **UseTechnicalProfileForSessionManagement** element contains the following attribute:
+The **UseTechnicalProfileForSessionManagement** element references the [SSO session technical profile](custom-policy-reference-sso.md). The **UseTechnicalProfileForSessionManagement** element contains the following attribute:
| Attribute | Required | Description | | | -- | -- |
The **UseTechnicalProfileForSessionManagement** element reference to [Single sig
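A minimal sketch, assuming a session management technical profile named `SM-AAD`, as in the starter pack:

```xml
<TechnicalProfile Id="AAD-UserWriteUsingLogonEmail">
  <!-- Delegate session handling to the SM-AAD session management technical profile -->
  <UseTechnicalProfileForSessionManagement ReferenceId="SM-AAD" />
</TechnicalProfile>
```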
## Enabled for user journeys
-The [ClaimsProviderSelections](userjourneys.md#claims-provider-selection) in a user journey defines the list of claims provider selection options and their order. With the **EnabledForUserJourneys** element you filter, which claims provider is available to the user. The **EnabledForUserJourneys** element contains one of the following values:
+The [ClaimsProviderSelections](userjourneys.md#claims-provider-selection) in a user journey defines the list of claims provider selection options and their order. With the **EnabledForUserJourneys** element, you filter which claims provider is available to the user. The **EnabledForUserJourneys** element contains one of the following values:
-- **Always**, execute the technical profile.-- **Never**, skip the technical profile.-- **OnClaimsExistence** execute only when a certain claim, specified in the technical profile exists.-- **OnItemExistenceInStringCollectionClaim**, execute only when an item exists in a string collection claim.-- **OnItemAbsenceInStringCollectionClaim** execute only when an item does not exist in a string collection claim.
+- **Always**: Executes the technical profile.
+- **Never**: Skips the technical profile.
+- **OnClaimsExistence**: Executes only when a certain claim specified in the technical profile exists.
+- **OnItemExistenceInStringCollectionClaim**: Executes only when an item exists in a string collection claim.
+- **OnItemAbsenceInStringCollectionClaim**: Executes only when an item doesn't exist in a string collection claim.
-Using **OnClaimsExistence**, **OnItemExistenceInStringCollectionClaim**, or **OnItemAbsenceInStringCollectionClaim**, requires you to provide the following metadata:
+Using **OnClaimsExistence**, **OnItemExistenceInStringCollectionClaim**, or **OnItemAbsenceInStringCollectionClaim** requires you to provide the following metadata:
-- **ClaimTypeOnWhichToEnable** - specifies the claim's type that is to be evaluated.-- **ClaimValueOnWhichToEnable** - specifies the value that is to be compared.
+- **ClaimTypeOnWhichToEnable**: Specifies the claim type to be evaluated.
+- **ClaimValueOnWhichToEnable**: Specifies the value to be compared.
The following technical profile is executed only if the **identityProviders** string collection contains the value of `facebook.com`:
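A hedged reconstruction of such a profile; the profile ID and display name are assumptions:

```xml
<TechnicalProfile Id="UnLink-Facebook-OAUTH">
  <DisplayName>Unlink Facebook account</DisplayName>
  <Metadata>
    <!-- The string collection claim to evaluate -->
    <Item Key="ClaimTypeOnWhichToEnable">identityProviders</Item>
    <!-- The item that must exist in the collection for this profile to run -->
    <Item Key="ClaimValueOnWhichToEnable">facebook.com</Item>
  </Metadata>
  <EnabledForUserJourneys>OnItemExistenceInStringCollectionClaim</EnabledForUserJourneys>
</TechnicalProfile>
```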
active-directory-b2c Troubleshoot With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/troubleshoot-with-application-insights.md
Previously updated : 10/16/2020 Last updated : 03/10/2021
If you don't already have one, create an instance of Application Insights in you
UserJourneyRecorderEndpoint="urn:journeyrecorder:applicationinsights" ```
-1. If it doesn't already exist, add a `<UserJourneyBehaviors>` child node to the `<RelyingParty>` node. It must be located immediately after `<DefaultUserJourney ReferenceId="UserJourney Id" from your extensions policy, or equivalent (for example:SignUpOrSigninWithAAD" />`.
+1. If it doesn't already exist, add a `<UserJourneyBehaviors>` child node to the `<RelyingParty>` node. It must be located after the `<DefaultUserJourney>` element from your extensions policy or equivalent, for example, `<DefaultUserJourney ReferenceId="SignUpOrSigninWithAAD" />`.
1. Add the following node as a child of the `<UserJourneyBehaviors>` element. Make sure to replace `{Your Application Insights Key}` with the Application Insights **Instrumentation Key** that you recorded earlier. ```xml
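<!-- A sketch of the node to add, assuming the JourneyInsights element with its
     typical development settings; replace the key placeholder with your own. -->
<JourneyInsights TelemetryEngine="ApplicationInsights"
                 InstrumentationKey="{Your Application Insights Key}"
                 DeveloperMode="true" ClientEnabled="false"
                 ServerEnabled="true" TelemetryVersion="1.0.0" />
```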
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-flow-custom-attributes.md
Previously updated : 03/04/2021 Last updated : 03/10/2021
Your Azure AD B2C directory comes with a [built-in set of attributes](user-profi
* An identity provider has a unique user identifier, **uniqueUserGUID**, that must be persisted. * A custom user journey needs to persist the state of the user, **migrationStatus**, for other logic to operate on.
+The terms *extension property*, *custom attribute*, and *custom claim* refer to the same thing in the context of this article. The name varies depending on the context, such as application, object, or policy.
+ Azure AD B2C allows you to extend the set of attributes stored on each user account. You can also read and write these attributes by using the [Microsoft Graph API](microsoft-graph-operations.md). ## Prerequisites
Once you've created a new user using a user flow, which uses the newly created c
## Azure AD B2C extensions app
-Extension attributes can only be registered on an application object, even though they might contain data for a user. The extension attribute is attached to the application called b2c-extensions-app. Do not modify this application, as it's used by Azure AD B2C for storing user data. You can find this application under Azure AD B2C, app registrations.
-
-The terms *extension property*, *custom attribute*, and *custom claim* refer to the same thing in the context of this article. The name varies depending on the context, such as application, object, or policy.
-
-## Get the application properties
+Extension attributes can only be registered on an application object, even though they might contain data for a user. The extension attribute is attached to the application called `b2c-extensions-app`. Do not modify this application, as it's used by Azure AD B2C for storing user data. You can find this application under Azure AD B2C, app registrations. Get the application properties:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
The terms *extension property*, *custom attribute*, and *custom claim* refer to
* **Application ID**. Example: `11111111-1111-1111-1111-111111111111`. * **Object ID**. Example: `22222222-2222-2222-2222-222222222222`.
-## Using custom attribute with MS Graph API
-
-Microsoft Graph API supports creating and updating a user with extension attributes. Extension attributes in the Graph API are named by using the convention `extension_ApplicationClientID_attributename`, where the `ApplicationClientID` is the **Application (client) ID** of the `b2c-extensions-app` application. Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example:
-
-```json
-"extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyNumber": "212342"
-```
- ::: zone pivot="b2c-custom-policy" ## Modify your custom policy
The following example demonstrates the use of a custom attribute in Azure AD B2C
::: zone-end
+## Using custom attribute with MS Graph API
+
+Microsoft Graph API supports creating and updating a user with extension attributes. Extension attributes in the Graph API are named by using the convention `extension_ApplicationClientID_attributename`, where the `ApplicationClientID` is the **Application (client) ID** of the `b2c-extensions-app` application. Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example:
+
+```json
+"extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyId": "212342"
+```
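For example, a sketch of writing this attribute to an existing user with a Microsoft Graph `PATCH` call (the user ID is illustrative):

```http
PATCH https://graph.microsoft.com/v1.0/users/{user-id}
Content-type: application/json

{
  "extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyId": "212342"
}
```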
+ ## Next steps Follow the guidance for how to [add claims and customize user input using custom policies](configure-user-input.md). This sample uses a built-in claim 'city'. To use a custom attribute, replace 'city' with your own custom attributes.
active-directory-domain-services Administration Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/administration-concepts.md
Previously updated : 06/05/2020 Last updated : 03/10/2021
In Azure AD DS, the available performance and features are based on the SKU. You
| SKU name | Maximum object count | Backup frequency | Maximum number of outbound forest trusts |
| -- | -- | -- | -- |
-| Standard | Unlimited | Every 7 days | 0 |
+| Standard | Unlimited | Every 5 days | 0 |
| Enterprise | Unlimited | Every 3 days | 5 |
| Premium | Unlimited | Daily | 10 |
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/fido2-compatibility.md
The following are the minimum browser version requirements.
| - | - |
| Chrome | 76 |
| Edge | Windows 10 version 1903<sup>1</sup> |
-| Firefox | Chrome |
-| ChromeOS | 66 |
+| Firefox | 66 |
<sup>1</sup>All versions of the new Chromium-based Microsoft Edge support FIDO2. Support on Microsoft Edge legacy was added in 1903.
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
Previously updated : 08/03/2020 Last updated : 03/04/2021
The following options are available to include when creating a Conditional Acces
- All guest and external users - This selection includes any B2B guests and external users including any user with the `user type` attribute set to `guest`. This selection also applies to any external user signed-in from a different organization like a Cloud Solution Provider (CSP). - Directory roles
- - Allows administrators to select specific Azure AD directory roles used to determine assignment. For example, organizations may create a more restrictive policy on users assigned the global administrator role.
+ - Allows administrators to select specific built-in Azure AD directory roles used to determine policy assignment. For example, organizations may create a more restrictive policy on users assigned the global administrator role. Other role types are not supported, including administrative unit-scoped directory roles and custom roles.
- Users and groups - Allows targeting of specific sets of users. For example, organizations can select a group that contains all members of the HR department when an HR app is selected as the cloud app. A group can be any type of group in Azure AD, including dynamic or assigned security and distribution groups. Policy will be applied to nested users and groups.
active-directory Howto Conditional Access Policy Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md
Previously updated : 08/03/2020 Last updated : 03/04/2021
The following steps will help create a Conditional Access policy to require thos
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users and groups**
- 1. Under **Include**, select **Directory roles (preview)** and choose the following roles at a minimum:
+ 1. Under **Include**, select **Directory roles** and choose built-in roles like:
* Authentication Administrator * Billing administrator * Conditional Access administrator
The following steps will help create a Conditional Access policy to require thos
* User administrator > [!WARNING]
- > Conditional Access policies do not support users assigned a directory role [scoped to an administrative unit](../roles/admin-units-assign-roles.md) or directory roles scoped directly to an object, like through [custom roles](../roles/custom-create.md).
+ > Conditional Access policies support built-in roles only. They are not enforced for other role types, including [administrative unit-scoped](../roles/admin-units-assign-roles.md) or [custom roles](../roles/custom-create.md).
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**. 1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**, and select **Done**.
-1. Under **Conditions** > **Client apps**, switch **Configure** to **Yes** and under **Select the client apps this policy will apply to** leave all defaults selected and select **Done**.
1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and select **Select**. 1. Confirm your settings and set **Enable policy** to **On**. 1. Select **Create** to create and enable your policy.
active-directory Active Directory Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-graph-api.md
- Title: Azure Active Directory Graph API
-description: An overview and quickstart guide for Azure AD Graph API, which allows programmatic access to Azure AD through REST API endpoints.
-------- Previously updated : 11/26/2019-----
-# Azure Active Directory Graph API
-
-> [!IMPORTANT]
-> We strongly recommend that you use [Microsoft Graph](https://developer.microsoft.com/graph) instead of Azure AD Graph API to access Azure Active Directory (Azure AD) resources. Our development efforts are now concentrated on Microsoft Graph and no further enhancements are planned for Azure AD Graph API. There are a very limited number of scenarios for which Azure AD Graph API might still be appropriate; for more information, see the [Microsoft Graph or the Azure AD Graph](https://developer.microsoft.com/office/blogs/microsoft-graph-or-azure-ad-graph/) blog post and [Migrate Azure AD Graph apps to Microsoft Graph](/graph/migrate-azure-ad-graph-planning-checklist).
-
-This article applies to Azure AD Graph API. For similar info related to Microsoft Graph API, see [Use the Microsoft Graph API](/graph/use-the-api).
-
-The Azure Active Directory Graph API provides programmatic access to Azure AD through REST API endpoints. Applications can use Azure AD Graph API to perform create, read, update, and delete (CRUD) operations on directory data and objects. For example, Azure AD Graph API supports the following common operations for a user object:
-
-* Create a new user in a directory
-* Get a user's detailed properties, such as their groups
-* Update a user's properties, such as their location and phone number, or change their password
-* Check a user's group membership for role-based access
-* Disable a user's account or delete it entirely
-
-Additionally, you can perform similar operations on other objects such as groups and applications. To call Azure AD Graph API on a directory, your application must be registered with Azure AD. Your application must also be granted access to Azure AD Graph API. This access is normally achieved through a user or admin consent flow.
-
-To begin using the Azure Active Directory Graph API, see the [Azure AD Graph API quickstart guide](./microsoft-graph-intro.md), or view the [interactive Azure AD Graph API reference documentation](/previous-versions/azure/ad/graph/api/api-catalog).
-
-## Features
-
-Azure AD Graph API provides the following features:
-
-* **REST API Endpoints**: Azure AD Graph API is a RESTful service comprised of endpoints that are accessed using standard HTTP requests. Azure AD Graph API supports XML or Javascript Object Notation (JSON) content types for requests and responses. For more information, see [Azure AD Graph REST API reference](/previous-versions/azure/ad/graph/api/api-catalog).
-* **Authentication with Azure AD**: Every request to Azure AD Graph API must be authenticated by appending a JSON Web Token (JWT) in the Authorization header of the request. This token is acquired by making a request to Azure AD's token endpoint and providing valid credentials. You can use the OAuth 2.0 client credentials flow or the authorization code grant flow to acquire a token to call the Graph. For more information, [OAuth 2.0 in Azure AD](/previous-versions/azure/dn645545(v=azure.100)).
-* **Role-Based Authorization (RBAC)**: Security groups are used to perform RBAC in Azure AD Graph API. For example, if you want to determine whether a user has access to a specific resource, the application can call the [Check group membership (transitive)](/previous-versions/azure/ad/graph/api/functions-and-actions#checkMemberGroups) operation, which returns true or false.
-* **Differential Query**: Differential query allows you to track changes in a directory between two time periods without having to make frequent queries to Azure AD Graph API. This type of request will return only the changes made between the previous differential query request and the current request. For more information, see [Azure AD Graph API differential query](/previous-versions/azure/ad/graph/howto/azure-ad-graph-api-differential-query).
-* **Directory Extensions**: You can add custom properties to directory objects without requiring an external data store. For example, if your application requires a Skype ID property for each user, you can register the new property in the directory and it will be available for use on every user object. For more information, see [Azure AD Graph API directory schema extensions](/previous-versions/azure/ad/graph/howto/azure-ad-graph-api-directory-schema-extensions).
-* **Secured by permission scopes**: Azure AD Graph API exposes permission scopes that enable secure access to Azure AD data using OAuth 2.0. It supports a variety of client app types, including:
-
- * user interfaces that are given delegated access to data via authorization from the signed-in user (delegated)
- * service/daemon applications that operate in the background without a signed-in user being present and use application-defined role-based access control
-
- Both delegated and application permissions represent a privilege exposed by the Azure AD Graph API and can be requested by client applications through application registration permissions features in the [Azure portal](https://portal.azure.com). [Azure AD Graph API permission scopes](/previous-versions/azure/ad/graph/howto/azure-ad-graph-api-permission-scopes) provides information on what's available for use by your client application.
-
-## Scenarios
-
-Azure AD Graph API enables many application scenarios. The following scenarios are the most common:
-
-* **Line of Business (Single Tenant) Application**: In this scenario, an enterprise developer works for an organization that has an Office 365 subscription. The developer is building a web application that interacts with Azure AD to perform tasks such as assigning a license to a user. This task requires access to the Azure AD Graph API, so the developer registers the single tenant application in Azure AD and configures read and write permissions for Azure AD Graph API. Then the application is configured to use either its own credentials or those of the currently sign-in user to acquire a token to call the Azure AD Graph API.
-* **Software as a Service Application (Multi-Tenant)**: In this scenario, an independent software vendor (ISV) is developing a hosted multi-tenant web application that provides user management features for other organizations that use Azure AD. These features require access to directory objects, so the application needs to call the Azure AD Graph API. The developer registers the application in Azure AD, configures it to require read and write permissions for Azure AD Graph API, and then enables external access so that other organizations can consent to use the application in their directory. When a user in another organization authenticates to the application for the first time, they are shown a consent dialog with the permissions the application is requesting. Granting consent will then give the application those requested permissions to Azure AD Graph API in the user's directory. For more information on the consent framework, see [Overview of the consent framework](consent-framework.md).
-
-## Next steps
-
-To begin using the Azure Active Directory Graph API, see the following topics:
-
-* [Azure AD Graph API quickstart guide](./microsoft-graph-intro.md)
-* [Azure AD Graph REST documentation](/previous-versions/azure/ad/graph/api/api-catalog)
active-directory Howto Restrict Your App To A Set Of Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
Once you've configured your app to enable user assignment, you can go ahead and
- [How to: Add app roles in your application](./howto-add-app-roles-in-azure-ad-apps.md) - [Add authorization using app roles & roles claims to an ASP.NET Core web app](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/5-WebApp-AuthZ/5-1-Roles)-- [Using Security Groups and Application Roles in your apps (Video)](https://www.youtube.com/watch?v=V8VUPixLSiM)
+- [Using Security Groups and Application Roles in your apps (Video)](https://www.youtube.com/watch?v=LRoc-na27l0)
- [Azure Active Directory, now with Group Claims and Application Roles](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Azure-Active-Directory-now-with-Group-Claims-and-Application/ba-p/243862) - [Azure Active Directory app manifest](./reference-app-manifest.md)
active-directory Msal Net Use Brokers With Xamarin Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-use-brokers-with-xamarin-apps.md
The forward-slash (`/`) in front of the signature in the `android:path` value is
android:path="/hgbUYHVBYUTvuvT&Y6tr554365466="/> ```
+For more information about configuring your application for system browser and Android 11 support, see [Update the Android manifest for system browser support](msal-net-xamarin-android-considerations.md#update-the-android-manifest).
+ As an alternative, you can configure MSAL to fall back to the embedded browser, which doesn't rely on a redirect URI: ```csharp
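// A sketch, assuming an existing IPublicClientApplication named "app", a scopes
// array, and the current Android activity. WithUseEmbeddedWebView(true) makes
// MSAL use the embedded browser instead of the system browser, so no intent
// filter for a redirect URI is needed.
AuthenticationResult result = await app.AcquireTokenInteractive(scopes)
    .WithParentActivityOrWindow(activity)
    .WithUseEmbeddedWebView(true)
    .ExecuteAsync();
```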
Here are a few tips on avoiding issues when you implement brokered authenticatio
Example: If you first install Microsoft Authenticator and then install Intune Company Portal, brokered authentication will *only* happen on the Microsoft Authenticator. - **Logs** - If you encounter an issue with brokered authentication, viewing the broker's logs might help you diagnose the cause.
- - View Microsoft Authenticator logs:
+ - Get Microsoft Authenticator logs:
1. Select the menu button in the top-right corner of the app.
- 1. Select **Help** > **Send Logs** > **View Logs**.
- 1. Select **Copy All** to copy the broker logs to the device's clipboard.
+ 1. Select **Send Feedback** > **Having Trouble?**.
+ 1. Under **What are you trying to do?**, select an option and add a description.
+ 1. To send the logs, select the arrow in the top-right corner of the app.
- The best way to debug with these logs is to email them to yourself and view them on your development machine. You might find it easier to parse the logs on your computer instead of on the device itself. You can also use a test editor on Android to save the logs as a text file, and then use a USB cable to copy the file to a computer.
+ After you send the logs, a dialog box displays the incident ID. Record the incident ID, and include it when you request assistance.
- - View Intune Company Portal logs:
+ - Get Intune Company Portal logs:
- 1. Select the menu button on the top-left corner of the app
- 1. Select **Settings** > **Diagnostic Data**
- 1. Select **Copy Logs** to copy the broker logs to the device's SD card.
- 1. Connect the device to a computer by using a USB cable to view the logs on your development machine.
+ 1. Select the menu button on the top-left corner of the app.
+ 1. Select **Help** > **Email Support**.
+ 1. To send the logs, select **Upload Logs Only**.
- Once you have the logs, you can search through them for your authentication attempts via correlation ID. The correlation ID is attached to every authentication request. To find errors returned by the Microsoft identity platform authentication endpoint, search for `AADSTS`.
+ After you send the logs, a dialog box displays the incident ID. Record the incident ID, and include it when you request assistance.
## Next steps
active-directory Msal Net Xamarin Android Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-xamarin-android-considerations.md
protected override void OnActivityResult(int requestCode,
} ```
-## Update the Android manifest
-
-The *AndroidManifest.xml* file should contain the following values:
-
-```XML
- <!--Intent filter to capture System Browser or Authenticator calling back to our app after sign-in-->
- <activity
- android:name="microsoft.identity.client.BrowserTabActivity">
- <intent-filter>
- <action android:name="android.intent.action.VIEW" />
- <category android:name="android.intent.category.DEFAULT" />
- <category android:name="android.intent.category.BROWSABLE" />
- <data android:scheme="msauth"
- android:host="Enter_the_Package_Name"
- android:path="/Enter_the_Signature_Hash" />
- </intent-filter>
- </activity>
+## Update the Android manifest for System WebView support
+
+To support System WebView, the *AndroidManifest.xml* file should contain the following values:
+
+```xml
+<activity android:name="microsoft.identity.client.BrowserTabActivity" android:configChanges="orientation|screenSize">
+ <intent-filter>
+ <action android:name="android.intent.action.VIEW" />
+ <category android:name="android.intent.category.DEFAULT" />
+ <category android:name="android.intent.category.BROWSABLE" />
+ <data android:scheme="msal{Client Id}" android:host="auth" />
+ </intent-filter>
+</activity>
```
-Substitute the package name that you registered in the Azure portal for the `android:host=` value. Substitute the key hash that you registered in the Azure portal for the `android:path=` value. The signature hash should *not* be URL encoded. Ensure that a leading forward slash (`/`) appears at the beginning of your signature hash.
+The `android:scheme` value is created from the redirect URI that's configured in the application portal. For example, if your redirect URI is `msal4a1aa1d5-c567-49d0-ad0b-cd957a47f842://auth`, the `android:scheme` entry in the manifest would look like this example:
+
+```xml
+<data android:scheme="msal4a1aa1d5-c567-49d0-ad0b-cd957a47f842" android:host="auth" />
+```
Alternatively, [create the activity in code](/xamarin/android/platform/android-manifest#the-basics) rather than manually editing *AndroidManifest.xml*. To create the activity in code, first create a class that includes the `Activity` attribute and the `IntentFilter` attribute.
Here's an example of a class that represents the values of the XML file:
} ```
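A sketch of such a class, assuming Xamarin.Android and MSAL's `BrowserTabActivity` base type; replace the `DataScheme` value with your application's redirect URI scheme:

```csharp
using Android.App;
using Android.Content;
using Android.Content.PM;
using Microsoft.Identity.Client; // provides BrowserTabActivity

[Activity(ConfigurationChanges = ConfigChanges.Orientation | ConfigChanges.ScreenSize)]
[IntentFilter(new[] { Intent.ActionView },
    Categories = new[] { Intent.CategoryDefault, Intent.CategoryBrowsable },
    DataScheme = "msal{Client Id}",
    DataHost = "auth")]
public class MsalActivity : BrowserTabActivity
{
}
```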
+### Use System WebView in brokered authentication
+
+To use System WebView as a fallback for interactive authentication when your application is configured for brokered authentication but the device doesn't have a broker installed, enable MSAL to capture the authentication response by using the broker's redirect URI. When MSAL detects that the broker is unavailable, it tries to authenticate by using the default System WebView on the device. By default this attempt fails, because the redirect URI is configured for the broker and System WebView doesn't know how to use it to return to MSAL. To resolve this, create an _intent filter_ by using the broker redirect URI that you configured earlier. Add the intent filter by modifying your application's manifest like this example:
+
+```xml
+<!--Intent filter to capture System WebView or Authenticator calling back to our app after sign-in-->
+<activity
+ android:name="microsoft.identity.client.BrowserTabActivity">
+ <intent-filter>
+ <action android:name="android.intent.action.VIEW" />
+ <category android:name="android.intent.category.DEFAULT" />
+ <category android:name="android.intent.category.BROWSABLE" />
+ <data android:scheme="msauth"
+ android:host="Enter_the_Package_Name"
+ android:path="/Enter_the_Signature_Hash" />
+ </intent-filter>
+</activity>
+```
+
+Substitute the package name that you registered in the Azure portal for the `android:host=` value. Substitute the key hash that you registered in the Azure portal for the `android:path=` value. The signature hash should *not* be URL encoded. Ensure that a leading forward slash (`/`) appears at the beginning of your signature hash.
+ ### Xamarin.Forms 4.3.x manifest Xamarin.Forms 4.3.x generates code that sets the `package` attribute to `com.companyname.{appName}` in *AndroidManifest.xml*. If you use `DataScheme` as `msal{client_id}`, then you might want to change the value to match the value of the `MainActivity.cs` namespace.
+## Android 11 support
+
+To use the system browser and brokered authentication in Android 11, you must first declare these packages, so they are visible to the app. Apps that target Android 10 (API 29) and earlier can query the OS for a list of packages that are available on the device at any given time. To support privacy and security, Android 11 reduces package visibility to a default list of OS packages and the packages that are specified in the app's *AndroidManifest.xml* file.
+
+To enable the application to authenticate by using both the system browser and the broker, add the following section to *AndroidManifest.xml*:
+
+```xml
+<!-- Required for API Level 30 to make sure the app can detect browsers and other apps where communication is needed.-->
+<!--https://developer.android.com/training/basics/intents/package-visibility-use-cases-->
+<queries>
+ <package android:name="com.azure.authenticator" />
+ <package android:name="{Package Name}" />
+ <package android:name="com.microsoft.windowsintune.companyportal" />
+ <!-- Required for API Level 30 to make sure the app detect browsers
+ (that don't support custom tabs) -->
+ <intent>
+ <action android:name="android.intent.action.VIEW" />
+ <category android:name="android.intent.category.BROWSABLE" />
+ <data android:scheme="https" />
+ </intent>
+ <!-- Required for API Level 30 to make sure the app can detect browsers that support custom tabs -->
+ <!-- https://developers.google.com/web/updates/2020/07/custom-tabs-android-11#detecting_browsers_that_support_custom_tabs -->
+ <intent>
+ <action android:name="android.support.customtabs.action.CustomTabsService" />
+ </intent>
+</queries>
+```
+
+Replace `{Package Name}` with the application package name.
+
+Your updated manifest, which now includes support for the system browser and brokered authentication, should look similar to this example:
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<manifest xmlns:android="http://schemas.android.com/apk/res/android" android:versionCode="1" android:versionName="1.0" package="com.companyname.XamarinDev">
+ <uses-sdk android:minSdkVersion="21" android:targetSdkVersion="30" />
+ <uses-permission android:name="android.permission.INTERNET" />
+ <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
+ <application android:theme="@android:style/Theme.NoTitleBar">
+ <activity android:name="microsoft.identity.client.BrowserTabActivity" android:configChanges="orientation|screenSize">
+ <intent-filter>
+ <action android:name="android.intent.action.VIEW" />
+ <category android:name="android.intent.category.DEFAULT" />
+ <category android:name="android.intent.category.BROWSABLE" />
+ <data android:scheme="msal4a1aa1d5-c567-49d0-ad0b-cd957a47f842" android:host="auth" />
+ </intent-filter>
+ <intent-filter>
+ <action android:name="android.intent.action.VIEW" />
+ <category android:name="android.intent.category.DEFAULT" />
+ <category android:name="android.intent.category.BROWSABLE" />
+ <data android:scheme="msauth" android:host="com.companyname.XamarinDev" android:path="/Fc4l/5I4mMvLnF+l+XopDuQ2gEM=" />
+ </intent-filter>
+ </activity>
+ </application>
+ <!-- Required for API Level 30 to make sure we can detect browsers and other apps we want to
+ be able to talk to.-->
+ <!--https://developer.android.com/training/basics/intents/package-visibility-use-cases-->
+ <queries>
+ <package android:name="com.azure.authenticator" />
+ <package android:name="com.companyname.xamarindev" />
+ <package android:name="com.microsoft.windowsintune.companyportal" />
+ <!-- Required for API Level 30 to make sure we can detect browsers
+ (that don't support custom tabs) -->
+ <intent>
+ <action android:name="android.intent.action.VIEW" />
+ <category android:name="android.intent.category.BROWSABLE" />
+ <data android:scheme="https" />
+ </intent>
+ <!-- Required for API Level 30 to make sure we can detect browsers that support custom tabs -->
+ <!-- https://developers.google.com/web/updates/2020/07/custom-tabs-android-11#detecting_browsers_that_support_custom_tabs -->
+ <intent>
+ <action android:name="android.support.customtabs.action.CustomTabsService" />
+ </intent>
+ </queries>
+</manifest>
+```
+ ## Use the embedded web view (optional) By default, MSAL.NET uses the system web browser. This browser enables you to get single sign-on (SSO) by using web applications and other apps. In some rare cases, you might want your system to use an embedded web view.
active-directory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/faq.md
See below on how these actions can be rectified.
### Q: I cannot add more than 3 Azure AD user accounts under the same user session on a Windows 10 device, why?
-**A**: Azure AD added support for multiple Azure AD accounts in Windows 10 1803 release. However, Windows 10 restricts the number of Azure AD accounts on a device to 3 to limit the size of token requests and enable reliable single sign on (SSO). Once 3 accounts have been added, users will see an error for subsequent accounts. The Additional problem information on the error screen provides the following message indicating the reason - "Add account operation is blocked because accout limit is reached".
+**A**: Azure AD added support for multiple Azure AD accounts in Windows 10 1803 release. However, Windows 10 restricts the number of Azure AD accounts on a device to 3 to limit the size of token requests and enable reliable single sign on (SSO). Once 3 accounts have been added, users will see an error for subsequent accounts. The Additional problem information on the error screen provides the following message indicating the reason - "Add account operation is blocked because account limit is reached".
## Azure AD join FAQ
active-directory Api Connectors Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/api-connectors-overview.md
# Use API connectors to customize and extend self-service sign-up ## Overview
-As a developer or IT administrator, you can use API connectors to integrate your [self-service sign-up user flows](self-service-sign-up-overview.md) with external systems by leveraging web APIs. For example, you can use API connectors to:
+As a developer or IT administrator, you can use API connectors to integrate your [self-service sign-up user flows](self-service-sign-up-overview.md) with web APIs to customize the sign-up experience and integrate with external systems. For example, with API connectors, you can:
-- [**Integrate with a custom approval workflow**](self-service-sign-up-add-approvals.md). Connect to a custom approval system for managing account creation.
+- [**Integrate with a custom approval workflow**](self-service-sign-up-add-approvals.md). Connect to a custom approval system for managing and limiting account creation.
- [**Perform identity verification**](code-samples-self-service-sign-up.md#identity-verification). Use an identity verification service to add an extra level of security to account creation decisions. - **Validate user input data**. Validate against malformed or invalid user data. For example, you can validate user-provided data against existing data in an external data store or list of permitted values. If invalid, you can ask a user to provide valid data or block the user from continuing the sign-up flow. - **Overwrite user attributes**. Reformat or assign a value to an attribute collected from the user. For example, if a user enters the first name in all lowercase or all uppercase letters, you can format the name with only the first letter capitalized.
-<!-
- **Run custom business logic**. You can trigger downstream events in your cloud systems to send push notifications, update corporate databases, manage permissions, audit databases, and perform other custom actions.
-An API connector provides Azure Active Directory with the information needed to call API endpoint by defining the HTTP endpoint URL and authentication. Once you configure an API connector, you can enable it for a specific step in a user flow. When a user reaches that step in the sign up flow, the API connector is invoked and materializes as an HTTP POST request to your API, sending user information ("claims") as key-value pairs in a JSON body. The API response can affect the execution of the user flow. For example, the API response can block a user from signing up, ask the user to re-enter information, or overwrite and append user attributes.
+An API connector provides Azure Active Directory with the information needed to call an API endpoint by defining the HTTP endpoint URL and authentication for the API call. Once you configure an API connector, you can enable it for a specific step in a user flow. When a user reaches that step in the sign-up flow, the API connector is invoked and materializes as an HTTP POST request to your API, sending user information ("claims") as key-value pairs in a JSON body. The API response can affect the execution of the user flow. For example, the API response can block a user from signing up, ask the user to re-enter information, or overwrite and append user attributes.
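For example, the JSON body of that request might look like this sketch; the exact claims sent depend on the attributes configured for your user flow:

```json
{
  "email": "johnsmith@fabrikam.onmicrosoft.com",
  "displayName": "John Smith",
  "city": "Redmond",
  "ui_locales": "en-US"
}
```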
## Where you can enable an API connector in a user flow
There are two places in a user flow where you can enable an API connector:
### After signing in with an identity provider
-An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (Google, Facebook, Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes. The following are examples of API connector scenarios you might enable at this step:
+An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (like Google, Facebook, or Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes. This step is not invoked if a user is registering with a local account. The following are examples of API connector scenarios you might enable at this step:
- Use the email or federated identity that the user provided to look up claims in an existing system. Return these claims from the existing system, pre-fill the attribute collection page, and make them available to return in the token.-- Validate whether the user is included in an allow or deny list, and control whether they can continue with the sign-up flow.
+- Implement an allow or block list based on social identity.
### Before creating the user
-An API connector at this step in the sign-up process is invoked after the attribute collection page, if one is included. This step is always invoked before a user account is created in Azure AD. The following are examples of scenarios you might enable at this point during sign-up:
+An API connector at this step in the sign-up process is invoked after the attribute collection page, if one is included. This step is always invoked before a user account is created. The following are examples of scenarios you might enable at this point during sign-up:
- Validate user input data and ask a user to resubmit data. - Block a user sign-up based on data entered by the user. - Perform identity verification. - Query external systems for existing data about the user to return it in the application token or store it in Azure AD.
-<!-- > [!IMPORTANT]
-> If an invalid response is returned or another error occurs (for example, a network error), the user will be redirected to the app with the error re -->
- ## Next steps - Learn how to [add an API connector to a user flow](self-service-sign-up-add-api-connector.md) - Learn how to [add a custom approval system to self-service sign-up](self-service-sign-up-add-approvals.md)
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
When a user clicks the **Accept invitation** link in an [invitation email](invit
![Screenshot showing the redemption flow diagram](media/redemption-experience/invitation-redemption-flow.png)
-**If the userΓÇÖs User principle name (UPN) matches with both an existing Azure AD and personal MSA account, the user will be prompted to choose which account they want to redeem with.*
+**If the userΓÇÖs User Principal Name (UPN) matches with both an existing Azure AD and personal MSA account, the user will be prompted to choose which account they want to redeem with.*
1. Azure AD performs user-based discovery to determine if the user exists in an [existing Azure AD tenant](./what-is-b2b.md#easily-invite-guest-users-from-the-azure-ad-portal).
In your directory, the guest's **Invitation accepted** value changes to **Yes**.
- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md) - [How do information workers add B2B collaboration users to Azure Active Directory?](add-users-information-worker.md) - [Add Azure Active Directory B2B collaboration users by using PowerShell](customize-invitation-api.md#powershell)-- [Leave an organization as a guest user](leave-the-organization.md)
+- [Leave an organization as a guest user](leave-the-organization.md)
active-directory Self Service Sign Up Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md
To use an [API connector](api-connectors-overview.md), you first create the API
## Create an API connector
-1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Under **Azure services**, select **Azure Active Directory**. 3. In the left menu, select **External Identities**. 4. Select **All API connectors**, and then select **New API connector**.
To use an [API connector](api-connectors-overview.md), you first create the API
5. Provide a display name for the call. For example, **Check approval status**. 6. Provide the **Endpoint URL** for the API call.
-7. Provide the authentication information for the API.
+7. Choose the **Authentication type** and configure the authentication information for calling your API. See the section below for options on securing your API.
- - Only Basic Authentication is currently supported. If you wish to use an API without Basic Authentication for development purposes, simply enter a dummy **Username** and **Password** that your API can ignore. For use with an Azure Function with an API key, you can include the code as a query parameter in the **Endpoint URL** (for example, `https://contoso.azurewebsites.net/api/endpoint?code=0123456789`).
+ ![Configure an API connector](./media/self-service-sign-up-add-api-connector/api-connector-config.png)
- ![Configure a new API connector](./media/self-service-sign-up-add-api-connector/api-connector-config.png)
8. Select **Save**.
+## Securing the API endpoint
+You can protect your API endpoint by using either HTTP basic authentication or HTTPS client certificate authentication (preview). In either case, you provide the credentials that Azure Active Directory will use when calling your API endpoint. Your API endpoint then checks the credentials and performs authorization decisions.
+
+### HTTP basic authentication
+HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617). Azure Active Directory sends an HTTP request with the client credentials (`username` and `password`) in the `Authorization` header. The credentials are formatted as the base64-encoded string `username:password`. Your API then checks these values to determine whether to reject an API call or not.
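For example, for the credentials `username` and `password`, the request would carry a header like this sketch (the endpoint path is illustrative):

```http
POST https://contoso.azurewebsites.net/api/endpoint HTTP/1.1
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
Content-type: application/json
```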
+
+### HTTPS client certificate authentication (preview)
+ > [!IMPORTANT]
-> Previously, you had to configure which user attributes to send to the API ('Claims to send') and which user attributes to accept from the API ('Claims to receive'). Now, all user attributes are sent by default if they have a value and any user attribute can be returned by the API in a 'continuation' response.
+> This functionality is in preview and is provided without a service-level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Client certificate authentication is a mutual certificate-based authentication, where the client provides a client certificate to the server to prove its identity. In this case, Azure Active Directory will use the certificate that you upload as part of the API connector configuration. This happens as a part of the SSL handshake. Only services that have proper certificates can access your API service. The client certificate is an X.509 digital certificate. In production environments, it should be signed by a certificate authority.
+
+To create a certificate, you can use [Azure Key Vault](../../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. You can then [export the certificate](../../key-vault/certificates/how-to-export-certificate.md) and upload it for use in the API connectors configuration. Note that a password is only required for certificate files protected by a password. You can also use PowerShell's [New-SelfSignedCertificate cmdlet](../../active-directory-b2c/secure-rest-api.md#prepare-a-self-signed-certificate-optional) to generate a self-signed certificate.
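For example, a sketch of generating and exporting a self-signed certificate with PowerShell; the subject name, file path, and password are illustrative, and a self-signed certificate is suitable only for testing:

```powershell
# Create a self-signed client certificate in the current user's store
$cert = New-SelfSignedCertificate -Subject "CN=api-connector-client" -CertStoreLocation "Cert:\CurrentUser\My"

# Export it as a password-protected .pfx file for upload in the API connector configuration
$password = ConvertTo-SecureString -String "YourStrongPassword" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath .\api-connector-client.pfx -Password $password
```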
+
+For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and validate the certificate from your API endpoint.
+
+It's recommended you set reminder alerts for when your certificate will expire. To upload a new certificate to an existing API connector, select the API connector under **All API connectors** and select **Upload new certificate**. The most recently uploaded certificate that is not expired and whose start date has passed is automatically used by Azure Active Directory.
+
+### API Key
+Some services use an "API key" mechanism to make it harder to access your HTTP endpoints during development. For [Azure Functions](../../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>).
+
+This is not a mechanism that should be used alone in production, so configuration for basic or certificate authentication is always required. If you do not wish to implement any authentication method (not recommended) for development purposes, you can choose basic authentication and use temporary values for `username` and `password` that your API can disregard while you implement proper authorization.
## The request sent to your API An API connector materializes as an **HTTP POST** request, sending user attributes ('claims') as key-value pairs in a JSON body. Attributes are serialized similarly to [Microsoft Graph](/graph/api/resources/user#properties) user properties.
Custom attributes exist in the **extension_\<extensions-app-id>_AttributeName**
Additionally, the **UI Locales ('ui_locales')** claim is sent by default in all requests. It provides a user's locale(s) as configured on their device that can be used by the API to return internationalized responses. > [!IMPORTANT]
-> If a claim to send does not have a value at the time the API endpoint is called, the claim will not be sent to the API. Your API should be designed to explicitly check for the value it expects.
+> If a claim does not have a value at the time the API endpoint is called, the claim will not be sent to the API. Your API should be designed to explicitly check and handle the case in which a claim is not in the request.
> [!TIP] > [**identities ('identities')**](/graph/api/resources/objectidentity) and the **Email Address ('email')** claims can be used by your API to identify a user before they have an account in your tenant. The 'identities' claim is sent when a user authenticates with an identity provider such as Google or Facebook. 'email' is always sent.
Follow these steps to add an API connector to a self-service sign-up user flow.
## After signing in with an identity provider
-An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (Google, Facebook, Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes.
-
-<!-- The following are examples of API connector scenarios you may enable at this step:
-- Use the email or federated identity that the user provided to look up claims in an existing system. Return these claims from the existing system, pre-fill the attribute collection page, and make them available to return in the token.-- Validate whether the user is included in an allow or deny list, and control whether they can continue with the sign-up flow. -->
+An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (like Google, Facebook, or Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes. This step is not invoked if a user is registering with a local account.
### Example request sent to the API at this step ```http
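POST <API-endpoint>
Content-type: application/json

{
  "email": "johnsmith@fabrikam.onmicrosoft.com",
  "identities": [
    {
      "signInType": "federated",
      "issuer": "facebook.com",
      "issuerAssignedId": "0123456789"
    }
  ],
  "displayName": "John Smith",
  "givenName": "John",
  "surname": "Smith",
  "ui_locales": "en-US"
}
```

This is a sketch; the exact claims sent depend on the identity provider used and the attributes configured for the user flow.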
See an example of a [blocking response](#example-of-a-blocking-response).
An API connector at this step in the sign-up process is invoked after the attribute collection page, if one is included. This step is always invoked before a user account is created in Azure AD.
-<!-- The following are examples of scenarios you might enable at this point during sign-up: -->
-<!--
-- Validate user input data and ask a user to resubmit data.-- Block a user sign-up based on data entered by the user.-- Perform identity verification.-- Query external systems for existing data about the user and overwrite the user-provided value. -->- ### Example request sent to the API at this step ```http
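POST <API-endpoint>
Content-type: application/json

{
  "email": "johnsmith@fabrikam.onmicrosoft.com",
  "displayName": "John Smith",
  "city": "Redmond",
  "postalCode": "98052",
  "ui_locales": "en-US"
}
```

This is a sketch; at this step the request also includes any attributes collected on the attribute collection page, if one is shown.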
When the web API receives an HTTP request from Azure AD during a user flow, it c
- Validation response #### Continuation response- A continuation response indicates that the user flow should continue to the next step: create the user in the directory. In a continuation response, the API can return claims. If a claim is returned by the API, the claim does the following:
Content-type: application/json
| version | String | Yes | The version of the API. |
| action | String | Yes | Value must be `Continue`. |
| \<builtInUserAttribute> | \<attribute-type> | No | Values can be stored in the directory if they're selected as a **Claim to receive** in the API connector configuration and **User attributes** for a user flow. Values can be returned in the token if selected as an **Application claim**. |
-| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The returned claim does not need to contain `_<extensions-app-id>_`. Values are be stored in the directory if they selected as a **Claim to receive** in the API connector configuration and **User attribute** for a user flow. Custom attributes cannot be sent back in the token. |
+| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The returned claim does not need to contain `_<extensions-app-id>_`. Returned values can overwrite values collected from a user. They can also be returned in the token if configured as part of the application. |
### Example of a blocking response
Content-type: application/json
"version": "1.0.0", "action": "ShowBlockPage", "userMessage": "There was a problem with your request. You are not able to sign up at this time.",
- "code": "CONTOSO-BLOCK-00"
} ```
Content-type: application/json
| version | String | Yes | The version of the API. |
| action | String | Yes | Value must be `ShowBlockPage`. |
| userMessage | String | Yes | Message to display to the user. |
-| code | String | No | Error code. Can be used for debugging purposes. Not displayed to the user. |
**End-user experience with a blocking response**
Content-type: application/json
"status": 400, "action": "ValidationError", "userMessage": "Please enter a valid Postal Code.",
- "code": "CONTOSO-VALIDATION-00"
}
```

| Parameter | Type | Required | Description |
| -- | - | -- | -- |
-| version | String | Yes | The version of the API. |
+| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `ValidationError`. |
| status | Integer | Yes | Must be value `400` for a ValidationError response. |
| userMessage | String | Yes | Message to display to the user. |
-| code | String | No | Error code. Can be used for debugging purposes. Not displayed to the user. |
+
+> [!NOTE]
+> HTTP status code has to be "400" in addition to the "status" value in the body of the response.
**End-user experience with a validation-error response**
Content-type: application/json
## Best practices and how to troubleshoot ### Using serverless cloud functions
-Serverless functions, like HTTP triggers in Azure Functions, provide a simple way create API endpoints to use with the API connector. You can use the serverless cloud function to, [for example](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts), perform validation logic and limit sign-ups to specific domains. The serverless cloud function can also call and invoke other web APIs, user stores, and other cloud services for more complex scenarios.
+Serverless functions, like HTTP triggers in Azure Functions, provide a simple way to create API endpoints to use with the API connector. You can use the serverless cloud function to, [for example](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts), perform validation logic and limit sign-ups to specific email domains. The serverless cloud function can also call and invoke other web APIs, user stores, and other cloud services for more complex scenarios.
### Best practices Ensure that:
Ensure that:
* The **Endpoint URL** of the API connector points to the correct API endpoint. * Your API explicitly checks for null values of received claims. * Your API responds as quickly as possible to ensure a fluid user experience.
- * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm." For Azure Functions, its recommended to use the [Premium plan](../../azure-functions/functions-premium-plan.md).
-
+ * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use the [Premium plan](../../azure-functions/functions-scale.md).
### Use logging In general, it's helpful to use the logging tools enabled by your web API service, like [Application insights](../../azure-functions/functions-monitoring.md), to monitor your API for unexpected error codes, exceptions, and poor performance.
In general, it's helpful to use the logging tools enabled by your web API servic
* Monitor your API for long response times. ## Next steps
-<!-
- Learn how to [add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)-- Get started with our [Azure Function quickstart samples](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts).
-<!-
+- Get started with our [quickstart samples](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts).
active-directory Self Service Sign Up Add Approvals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-approvals.md
Content-type: application/json
"version": "1.0.0", "action": "ShowBlockPage", "userMessage": "Your access request is already processing. You'll be notified when your request has been approved.",
- "code": "CONTOSO-APPROVAL-PENDING"
} ```
Content-type: application/json
"version": "1.0.0", "action": "ShowBlockPage", "userMessage": "Your sign up request has been denied. Please contact an administrator if you believe this is an error",
- "code": "CONTOSO-APPROVAL-DENIED"
} ```
Content-type: application/json
"version": "1.0.0", "action": "ShowBlockPage", "userMessage": "Your account is now waiting for approval. You'll be notified when your request has been approved.",
- "code": "CONTOSO-APPROVAL-REQUESTED"
} ```
Content-type: application/json
"version": "1.0.0", "action": "ShowBlockPage", "userMessage": "Your sign up request has been denied. Please contact an administrator if you believe this is an error",
- "code": "CONTOSO-APPROVAL-AUTO-DENIED"
} ```
The `userMessage` in the response is displayed to the user, for example:
After obtaining manual approval, the custom approval system creates a [user](/graph/azuread-users-concept-overview) account by using [Microsoft Graph](/graph/use-the-api). The way your approval system provisions the user account depends on the identity provider that was used by the user.
-### For a federated Google or Facebook user
+### For a federated Google or Facebook user and email one-time passcode
> [!IMPORTANT]
-> The approval system should explicitly check that `identities`, `identities[0]` and `identities[0].issuer` are present and that `identities[0].issuer` equals 'facebook' or 'google' to use this method.
+> The approval system should explicitly check that `identities`, `identities[0]` and `identities[0].issuer` are present and that `identities[0].issuer` equals 'facebook', 'google' or 'mail' to use this method.
-If your user signed in with a Google or Facebook account, you can use the [User creation API](/graph/api/user-post-users?tabs=http).
+If your user signed in with a Google or Facebook account or email one-time passcode, you can use the [User creation API](/graph/api/user-post-users?tabs=http).
1. The approval system receives the HTTP request from the user flow.
Content-type: application/json
| \<otherBuiltInAttribute> | No | Other built-in attributes like `displayName`, `city`, and others. Parameter names are the same as the parameters sent by the API connector. |
| \<extension\_\{extensions-app-id}\_CustomAttribute> | No | Custom attributes about the user. Parameter names are the same as the parameters sent by the API connector. |
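For example, a minimal sketch of the user-creation call; the attribute values are illustrative, and the full set of properties required by the [User creation API](/graph/api/user-post-users?tabs=http) depends on your scenario:

```http
POST https://graph.microsoft.com/v1.0/users
Content-type: application/json

{
  "displayName": "John Smith",
  "identities": [
    {
      "signInType": "federated",
      "issuer": "facebook.com",
      "issuerAssignedId": "0123456789"
    }
  ]
}
```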
-### For a federated Azure Active Directory user
+### For a federated Azure Active Directory user or Microsoft account user
-If a user signs in with a federated Azure Active Directory account, you must use the [invitation API](/graph/api/invitation-post) to create the user and then optionally the [user update API](/graph/api/user-update) to assign more attributes to the user.
+If a user signs in with a federated Azure Active Directory account or a Microsoft account, you must use the [invitation API](/graph/api/invitation-post) to create the user and then optionally the [user update API](/graph/api/user-update) to assign more attributes to the user.
1. The approval system receives the HTTP request from the user flow.
active-directory Protect M365 From On Premises Attacks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/protect-m365-from-on-premises-attacks.md
authentication decisions. For more information, see the
* Use Conditional Access to [block legacy authentication protocols](../conditional-access/howto-conditional-access-policy-block-legacy.md) whenever possible. Additionally, disable legacy authentication protocols at the application level by using an application-specific configuration.
- For more information, see [Legacy authentication protocols](../fundamentals/auth-sync-overview.md). Or see specific details for [Exchange Online](/exchange/clients-and-mobile-in-exchange-online/disable-basic-authentication-in-exchange-online#how-basic-authentication-works-in-exchange-online) and [SharePoint Online](/powershell/module/sharepoint-online/set-spotenant?view=sharepoint-ps).
+ For more information, see [Legacy authentication protocols](../fundamentals/auth-sync-overview.md). Or see specific details for [Exchange Online](/exchange/clients-and-mobile-in-exchange-online/disable-basic-authentication-in-exchange-online#how-basic-authentication-works-in-exchange-online) and [SharePoint Online](/powershell/module/sharepoint-online/set-spotenant).
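As a concrete example of the SharePoint Online side, this short sketch disables legacy authentication protocols with the SharePoint Online Management Shell; the admin center URL is a placeholder for your tenant.

```powershell
# Disable legacy authentication protocols for SharePoint Online.
# The admin center URL below is a placeholder for your tenant.
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"
Set-SPOTenant -LegacyAuthProtocolsEnabled $false
```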
* Implement the recommended [identity and device access configurations](/microsoft-365/security/office-365-security/identity-access-policies).
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
If you need additional permissions or resources supported, which you don't curre
New provisioning logs are available to help you monitor and troubleshoot the user and group provisioning deployment. These new log files include information about:

- What groups were successfully created in [ServiceNow](../saas-apps/servicenow-provisioning-tutorial.md)
-- What roles were imported from [Amazon Web Services (AWS)](../saas-apps/amazon-web-service-tutorial.md#configure-and-test-azure-ad-sso-for-amazon-web-services-aws)
+- What roles were imported from [AWS Single-Account Access](../saas-apps/amazon-web-service-tutorial.md#configure-and-test-azure-ad-sso-for-aws-single-account-access)
- What employees weren't imported from [Workday](../saas-apps/workday-inbound-tutorial.md)

For more information, see [Provisioning reports in the Azure Active Directory portal (preview)](../reports-monitoring/concept-provisioning-logs.md).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
For more information, read [Automate user provisioning to SaaS applications with
10 Azure AD built-in roles have been renamed so that they're aligned across the [Microsoft 365 admin center](https://docs.microsoft.com/microsoft-365/admin/microsoft-365-admin-center-preview), [Azure AD portal](https://portal.azure.com/), and [Microsoft Graph](https://developer.microsoft.com/graph/). To learn more about the new roles, refer to [Administrator role permissions in Azure Active Directory](../roles/permissions-reference.md#all-roles).
-![Table of new role names](media/whats-new/roles-table-rbac.png)
+![Table showing role names in MS Graph API and the Azure portal, and the proposed final name across API, Azure portal, and M365 admin center.](media/whats-new/roles-table-rbac.png)
You can now allow application owners to monitor activity by the provisioning ser
Some Azure Active Directory (AD) built-in roles have names that differ from those that appear in Microsoft 365 admin center, the Azure AD portal, and Microsoft Graph. This inconsistency can cause problems in automated processes. With this update, we're renaming 10 role names to make them consistent. The following table has the new role names:
-![Table of new role names](media/whats-new/azure-role.png)
+![Table showing role names in MS Graph API and the Azure portal, and the proposed new role name in M365 Admin Center, Azure portal, and API.](media/whats-new/azure-role.png)
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/app-management-powershell-samples.md
For more information about the cmdlets used in these samples, see [Applications]
|**Application Management scripts**||
| [Export secrets and certs (app registrations)](scripts/powershell-export-all-app-registrations-secrets-and-certs.md) | Export secrets and certificates for app registrations in Azure Active Directory tenant. |
| [Export secrets and certs (enterprise apps)](scripts/powershell-export-all-enterprise-apps-secrets-and-certs.md) | Export secrets and certificates for enterprise apps in Azure Active Directory tenant. |
-| [Export expiring secrets and certs](scripts/powershell-export-apps-with-expriring-secrets.md) | Export apps with expiring secrets and certificates in Azure Active Directory tenant. |
-| [Export secrets and certs expiring beyond required date](scripts/powershell-export-apps-with-secrets-beyond-required.md) | Export apps with secrets and certificates expiring beyond the required date in Azure Active Directory tenant. |
+| [Export expiring secrets and certs](scripts/powershell-export-apps-with-expriring-secrets.md) | Export App Registrations with expiring secrets and certificates and their Owners in Azure Active Directory tenant. |
+| [Export secrets and certs expiring beyond required date](scripts/powershell-export-apps-with-secrets-beyond-required.md) | Export App Registrations with secrets and certificates expiring beyond the required date in Azure Active Directory tenant. This uses the non-interactive Client_Credentials OAuth flow. |
active-directory Powershell Export All Enterprise Apps Secrets And Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-export-all-enterprise-apps-secrets-and-certs.md
# Export secrets and certificates for enterprise apps
-This PowerShell script example exports all secrets and certificates for the specified enterprise apps from your directory into a CSV file.
+This PowerShell script example exports all secrets, certificates and owners for the specified enterprise apps from your directory into a CSV file.
[!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
You can modify the "$Path" variable directly in PowerShell, with a CSV file path
| Command | Notes |
|||
-| [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication?view=azureadps-2.0&preserve-view=true) | Retrieves an application from your directory. |
-| [Get-AzureADApplicationOwner](/powershell/module/azuread/Get-AzureADApplicationOwner?view=azureadps-2.0&preserve-view=true) | Retrieves the owners of an application from your directory. |
+| [Get-AzureADServicePrincipal](/powershell/module/azuread/Get-azureADServicePrincipal?view=azureadps-2.0&preserve-view=true) | Retrieves an enterprise application from your directory. |
+| [Get-AzureADServicePrincipalOwner](/powershell/module/azuread/Get-AzureADServicePrincipalOwner?view=azureadps-2.0&preserve-view=true) | Retrieves the owners of an enterprise application from your directory. |
+ ## Next steps
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
# Export apps with secrets and certificates expiring beyond the required date
-This PowerShell script example exports all apps secrets and certificates expiring beyond the required date for the specified apps from your directory in a CSV file.
+This PowerShell script example non-interactively exports all app registration secrets and certificates expiring beyond a required period for the specified apps from your directory into a CSV file.
[!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/az
## Script explanation
+This script works non-interactively. The admin using it needs to replace the values in the "#PARAMETERS TO CHANGE" section with their own app ID, application secret, tenant name, the expiration period for the app credentials, and the path where the CSV file will be exported.
+This script uses the [Client_Credentials OAuth flow](../../develop/v2-oauth2-client-creds-grant-flow.md).
+The "RefreshToken" function builds the access token based on the values of the parameters modified by the admin.
+ The "Add-Member" command creates the columns in the CSV file.
-You can modify the "$Path" variable directly in PowerShell, with a CSV file path, in case you'd prefer the export to be non-interactive.
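For reference, here is a hedged sketch of the client credentials token request such a "RefreshToken" function might perform. The variable names mirror the parameters the admin fills in and are assumptions, not the script's actual code.

```powershell
# Hypothetical sketch of a client_credentials token request.
# $tenantName, $appId, and $appSecret stand in for the values supplied
# under "#PARAMETERS TO CHANGE"; they aren't the script's actual names.
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantName/oauth2/v2.0/token" `
    -ContentType "application/x-www-form-urlencoded" `
    -Body @{
        client_id     = $appId
        client_secret = $appSecret
        scope         = "https://graph.microsoft.com/.default"
        grant_type    = "client_credentials"
    }
$accessToken = $tokenResponse.access_token
```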
| Command | Notes |
|||
-| [Get-AzureADApplication](/powershell/module/azuread/get-azureadapplication?view=azureadps-2.0&preserve-view=true) | Retrieves an application from your directory. |
-| [Get-AzureADApplicationOwner](/powershell/module/azuread/Get-AzureADApplicationOwner?view=azureadps-2.0&preserve-view=true) | Retrieves the owners of an application from your directory. |
+| [Invoke-WebRequest](/powershell/module/azuread/Invoke-WebRequest?view=azureadps-2.0&preserve-view=true) | Sends HTTP and HTTPS requests to a web page or web service. It parses the response and returns collections of links, images, and other significant HTML elements. |
## Next steps
active-directory Qs Configure Template Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md
To enable system-assigned managed identity on a VM, your account needs the [Virt
3. When you're done, the following sections should be added to the `resource` section of your template and it should resemble the following:

   ```JSON
- "resources": [
+ "resources": [
    {
        //other resource provider properties...
        "apiVersion": "2018-06-01",
To enable system-assigned managed identity on a VM, your account needs the [Virt
"location": "[resourceGroup().location]", "identity": { "type": "SystemAssigned",
- },
- },
-
- //The following appears only if you provisioned the optional VM extension (to be deprecated)
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "[concat(variables('vmName'),'/ManagedIdentityExtensionForWindows')]",
- "apiVersion": "2018-06-01",
- "location": "[resourceGroup().location]",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
- ],
- "properties": {
- "publisher": "Microsoft.ManagedIdentity",
- "type": "ManagedIdentityExtensionForWindows",
- "typeHandlerVersion": "1.0",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "port": 50342
- }
- }
+ }
    }
  ]
```
To assign a user-assigned identity to a VM, your account needs the [Virtual Mach
**Microsoft.Compute/virtualMachines API version 2018-06-01**

```JSON
- "resources": [
+ "resources": [
    {
        //other resource provider properties...
        "apiVersion": "2018-06-01",
To assign a user-assigned identity to a VM, your account needs the [Virtual Mach
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('<USERASSIGNEDIDENTITYNAME>'))]": {} } }
- },
- //The following appears only if you provisioned the optional VM extension (to be deprecated)
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "[concat(variables('vmName'),'/ManagedIdentityExtensionForWindows')]",
- "apiVersion": "2018-06-01-preview",
- "location": "[resourceGroup().location]",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
- ],
- "properties": {
- "publisher": "Microsoft.ManagedIdentity",
- "type": "ManagedIdentityExtensionForWindows",
- "typeHandlerVersion": "1.0",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "port": 50342
- }
- }
}
- ]
+ ]
```

**Microsoft.Compute/virtualMachines API version 2017-12-01**
To assign a user-assigned identity to a VM, your account needs the [Virtual Mach
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('<USERASSIGNEDIDENTITYNAME>'))]" ] }
- },
-
- //The following appears only if you provisioned the optional VM extension (to be deprecated)
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "[concat(variables('vmName'),'/ManagedIdentityExtensionForWindows')]",
- "apiVersion": "2015-05-01-preview",
- "location": "[resourceGroup().location]",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
- ],
- "properties": {
- "publisher": "Microsoft.ManagedIdentity",
- "type": "ManagedIdentityExtensionForWindows",
- "typeHandlerVersion": "1.0",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "port": 50342
- }
- }
- }
- ]
+ }
+ ]
```

### Remove a user-assigned managed identity from an Azure VM
To remove a user-assigned identity from a VM, your account needs the [Virtual Ma
## Next steps

-- [Managed identities for Azure resources overview](overview.md).
+- [Managed identities for Azure resources overview](overview.md).
active-directory Howto Integrate Activity Logs With Splunk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md
Previously updated : 03/10/2020 Last updated : 03/10/2021
To use this feature, you need:
- An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-- The [Microsoft Azure Add on for Splunk](https://splunkbase.splunk.com/app/3757/).
+- The [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details).
## Integrate Azure Active Directory logs
active-directory Amazon Web Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Amazon Web Services (AWS) | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Amazon Web Services (AWS).
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with AWS Single-Account Access | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and AWS Single-Account Access.
Previously updated : 12/08/2020 Last updated : 03/05/2021
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Amazon Web Services (AWS)
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with AWS Single-Account Access
-In this tutorial, you'll learn how to integrate Amazon Web Services (AWS) with Azure Active Directory (Azure AD). When you integrate Amazon Web Services (AWS) with Azure AD, you can:
+In this tutorial, you'll learn how to integrate AWS Single-Account Access with Azure Active Directory (Azure AD). When you integrate AWS Single-Account Access with Azure AD, you can:
-* Control in Azure AD who has access to Amazon Web Services (AWS).
-* Enable your users to be automatically signed-in to Amazon Web Services (AWS) with their Azure AD accounts.
+* Control in Azure AD who has access to AWS Single-Account Access.
+* Enable your users to be automatically signed-in to AWS Single-Account Access with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Understanding the different AWS applications in the Azure AD application gallery
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Amazon Web Services (AWS) supports **SP and IDP** initiated SSO
+* AWS Single-Account Access supports **SP and IDP** initiated SSO.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Amazon Web Services (AWS) from the gallery
+## Adding AWS Single-Account Access from the gallery
-To configure the integration of Amazon Web Services (AWS) into Azure AD, you need to add Amazon Web Services (AWS) from the gallery to your list of managed SaaS apps.
+To configure the integration of AWS Single-Account Access into Azure AD, you need to add AWS Single-Account Access from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using a work account, school account, or personal Microsoft account.
1. In the Azure portal, search for and select **Azure Active Directory**.
1. Within the Azure Active Directory overview menu, choose **Enterprise Applications** > **All applications**.
1. Select **New application** to add an application.
-1. In the **Add from the gallery** section, type **Amazon Web Services (AWS)** in the search box.
-1. Select **Amazon Web Services (AWS)** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **AWS Single-Account Access** in the search box.
+1. Select **AWS Single-Account Access** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Amazon Web Services (AWS)
+## Configure and test Azure AD SSO for AWS Single-Account Access
-Configure and test Azure AD SSO with Amazon Web Services (AWS) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Amazon Web Services (AWS).
+Configure and test Azure AD SSO with AWS Single-Account Access using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS Single-Account Access.
-To configure and test Azure AD SSO with Amazon Web Services (AWS), perform the following steps:
+To configure and test Azure AD SSO with AWS Single-Account Access, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Amazon Web Services (AWS) SSO](#configure-amazon-web-services-aws-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Amazon Web Services (AWS) test user](#create-amazon-web-services-aws-test-user)** - to have a counterpart of B.Simon in Amazon Web Services (AWS) that is linked to the Azure AD representation of user.
- 1. **[How to configure role provisioning in Amazon Web Services (AWS)](#how-to-configure-role-provisioning-in-amazon-web-services-aws)**
+1. **[Configure AWS Single-Account Access SSO](#configure-aws-single-account-access-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create AWS Single-Account Access test user](#create-aws-single-account-access-test-user)** - to have a counterpart of B.Simon in AWS Single-Account Access that is linked to the Azure AD representation of user.
+ 1. **[How to configure role provisioning in AWS Single-Account Access](#how-to-configure-role-provisioning-in-aws-single-account-access)**
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Amazon Web Services (AWS)** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **AWS Single-Account Access** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](./media/amazon-web-service-tutorial/certificate.png)
-1. In the **Set up Amazon Web Services (AWS)** section, copy the appropriate URL(s) based on your requirement.
+1. In the **Set up AWS Single-Account Access** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Amazon Web Services (AWS).
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AWS Single-Account Access.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Amazon Web Services (AWS)**.
+1. In the applications list, select **AWS Single-Account Access**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Amazon Web Services (AWS) SSO
+## Configure AWS Single-Account Access SSO
1. In a different browser window, sign-on to your AWS company site as an administrator.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
c. Select **Close**.
-### How to configure role provisioning in Amazon Web Services (AWS)
+### How to configure role provisioning in AWS Single-Account Access
1. In the Azure AD management portal, in the AWS app, go to **Provisioning**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
> [!NOTE]
> After you save the provisioning credentials, you must wait for the initial sync cycle to run. Sync usually takes around 40 minutes to finish. You can see the status at the bottom of the **Provisioning** page, under **Current Status**.
-### Create Amazon Web Services (AWS) test user
+### Create AWS Single-Account Access test user
-The objective of this section is to create a user called B.Simon in Amazon Web Services (AWS). Amazon Web Services (AWS) doesn't need a user to be created in their system for SSO, so you don't need to perform any action here.
+The objective of this section is to create a user called B.Simon in AWS Single-Account Access. AWS Single-Account Access doesn't need a user to be created in their system for SSO, so you don't need to perform any action here.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Amazon Web Services (AWS) Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to AWS Single-Account Access Sign on URL where you can initiate the login flow.
-* Go to Amazon Web Services (AWS) Sign-on URL directly and initiate the login flow from there.
+* Go to AWS Single-Account Access Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Amazon Web Services (AWS) for which you set up the SSO
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the AWS Single-Account Access for which you set up the SSO
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Amazon Web Services (AWS) tile in the Access Panel, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Amazon Web Services (AWS) for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the AWS Single-Account Access tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the AWS Single-Account Access for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Known issues
- * In the **Provisioning** section, the **Mappings** subsection shows a "Loading..." message, and never displays the attribute mappings. The only provisioning workflow supported today is the import of roles from AWS into Azure AD for selection during a user or group assignment. The attribute mappings for this are predetermined, and aren't configurable.
+* AWS Single-Account Access provisioning integration can be used only to connect to AWS public cloud endpoints. AWS Single-Account Access provisioning integration can't be used to access AWS Government environments.
+
+* In the **Provisioning** section, the **Mappings** subsection shows a "Loading..." message, and never displays the attribute mappings. The only provisioning workflow supported today is the import of roles from AWS into Azure AD for selection during a user or group assignment. The attribute mappings for this are predetermined, and aren't configurable.
- * The **Provisioning** section only supports entering one set of credentials for one AWS tenant at a time. All imported roles are written to the `appRoles` property of the Azure AD [`servicePrincipal` object](/graph/api/resources/serviceprincipal?view=graph-rest-beta) for the AWS tenant.
+* The **Provisioning** section only supports entering one set of credentials for one AWS tenant at a time. All imported roles are written to the `appRoles` property of the Azure AD [`servicePrincipal` object](/graph/api/resources/serviceprincipal?view=graph-rest-beta) for the AWS tenant.
- Multiple AWS tenants (represented by `servicePrincipals`) can be added to Azure AD from the gallery for provisioning. There's a known issue, however, with not being able to automatically write all of the imported roles from the multiple AWS `servicePrincipals` used for provisioning into the single `servicePrincipal` used for SSO.
+ Multiple AWS tenants (represented by `servicePrincipals`) can be added to Azure AD from the gallery for provisioning. There's a known issue, however, with not being able to automatically write all of the imported roles from the multiple AWS `servicePrincipals` used for provisioning into the single `servicePrincipal` used for SSO.
- As a workaround, you can use the [Microsoft Graph API](/graph/api/resources/serviceprincipal?view=graph-rest-beta) to extract all of the `appRoles` imported into each AWS `servicePrincipal` where provisioning is configured. You can subsequently add these role strings to the AWS `servicePrincipal` where SSO is configured.
+ As a workaround, you can use the [Microsoft Graph API](/graph/api/resources/serviceprincipal?view=graph-rest-beta) to extract all of the `appRoles` imported into each AWS `servicePrincipal` where provisioning is configured. You can subsequently add these role strings to the AWS `servicePrincipal` where SSO is configured.
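    As a sketch of that workaround, the following hedged PowerShell reads the `appRoles` from one provisioning `servicePrincipal`; `$accessToken` and the object ID are placeholders, and writing the roles back to the SSO `servicePrincipal` is left out.

    ```powershell
    # Hypothetical sketch: list the appRoles imported into an AWS
    # servicePrincipal used for provisioning. $accessToken is assumed to
    # be a Graph token able to read service principals; the ID below is a
    # placeholder.
    $spId = "<provisioning-servicePrincipal-object-id>"
    $sp = Invoke-RestMethod -Method Get `
        -Uri "https://graph.microsoft.com/beta/servicePrincipals/$spId" `
        -Headers @{ Authorization = "Bearer $accessToken" }
    $sp.appRoles | Select-Object displayName, value
    ```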
* Roles must meet the following requirements to be eligible to be imported from AWS into Azure AD:
You can also use Microsoft Access Panel to test the application in any mode. Whe
## Next steps
-Once you configure Amazon Web Services (AWS) you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+Once you configure AWS Single-Account Access you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
active-directory Bomgarremotesupport Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bomgarremotesupport-tutorial.md
Previously updated : 11/12/2020 Last updated : 03/03/2021
To configure the integration of BeyondTrust Remote Support into Azure AD, you ne
1. In the **Add from the gallery** section, type **BeyondTrust Remote Support** in the search box. 1. Select **BeyondTrust Remote Support** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for BeyondTrust Remote Support
+## Configure and test Azure AD SSO for BeyondTrust Remote Support
Configure and test Azure AD SSO with BeyondTrust Remote Support using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in BeyondTrust Remote Support.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- a. In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<HOSTNAME>.bomgar.com/saml`
-
- b. In the **Identifier** box, type a URL using the following pattern:
+ a. In the **Identifier** box, type a URL using the following pattern:
`https://<HOSTNAME>.bomgar.com`
- c. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://<HOSTNAME>.bomgar.com/saml/sso`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<HOSTNAME>.bomgar.com/saml`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. You will get these values explained later in the tutorial.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-On URL. You will get these values explained later in the tutorial.
1. BeyondTrust Remote Support application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create BeyondTrust Remote Support test user
+In this section, a user called Britta Simon is created in BeyondTrust Remote Support. BeyondTrust Remote Support supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in BeyondTrust Remote Support, a new one is created after authentication.
+
+Follow the procedure below, which is mandatory for configuring BeyondTrust Remote Support.
+
We will be configuring the User Provision Settings here. The values used in this section will be referenced from the **User Attributes & Claims** section in the Azure portal. These are configured to the default values, which are already imported at the time of creation; however, the values can be customized if necessary.

![Screenshot shows the User Provision Settings where you can configure user values.](./media/bomgarremotesupport-tutorial/user-attribute.png)
We will be configuring the User Provision Settings here. The values used in this
> The groups and e-mail attribute are not necessary for this implementation. If utilizing Azure AD groups and assigning them to BeyondTrust Remote Support Group Policies for permissions, the Object ID of the group will need to be referenced via its properties in the Azure portal and placed in the 'Available Groups' section. Once this has been completed, the Object ID/AD Group will now be available for assignment to a group policy for permissions.
-![Screenshot shows the I T section with Membership type, Source, Type, and Object I D.](./media/bomgarremotesupport-tutorial/config-user2.png)
+![Screenshot shows the I T section with Membership type, Source, Type, and Object I D.](./media/bomgarremotesupport-tutorial/config-user-2.png)
![Screenshot shows the Basic Settings page for a group policy.](./media/bomgarremotesupport-tutorial/group-policy.png)
active-directory Getabstract Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/getabstract-provisioning-tutorial.md
Title: 'Tutorial: Configure getAbstract for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to getAbstract.
+description: Learn how to automatically provision and deprovision user accounts from Azure Active Directory to getAbstract.
documentationcenter: ''
# Tutorial: Configure getAbstract for automatic user provisioning
-This tutorial describes the steps you need to perform in both getAbstract and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [getAbstract](https://www.getabstract.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both getAbstract and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [getAbstract](https://www.getabstract.com) by using the Azure AD provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to software as a service (SaaS) applications with Azure AD](../app-provisioning/user-provisioning.md).
+## Capabilities supported
-## Capabilities Supported
> [!div class="checklist"]
> * Create users in getAbstract.
-> * Remove users in getAbstract when they do not require access anymore.
+> * Remove users in getAbstract when they don't require access anymore.
> * Keep user attributes synchronized between Azure AD and getAbstract. > * Provision groups and group memberships in getAbstract.
-> * [Single sign-on](getabstract-tutorial.md) to getAbstract (recommended)
+> * Enable [single sign-on (SSO)](getabstract-tutorial.md) to getAbstract (recommended).
## Prerequisites

The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A getAbstract tenant (getAbstract Corporate license).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning. Examples are Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator.
+* A getAbstract tenant (getAbstract corporate license).
* SSO enabled on Azure AD tenant and getAbstract tenant.
-* Approval and SCIM enabling for getAbstract (send email to b2b.itsupport@getabstract.com).
+* Approval and System for Cross-domain Identity Management (SCIM) enabling for getAbstract. (Send email to b2b.itsupport@getabstract.com.)
## Step 1. Plan your provisioning deployment
+
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and getAbstract](../app-provisioning/customize-application-attributes.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and getAbstract](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure getAbstract to support provisioning with Azure AD
-1. Sign into getAbstract
-2. Click on the settings icon located in the upper right hand corner and click on the **My Central Admin** option,
-
- ![getAbstract My Central Admin](media/getabstract-provisioning-tutorial/my-account.png)
-3. Locate and click on the **SCIM Admin** option
-
- ![getAbstract SCIM Admin](media/getabstract-provisioning-tutorial/scim-admin.png)
+1. Sign in to getAbstract.
+1. Select the settings icon located in the upper-right corner, and select the **My Central Admin** option.
-4. Click on the **Go** button
+ ![Screenshot that shows the getAbstract My Central Admin.](media/getabstract-provisioning-tutorial/my-account.png)
- ![getAbstract SCIM Client Id](media/getabstract-provisioning-tutorial/scim-client-go.png)
+1. Locate and select the **SCIM Admin** option.
-5. Click on the **Generate new token**
+ ![Screenshot that shows the getAbstract SCIM Admin.](media/getabstract-provisioning-tutorial/scim-admin.png)
- ![getAbstract SCIM Token 1](media/getabstract-provisioning-tutorial/scim-generate-token-step-2.png)
+1. Select **Go**.
-6. If you are sure then click on the **Generate new token** button. Otherwise, click on **Cancel** button
+ ![Screenshot that shows the getAbstract SCIM Client Id.](media/getabstract-provisioning-tutorial/scim-client-go.png)
- ![getAbstract SCIM Token 2](media/getabstract-provisioning-tutorial/scim-generate-token-step-1.png)
+1. Select **Generate new token**.
-7. Lastly, you can either click on the copy to clipboard icon or select the whole token and copy it. Also make a note that Tenant/Base URL is `https://www.getabstract.com/api/scim/v2`. These values will be entered in the **Secret Token** * and **Tenant URL** * field in the Provisioning tab of your getAbstract's application in the Azure portal.
+ ![Screenshot that shows the getAbstract SCIM Token 1.](media/getabstract-provisioning-tutorial/scim-generate-token-step-2.png)
- ![getAbstract SCIM Token 3](media/getabstract-provisioning-tutorial/scim-generate-token-step-3.png)
+1. If you're sure, select **Generate new token**. Otherwise, select **Cancel**.
+ ![Screenshot that shows the getAbstract SCIM Token 2.](media/getabstract-provisioning-tutorial/scim-generate-token-step-1.png)
-## Step 3. Add getAbstract from the Azure AD application gallery
+1. Either select the copy-to-clipboard icon or select the whole token and copy it. Also make a note that the Tenant/Base URL is `https://www.getabstract.com/api/scim/v2`. These values will be entered in the **Secret Token** and **Tenant URL** boxes on the **Provisioning** tab of your getAbstract application in the Azure portal.
+
+ ![Screenshot that shows the getAbstract SCIM Token 3.](media/getabstract-provisioning-tutorial/scim-generate-token-step-3.png)
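If you want to sanity-check the token before moving on, a hedged sketch follows. It assumes the getAbstract SCIM endpoint accepts a standard SCIM `GET /Users` listing, which isn't documented here; `$secretToken` is the token you generated above.

```powershell
# Hypothetical sanity check of the SCIM credentials. A SCIM ListResponse
# suggests the token and Tenant URL are working; this assumes standard
# SCIM list support on the endpoint.
Invoke-RestMethod -Method Get `
    -Uri "https://www.getabstract.com/api/scim/v2/Users?count=1" `
    -Headers @{ Authorization = "Bearer $secretToken" }
```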
-Add getAbstract from the Azure AD application gallery to start managing provisioning to getAbstract. If you have previously setup getAbstract for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+## Step 3. Add getAbstract from the Azure AD application gallery
-## Step 4. Define who will be in scope for provisioning
+Add getAbstract from the Azure AD application gallery to start managing provisioning to getAbstract. If you've previously set up getAbstract for SSO, you can use the same application. We recommend that you create a separate app when you test out the integration initially. To learn more about how to add an application from the gallery, see [this quickstart](../manage-apps/add-application-portal.md).
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+## Step 4. Define who will be in scope for provisioning
-* When assigning users and groups to getAbstract, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+You can use the Azure AD provisioning service to scope who will be provisioned based on assignment to the application or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described in [Provision apps with scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* When you assign users and groups to getAbstract, you must select a role other than **Default Access**. Users with the default access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+* Start small. Test with a small set of users and groups before you roll out to everyone. When scope for provisioning is set to assigned users and groups, you can control this option by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-## Step 5. Configure automatic user provisioning to getAbstract
+## Step 5. Configure automatic user provisioning to getAbstract
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users or groups in TestApp based on user or group assignments in Azure AD.
-### To configure automatic user provisioning for getAbstract in Azure AD:
+### Configure automatic user provisioning for getAbstract in Azure AD
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot that shows the Enterprise applications pane.](common/enterprise-applications.png)
-2. In the applications list, select **getAbstract**.
+1. In the list of applications, select **getAbstract**.
- ![The getAbstract link in the Applications list](common/all-applications.png)
+ ![Screenshot that shows the getAbstract link in the list of applications.](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
- ![Provisioning tab](common/provisioning.png)
+ ![Screenshot that shows the Provisioning tab.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set **Provisioning Mode** to **Automatic**.
- ![Provisioning tab automatic](common/provisioning-automatic.png)
+ ![Screenshot that shows Provisioning Mode set to Automatic.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your getAbstract Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to getAbstract. If the connection fails, ensure your getAbstract account has Admin permissions and try again.
+1. In the **Admin Credentials** section, enter your getAbstract **Tenant URL** and **Secret token** information. Select **Test Connection** to ensure that Azure AD can connect to getAbstract. If the connection fails, ensure that your getAbstract account has admin permissions and try again.
- ![Token](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot that shows the Tenant URL and Secret Token boxes.](common/provisioning-testconnection-tenanturltoken.png)
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** box, enter the email address of a person or group who should receive the provisioning error notifications. Select the **Send an email notification when a failure occurs** check box.
- ![Notification Email](common/provisioning-notification-email.png)
+ ![Screenshot that shows the Notification Email box.](common/provisioning-notification-email.png)
-7. Select **Save**.
+1. Select **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to getAbstract**.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to getAbstract**.
-9. Review the user attributes that are synchronized from Azure AD to getAbstract in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in getAbstract for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the getAbstract API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to getAbstract in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in getAbstract for update operations. If you change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the getAbstract API supports filtering users based on that attribute. Select **Save** to commit any changes.
|Attribute|Type|Supported for filtering|
||||
This section guides you through the steps to configure the Azure AD provisioning
|externalId|String|
|preferredLanguage|String|
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to getAbstract**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to getAbstract**.
-11. Review the group attributes that are synchronized from Azure AD to getAbstract in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in getAbstract for update operations. Select the **Save** button to commit any changes.
+1. Review the group attributes that are synchronized from Azure AD to getAbstract in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in getAbstract for update operations. Select **Save** to commit any changes.
|Attribute|Type|Supported for filtering|
||||
|displayName|String|&check;|
|externalId|String|
|members|Reference|
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for getAbstract, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To configure scoping filters, see the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for getAbstract, change **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot that shows the Provisioning Status toggled On.](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to getAbstract by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users or groups that you want to provision to getAbstract by selecting the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot that shows the Provisioning Scope.](common/provisioning-scope.png)
-15. When you are ready to provision, click **Save**.
+1. When you're ready to provision, select **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot that shows the Save button.](common/provisioning-configuration-save.png)
-This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur about every 40 minutes as long as the Azure AD provisioning service is running.
## Step 6. Monitor your deployment
-Once you've configured provisioning, use the following resources to monitor your deployment:
-* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+After you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users were provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. To learn more about quarantine states, see [Application provisioning status of quarantine](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)

## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Github Ae Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-ae-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click on **Select groups** and search for the **Group** you want to include in this claim, whose members should be administrators for GHAE.
-1. Select **Attribute** for **Source** and enter **true** for the **Value**.
+1. Select **Attribute** for **Source** and enter **true** (without quotes) for the **Value**.
-10. Click **Save**.
+1. Click **Save**.
![manage claim](./media/github-ae-tutorial/administrator.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificateBase64.png)
+ ![The Certificate download link](common/certificatebase64.png)
1. On the **Set up GitHub AE** section, copy the appropriate URL(s) based on your requirement.
You can also use Microsoft Access Panel to test the application in any mode. Whe
* [Configuring user provisioning for your enterprise](https://docs.github.com/github-ae@latest/admin/authentication/configuring-user-provisioning-for-your-enterprise).
-* Once you configure GitHub AE you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+* Once you configure GitHub AE you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Sharingcloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sharingcloud-tutorial.md
Previously updated : 02/09/2021 Last updated : 03/10/2021
In this tutorial, you'll learn how to integrate SharingCloud with Azure Active D
* Enable your users to be automatically signed-in to SharingCloud with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-If you want to learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
-* SharingCloud single sign-on (SSO) enabled subscription.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Sapient single sign-on (SSO) enabled subscription.
+ ## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of SharingCloud into Azure AD, you need to add SharingCloud from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.-
- ![The Azure Active Directory button](common/select-azuread.png)
-
1. Navigate to **Enterprise Applications** and then select **All Applications**.-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
1. To add new application, select **New application**.-
- ![The New application button](common/add-new-app.png)
-
1. In the **Add from the gallery** section, type **SharingCloud** in the search box.-
- ![SharingCloud in the results list](common/search-new-app.png)
-
1. Select **SharingCloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for SharingCloud
+
+## Configure and test Azure AD SSO for SharingCloud
Configure and test Azure AD SSO with SharingCloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SharingCloud.
-To configure and test Azure AD SSO with SharingCloud, complete the following building blocks:
+To configure and test Azure AD SSO with SharingCloud, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with SharingCloud, complete the following bui
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **SharingCloud** application integration page, find the **Manage** section and select **single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
+1. In the Azure portal, on the **SharingCloud** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-1. On the **Set up single sign-on with SAML** page, click the **Edit** icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
Upload the metadata file with XML file provided by SharingCloud. Contact the [SharingCloud Client support team](mailto:support@sharingcloud.com) to get the file.
- ![image](common/upload-metadata.png)
+ ![Screenshot of the Basic SAML Configuration user interface with the Upload metadata file link highlighted.](common/upload-metadata.png)
Select the metadata file provided and click on **Upload**.
- ![image](common/browse-upload-metadata.png)
+ ![Screenshot of the metadata file provided user interface, with the select file icon and Upload button highlighted.](common/browse-upload-metadata.png)
1. The SharingCloud application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/edit_attribute.png)
+ ![Screenshot of the User Attributes user interface with the edit icon highlighted.](common/edit_attribute.png)
1. In addition to the above, the SharingCloud application expects a few more attributes to be passed back in the SAML response; these are shown below. The attributes are prepopulated, but you can review them per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Metadata Url to copy](common/copy_metadataurl.png)
-## Configure SharingCloud SSO
-
-To configure single sign-on on **SharingCloud** side, you need to send the copied **Federation Metadata Url** from Azure portal to [SharingCloud support team](mailto:support@sharingcloud.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **SharingCloud**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
+## Configure SharingCloud SSO
+
+To configure single sign-on on the **SharingCloud** side, you need to send the copied **Federation Metadata Url** from the Azure portal to the [SharingCloud support team](mailto:support@sharingcloud.com). The support team configures this setting so that the SAML SSO connection is set properly on both sides.
+ ### Create SharingCloud test user In this section, a user called Britta Simon is created in SharingCloud. SharingCloud supports Just In Time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in SharingCloud, a new one is created after authentication. ## Test SSO
-* Go to your SharingCloud URL directly and initiate the login flow from there.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the SharingCloud Sign-on URL, where you can initiate the login flow.
+
+* Go to the SharingCloud Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the SharingCloud instance for which you set up SSO.
+
You can also use Microsoft My Apps to test the application in any mode. When you click the SharingCloud tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode, you should be automatically signed in to the SharingCloud instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+ ## Next steps
active-directory Security Info Setup Auth App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/security-info-setup-auth-app.md
Security info methods are used for both two-factor security verification and for
Depending on your organization's settings, you might be able to use an authentication app as one of your security info methods. You aren't required to use the Microsoft Authenticator app, and you can choose a different app during the setup process. However, this article uses the Microsoft Authenticator app.
->[!Important]
-> If you have setup Microsoft Authenticator app on 5 different devices or 5 hardware tokens, you would not be able to setup a sixth one and may see the following error message.
+> [!IMPORTANT]
+> If you have set up the Microsoft Authenticator app on five different devices or if you've used five hardware tokens, you won't be able to set up a sixth one, and you might see the following error message:
>
-> **You can't setup Microsoft Authenticator because you already have five authenticator apps or hardware tokens. Please contact your administrator to delete one of your authenticator apps or hardware tokens.**
+> **You can't set up Microsoft Authenticator because you already have five authenticator apps or hardware tokens. Please contact your administrator to delete one of your authenticator apps or hardware tokens.**
### To set up the Microsoft Authenticator app
Depending on your organization's settings, you might be able to use an authent
![My Profile page, showing highlighted Security info links](media/security-info/securityinfo-myprofile-page.png)
-2. Select **Security info** from the left navigation pane or from the link in the **Security info** block, and then select **Add method** from the **Security info** page.
+2. Select **Security info** in the left menu, or use the link in the **Security info** pane. If you have already registered, you'll be prompted for two-factor verification. Then, select **Add method** in the **Security info** pane.
![Security info page with highlighted Add method option](media/security-info/securityinfo-myprofile-addmethod-page.png)
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/aks-migration.md
Stateless application migration is the most straightforward case. Apply your res
Carefully plan your migration of stateful applications to avoid data loss or unexpected downtime. If you use Azure Files, you can mount the file share as a volume into the new cluster:
-* [Mount Static Azure Files as a Volume](./azure-files-volume.md#mount-the-file-share-as-a-volume)
+* [Mount Static Azure Files as a Volume](./azure-files-volume.md#mount-file-share-as-a-persistent-volume)
If you use Azure Managed Disks, you can only mount the disk if unattached to any VM: * [Mount Static Azure Disk as a Volume](./azure-disk-volume.md#mount-disk-as-volume)
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-files-volume.md
Use the `kubectl create secret` command to create the secret. The following exam
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY ```
-## Mount the file share as a volume
+## Mount file share as an inline volume
+> [!NOTE]
+> Starting with Kubernetes versions 1.18.15, 1.19.7, 1.20.2, and 1.21.0, the secret namespace in an inline `azureFile` volume can only be set to the `default` namespace. To specify a different secret namespace, use the persistent volume example below instead.
To mount the Azure Files share into your pod, configure the volume in the container spec. Create a new file named `azure-files-pod.yaml` with the following contents. If you changed the name of the Files share or secret name, update the *shareName* and *secretName*. If desired, update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
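As a minimal sketch, the inline `azureFile` volume in the pod spec might look like the following. It assumes the `aksshare` share and `azure-secret` secret created earlier; the pod name, image, and mount path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
    volumeMounts:
    - name: azure
      # Path where the file share is mounted in the pod
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureFile:
      # Kubernetes secret that holds the storage account name and key
      secretName: azure-secret
      shareName: aksshare
      readOnly: false
```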
Volumes:
[...] ```
-## Mount options
+## Mount file share as a persistent volume
+
+### Mount options
-The default value for *fileMode* and *dirMode* is *0755* for Kubernetes version 1.9.1 and above. If using a cluster with Kubernetes version 1.8.5 or greater and statically creating the persistent volume object, mount options need to be specified on the *PersistentVolume* object. The following example sets *0777*:
+The default value for *fileMode* and *dirMode* is *0777* for Kubernetes version 1.15 and above. The following example sets *0755* on the *PersistentVolume* object:
```yaml apiVersion: v1
spec:
- ReadWriteMany azureFile: secretName: azure-secret
+ secretNamespace: default
shareName: aksshare readOnly: false mountOptions:
- - dir_mode=0777
- - file_mode=0777
+ - dir_mode=0755
+ - file_mode=0755
- uid=1000 - gid=1000 - mfsymlinks - nobrl ```
-If using a cluster of version 1.8.0 - 1.8.4, a security context can be specified with the *runAsUser* value set to *0*. For more information on Pod security context, see [Configure a Security Context][kubernetes-security-context].
- To update your mount options, create a *azurefile-mount-options-pv.yaml* file with a *PersistentVolume*. For example: ```yaml
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/certificate-rotation.md
AKS generates and uses the following certificates, Certificate Authorities, and
* The AKS API server creates a Certificate Authority (CA) called the Cluster CA. * The API server has a Cluster CA, which signs certificates for one-way communication from the API server to kubelets. * Each kubelet also creates a Certificate Signing Request (CSR), which is signed by the Cluster CA, for communication from the kubelet to the API server.
-* The etcd key value store has a certificate signed by the Cluster CA for communication from etcd to the API server.
-* The etcd key value store creates a CA that signs certificates to authenticate and authorize data replication between etcd replicas in the AKS cluster.
* The API aggregator uses the Cluster CA to issue certificates for communication with other APIs. The API aggregator can also have its own CA for issuing those certificates, but it currently uses the Cluster CA. * Each node uses a Service Account (SA) token, which is signed by the Cluster CA. * The `kubectl` client has a certificate for communicating with the AKS cluster.
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
<!-- LINKS - external --> [aks-engine]: https://github.com/Azure/aks-engine [kubectl-overview]: https://kubernetes.io/docs/user-guide/kubectl-overview/
-[compliance-doc]: https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942
+[compliance-doc]: https://azure.microsoft.com/en-us/overview/trusted-cloud/compliance/
<!-- LINKS - internal --> [acr-docs]: ../container-registry/container-registry-intro.md
Learn more about deploying and managing AKS with the Azure CLI Quickstart.
[kubernetes-rbac]: concepts-identity.md#kubernetes-role-based-access-control-kubernetes-rbac [concepts-identity]: concepts-identity.md [concepts-storage]: concepts-storage.md
-[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
+[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/planned-maintenance.md
+
+ Title: Use Planned Maintenance for your Azure Kubernetes Service (AKS) cluster (preview)
+
+description: Learn how to use Planned Maintenance in Azure Kubernetes Service (AKS).
+ Last updated : 03/03/2021
+# Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster (preview)
+
+Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows during which these control plane updates occur, minimizing workload impact. Once scheduled, all your maintenance occurs during the window you selected. You can schedule one or more weekly windows on your cluster by specifying a day or a time range on a specific day. Maintenance windows are configured using the Azure CLI.
+
+## Before you begin
+
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
++
+### Limitations
+
+When using Planned Maintenance, the following restrictions apply:
+
+- AKS reserves the right to break these windows for fixes and patches that are urgent or critical.
+- Maintenance operations are considered *best-effort only* and are not guaranteed to occur within a specified window.
+- Updates cannot be blocked for more than seven days.
+
+### Install aks-preview CLI extension
+
+You also need the *aks-preview* Azure CLI extension version 0.5.4 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+## Allow maintenance every Monday from 1:00am to 2:00am
+
+To add a maintenance window, you can use the `az aks maintenanceconfiguration add` command.
+
+> [!IMPORTANT]
+> Planned Maintenance windows are specified in Coordinated Universal Time (UTC).
+
+```azurecli-interactive
+az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Monday --start-hour 1
+```
+
+The following example output shows the maintenance window from 1:00am to 2:00am every Monday.
+
+```json
+{
+ "id": "/subscriptions/<subscriptionID>/resourcegroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/maintenanceConfigurations/default",
+ "name": "default",
+ "notAllowedTime": null,
+ "resourceGroup": "MyResourceGroup",
+ "systemData": null,
+ "timeInWeek": [
+ {
+ "day": "Monday",
+ "hourSlots": [
+ 1
+ ]
+ }
+ ],
+ "type": null
+}
+```
+
+To allow maintenance any time during a day, omit the *start-hour* parameter. For example, the following command sets the maintenance window for the full day every Monday:
+
+```azurecli-interactive
+az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Monday
+```
+
+## Add a maintenance configuration with a JSON file
+
+You can also use a JSON file to create a maintenance window instead of using parameters. Create a `test.json` file with the following contents:
+
+```json
+ {
+ "timeInWeek": [
+ {
+ "day": "Tuesday",
+ "hour_slots": [
+ 1,
+ 2
+ ]
+ },
+ {
+ "day": "Wednesday",
+ "hour_slots": [
+ 1,
+ 6
+ ]
+ }
+ ],
+ "notAllowedTime": [
+ {
+ "start": "2021-05-26T03:00:00Z",
+            "end": "2021-05-30T12:00:00Z"
+ }
+ ]
+}
+```
+
+The above JSON file specifies maintenance windows every Tuesday from 1:00am to 3:00am and every Wednesday from 1:00am to 2:00am and from 6:00am to 7:00am. There is also an exception from *2021-05-26T03:00:00Z* to *2021-05-30T12:00:00Z* where maintenance isn't allowed, even if it overlaps with a maintenance window. The following command adds the maintenance windows from `test.json`.
+
+```azurecli-interactive
+az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --config-file ./test.json
+```
+
+## Update an existing maintenance window
+
+To update an existing maintenance configuration, use the `az aks maintenanceconfiguration update` command.
+
+```azurecli-interactive
+az aks maintenanceconfiguration update -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Monday --start-hour 1
+```
+
+## List all maintenance windows in an existing cluster
+
+To see all current maintenance configuration windows in your AKS Cluster, use the `az aks maintenanceconfiguration list` command.
+
+```azurecli-interactive
+az aks maintenanceconfiguration list -g MyResourceGroup --cluster-name myAKSCluster
+```
+
+In the output below, you can see that there are two maintenance windows configured for myAKSCluster. One window is on Mondays at 1:00am, and the other is on Fridays at 4:00am.
+
+```json
+[
+ {
+ "id": "/subscriptions/<subscriptionID>/resourcegroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/maintenanceConfigurations/default",
+ "name": "default",
+ "notAllowedTime": null,
+ "resourceGroup": "MyResourceGroup",
+ "systemData": null,
+ "timeInWeek": [
+ {
+ "day": "Monday",
+ "hourSlots": [
+ 1
+ ]
+ }
+ ],
+ "type": null
+ },
+ {
+ "id": "/subscriptions/<subscriptionID>/resourcegroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/maintenanceConfigurations/testConfiguration",
+ "name": "testConfiguration",
+ "notAllowedTime": null,
+ "resourceGroup": "MyResourceGroup",
+ "systemData": null,
+ "timeInWeek": [
+ {
+ "day": "Friday",
+ "hourSlots": [
+ 4
+ ]
+ }
+ ],
+ "type": null
+ }
+]
+```
+
+## Show a specific maintenance configuration window in an AKS cluster
+
+To see a specific maintenance configuration window in your AKS Cluster, use the `az aks maintenanceconfiguration show` command.
+
+```azurecli-interactive
+az aks maintenanceconfiguration show -g MyResourceGroup --cluster-name myAKSCluster --name default
+```
+
+The following example output shows the maintenance window for *default*:
+
+```json
+{
+ "id": "/subscriptions/<subscriptionID>/resourcegroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/maintenanceConfigurations/default",
+ "name": "default",
+ "notAllowedTime": null,
+ "resourceGroup": "MyResourceGroup",
+ "systemData": null,
+ "timeInWeek": [
+ {
+ "day": "Monday",
+ "hourSlots": [
+ 1
+ ]
+ }
+ ],
+ "type": null
+}
+```
+
+## Delete a maintenance configuration window in an existing AKS cluster
+
+To delete a specific maintenance configuration window in your AKS cluster, use the `az aks maintenanceconfiguration delete` command.
+
+```azurecli-interactive
+az aks maintenanceconfiguration delete -g MyResourceGroup --cluster-name myAKSCluster --name default
+```
+
+## Next steps
+
+- To get started with upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade]
++
+<!-- LINKS - Internal -->
+[aks-quickstart-cli]: kubernetes-walkthrough.md
+[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-support-policies]: support-policies.md
+[aks-faq]: faq.md
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-aks-install-cli]: /cli/azure/aks?view=azure-cli-latest#az-aks-install-cli&preserve-view=true
+[az-provider-register]: /cli/azure/provider?view=azure-cli-latest#az-provider-register
+[aks-upgrade]: upgrade-cluster.md
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-authentication-authorization.md
# Authentication and authorization in Azure App Service and Azure Functions
-Azure App Service provides built-in authentication and authorization support, so you can sign in users and access data by writing minimal or no code in your web app, RESTful API, and mobile back end, and also [Azure Functions](../azure-functions/functions-overview.md). This article describes how App Service helps simplify authentication and authorization for your app.
+Azure App Service provides built-in authentication and authorization support (sometimes referred to as "Easy Auth"), so you can sign in users and access data by writing minimal or no code in your web app, RESTful API, and mobile back end, and also [Azure Functions](../azure-functions/functions-overview.md). This article describes how App Service helps simplify authentication and authorization for your app.
Secure authentication and authorization require deep understanding of security, including federation, encryption, [JSON web tokens (JWT)](https://wikipedia.org/wiki/JSON_Web_Token) management, [grant types](https://oauth.net/2/grant-types/), and so on. App Service provides these utilities so that you can spend more time and energy on providing business value to your customer.
Secure authentication and authorization require deep understanding of security,
> The ASP.NET Core 2.1 and above versions hosted by App Service are already patched for this breaking change and handle Chrome 80 and older browsers appropriately. In addition, the same patch for ASP.NET Framework 4.7.2 has been deployed on the App Service instances throughout January 2020. For more information, see [Azure App Service SameSite cookie update](https://azure.microsoft.com/updates/app-service-samesite-cookie-update/). >
-> [!NOTE]
-> The Authentication/Authorization feature is also sometimes referred to as "Easy Auth".
- > [!NOTE] > Enabling this feature will cause **all** non-secure HTTP requests to your application to be automatically redirected to HTTPS, regardless of the App Service configuration setting to [enforce HTTPS](configure-ssl-bindings.md#enforce-https). If needed, you can disable this via the `requireHttps` setting in the [auth settings configuration file](app-service-authentication-how-to.md#configuration-file-reference), but you must then take care to ensure no security tokens ever get transmitted over non-secure HTTP connections.
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
app-service Webjobs Sdk How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/webjobs-sdk-how-to.md
static async Task Main()
} ```
-For more details, see the [Event Hubs binding](../azure-functions/functions-bindings-event-hubs-trigger.md#host-json) article.
+For more details, see the [Event Hubs binding](../azure-functions/functions-bindings-event-hubs.md#host-json) article.
### Queue storage trigger configuration
application-gateway Application Gateway Faq Md https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-faq-md.md
Usually, you see an unknown status when access to the backend is blocked by a ne
Due to current platform limitations, if you have an NSG on the Application Gateway v2 (Standard_v2, WAF_v2) subnet and if you have enabled NSG flow logs on it, you will see nondeterministic behavior and this scenario is currently not supported.
-### Does Application Gateway store customer data?
+### Where does Application Gateway store customer data?
-No, Application Gateway does not store customer data.
+Application Gateway does not move or store customer data out of the region it's deployed in.
## Next steps
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/overview.md
Azure Attestation is the preferred choice for attesting TEEs as it offers the fo
Clusters deployed in two regions will operate independently under normal circumstances. In the case of a fault or outage of one region, the following takes place: - Azure Attestation BCDR will provide seamless failover in which customers do not need to take any extra step to recover-- The [Azure Traffic Manager](../traffic-manager/index.yml) for the region will detect the health probe is degraded and switch the endpoint to paired region
+- The [Azure Traffic Manager](../traffic-manager/index.yml) for the region will detect that the health probe is degraded and switch the endpoint to the paired region
- Existing connections will not work and will receive internal server error or timeout issues-- All control plane operations will be blocked. Customers will not be able to create attestation providers and update policies in the primary region-- All data plane operations, including attest calls, will continue to work in primary region
+- All control plane operations will be blocked. Customers will not be able to create attestation providers in the primary region
+- All data plane operations, including attest calls and policy configuration, will be served by the secondary region. Customers can continue to perform data plane operations with the original URI corresponding to the primary region
## Next steps - Learn about [Azure Attestation basic concepts](basic-concepts.md)
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
automation Automation Managing Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-managing-data.md
Title: Azure Automation data security
description: This article helps you learn how Azure Automation protects your privacy and secures your data. Previously updated : 03/02/2021 Last updated : 03/10/2021 + # Management of Azure Automation data This article contains several topics explaining how data is protected and secured in an Azure Automation environment.
To ensure the security of data in transit to Azure Automation, we strongly encou
* Webhook calls
-* Hybrid Runbook Workers, which includes machines managed by Update Management and Change Tracking and Inventory.
+* Hybrid Runbook Workers, which include machines managed by Update Management and Change Tracking and Inventory.
* DSC nodes
-Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**. We do not recommend explicitly setting your agent to only use TLS 1.2 unless absolutely necessary, as it can break platform level security features that allow you to automatically detect and take advantage of newer more secure protocols as they become available, such as TLS 1.3.
+Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable, and while they still currently work to allow backwards compatibility, they are **not recommended**. We do not recommend explicitly setting your agent to only use TLS 1.2 unless it's necessary, as doing so can break platform-level security features that allow you to automatically detect and take advantage of newer, more secure protocols as they become available, such as TLS 1.3.
For information about TLS 1.2 support with the Log Analytics agent for Windows and Linux, which is a dependency for the Hybrid Runbook Worker role, see [Log Analytics agent overview - TLS 1.2](..//azure-monitor/agents/log-analytics-agent.md#tls-12-protocol).
For information about TLS 1.2 support with the Log Analytics agent for Windows a
## Data retention
-When you delete a resource in Azure Automation, it's retained for a number of days for auditing purposes before permanent removal. You can't see or use the resource during this time. This policy also applies to resources that belong to a deleted Automation account. The retention policy applies to all users and currently can't be customized. However, if you need to keep data for a longer period, you can [forward Azure Automation job data to Azure Monitor logs](automation-manage-send-joblogs-log-analytics.md).
+When you delete a resource in Azure Automation, it's retained for several days for auditing purposes before permanent removal. You can't see or use the resource during this time. This policy also applies to resources that belong to a deleted Automation account. The retention policy applies to all users and currently can't be customized. However, if you need to keep data for a longer period, you can [forward Azure Automation job data to Azure Monitor logs](automation-manage-send-joblogs-log-analytics.md).
The following table summarizes the retention policy for different resources.
You can export your runbooks to script files using either the Azure portal or th
### Integration modules
-You can't export integration modules from Azure Automation. You must make them available outside the Automation account.
+You can't export integration modules from Azure Automation; they have to be made available outside of the Automation account.
### Assets
You can export your DSC configurations to script files using either the Azure po
Geo-replication is standard in Azure Automation accounts. You choose a primary region when setting up your account. The internal Automation geo-replication service assigns a secondary region to the account automatically. The service then continuously backs up account data from the primary region to the secondary region. The full list of primary and secondary regions can be found at [Business continuity and disaster recovery (BCDR): Azure Paired Regions](../best-practices-availability-paired-regions.md).
-The backup created by the Automation geo-replication service is a complete copy of Automation assets, configurations, and the like. This backup can be used if the primary region goes down and loses data. In the unlikely event that data for a primary region is lost, Microsoft attempts to recover it. If the company can't recover the primary data, it uses automatic failover and informs you of the situation through your Azure subscription.
+The backup created by the Automation geo-replication service is a complete copy of Automation assets, configurations, and the like. This backup can be used if the primary region goes down and loses data. In the unlikely event that data for a primary region is lost, Microsoft attempts to recover it.
+
+> [!NOTE]
+> Azure Automation stores customer data in the region selected by the customer. For the purpose of BCDR, for all regions except Brazil South and Southeast Asia, Azure Automation data is stored in a different region (Azure paired region). Only for the Brazil South (Sao Paulo State) region of Brazil geography and Southeast Asia region (Singapore) of the Asia Pacific geography, we store Azure Automation data in the same region to accommodate data-residency requirements for these regions.
The Automation geo-replication service isn't accessible directly to external customers if there is a regional failure. If you want to maintain Automation configuration and runbooks during regional failures:
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/source-control-integration.md
Title: Use source control integration in Azure Automation
description: This article tells how to synchronize Azure Automation source control with other repositories. Previously updated : 11/12/2020 Last updated : 03/10/2021
Azure Automation supports three types of source control:
* A source control repository (GitHub or Azure Repos) * A [Run As account](automation-security-overview.md#run-as-accounts)
-* The [latest Azure modules](automation-update-azure-modules.md) in your Automation account, including the `Az.Accounts` module (Az module equivalent of `AzureRM.Profile`)
+* The [`AzureRM.Profile` module](/powershell/module/azurerm.profile/) must be imported into your Automation account. Note that the equivalent Az module (`Az.Accounts`) will not work with Automation source control.
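For example, one way to import the module from the PowerShell Gallery is the `New-AzAutomationModule` cmdlet; the following is a sketch, and the resource group and account names are placeholders:

```powershell
# Import AzureRM.Profile into the Automation account from the PowerShell Gallery.
# The resource group and Automation account names below are placeholders.
New-AzAutomationModule -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "AzureRM.Profile" `
    -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/AzureRM.Profile"
```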
> [!NOTE] > Source control synchronization jobs are run under the user's Automation account and are billed at the same rate as other Automation jobs.
azure-app-configuration Enable Dynamic Configuration Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-app.md
Title: Use dynamic configuration in a Spring Boot app
description: Learn how to dynamically update configuration data for Spring Boot apps -+ Previously updated : 08/06/2020 Last updated : 12/09/2020 -+ #Customer intent: As a Java Spring developer, I want to dynamically update my app to use the latest configuration data in App Configuration. # Tutorial: Use dynamic configuration in a Java Spring app
-The App Configuration Spring Boot client library supports updating a set of configuration settings on demand, without causing an application to restart. The client library caches each setting to avoid too many calls to the configuration store. The refresh operation doesn't update the value until the cached value has expired, even when the value has changed in the configuration store. The default expiration time for each request is 30 seconds. It can be overridden if necessary.
+App Configuration has two libraries for Spring. `spring-cloud-azure-appconfiguration-config` requires Spring Boot and takes a dependency on `spring-cloud-context`. `spring-cloud-azure-appconfiguration-config-web` requires Spring Web along with Spring Boot. Both libraries support manual triggering to check for refreshed configuration values. `spring-cloud-azure-appconfiguration-config-web` also adds support for automatic checking of configuration refresh.
-You can check for updated settings on demand by calling `AppConfigurationRefresh`'s `refreshConfigurations()` method.
+Refresh allows you to refresh your configuration values without having to restart your application, though it will cause all beans in the `@RefreshScope` to be recreated. The client library caches a hash id of the currently loaded configurations to avoid too many calls to the configuration store. The refresh operation doesn't update the value until the cached value has expired, even when the value has changed in the configuration store. The default expiration time for each request is 30 seconds. It can be overridden if necessary.
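For example, a bean in Spring Cloud's `@RefreshScope` is recreated when a refresh is triggered, so it picks up the latest configuration values. The following is a minimal sketch; the `config.message` key and the class name are assumed examples, not part of this library's API:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

// Recreated when a refresh is triggered, so it re-reads the latest values.
@Component
@RefreshScope
public class MessageProperties {

    // "config.message" is an assumed example key stored in App Configuration.
    @Value("${config.message}")
    private String message;

    public String getMessage() {
        return message;
    }
}
```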
-Alternatively, you can use the `spring-cloud-azure-appconfiguration-config-web` package, which takes a dependency on `spring-web` to handle automated refresh.
+`spring-cloud-azure-appconfiguration-config-web`'s automated refresh is triggered based on activity, specifically Spring Web's `ServletRequestHandledEvent`. If a `ServletRequestHandledEvent` is not triggered, `spring-cloud-azure-appconfiguration-config-web`'s automated refresh will not trigger a refresh, even if the cache expiration time has expired.
+
+## Use manual refresh
+
+App Configuration exposes `AppConfigurationRefresh`, which can be used to check whether the cache is expired and, if it is, trigger a refresh.
+
+```java
+import java.util.concurrent.Future;
+
+import com.microsoft.azure.spring.cloud.config.AppConfigurationRefresh;
+
+...
+
+@Autowired
+private AppConfigurationRefresh appConfigurationRefresh;
+
+...
+
+public void myConfigurationRefreshCheck() {
+ Future<Boolean> triggeredRefresh = appConfigurationRefresh.refreshConfigurations();
+}
+```
+
+`AppConfigurationRefresh`'s `refreshConfigurations()` returns a `Future<Boolean>` that resolves to `true` if a refresh has been triggered and `false` if not. `false` means the cache expiration time hasn't expired, there was no change, or another thread is currently checking for a refresh.
## Use automated refresh
Then, open the *pom.xml* file in a text editor, and add a `<dependency>` for `sp
mvn spring-boot:run ```
-1. Open a browser window, and go to the URL: `http://localhost:8080`. You will see the message associated with your key.
+1. Open a browser window, and go to the URL: `http://localhost:8080`. You will see the message associated with your key.
You can also use *curl* to test your application, for example:
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-functions Dotnet Isolated Process Developer Howtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-developer-howtos.md
At this point, you can run the `func start` command from the root of your projec
1. In the Azure Functions runtime output, make a note of the process ID of the host process, to which you'll attach a debugger. Also note the URL of your local function.
-1. From the **Debug** menu in Visual Studio, select **Attach to Process...**, locate the dotnet.exe process that matches the process ID, and select **Attach**.
+1. From the **Debug** menu in Visual Studio, select **Attach to Process...**, locate the process that matches the process ID, and select **Attach**.
:::image type="content" source="media/dotnet-isolated-process-developer-howtos/attach-to-process.png" alt-text="Attach the debugger to the Functions host process":::
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
A `HostBuilder` is used to build and return a fully initialized `IHost` instance
Having access to the host builder pipeline means that you can set any app-specific configurations during initialization. These configurations apply to your function app running in a separate process. To make changes to the functions host or trigger and binding configuration, you'll still need to use the [host.json file](functions-host-json.md).
-The following example shows how to add configuration `args`, which are read as command-line arguments:
+<!--The following example shows how to add configuration `args`, which are read as command-line arguments:
+ .ConfigureAppConfiguration(c =>
+ {
+ c.AddCommandLine(args);
+ })
+ :::
-The `ConfigureAppConfiguration` method is used to configure the rest of the build process and application. This example also uses an [IConfigurationBuilder](/dotnet/api/microsoft.extensions.configuration.iconfigurationbuilder?view=dotnet-plat-ext-5.0&preserve-view=true), which makes it easier to add multiple configuration items. Because `ConfigureAppConfiguration` returns the same instance of [`IConfiguration`](/dotnet/api/microsoft.extensions.configuration.iconfiguration?view=dotnet-plat-ext-5.0&preserve-view=true), you can also just call it multiple times to add multiple configuration items. You can access the full set of configurations from both [`HostBuilderContext.Configuration`](/dotnet/api/microsoft.extensions.hosting.hostbuildercontext.configuration?view=dotnet-plat-ext-5.0&preserve-view=true) and [`IHost.Services`](/dotnet/api/microsoft.extensions.hosting.ihost.services?view=dotnet-plat-ext-5.0&preserve-view=true).
+The `ConfigureAppConfiguration` method is used to configure the rest of the build process and application. This example also uses an [IConfigurationBuilder](/dotnet/api/microsoft.extensions.configuration.iconfigurationbuilder?view=dotnet-plat-ext-5.0&preserve-view=true), which makes it easier to add multiple configuration items. Because `ConfigureAppConfiguration` returns the same instance of [`IConfiguration`](/dotnet/api/microsoft.extensions.configuration.iconfiguration?view=dotnet-plat-ext-5.0&preserve-view=true), you can also just call it multiple times to add multiple configuration items.-->
+You can access the full set of configurations from both [`HostBuilderContext.Configuration`](/dotnet/api/microsoft.extensions.hosting.hostbuildercontext.configuration?view=dotnet-plat-ext-5.0&preserve-view=true) and [`IHost.Services`](/dotnet/api/microsoft.extensions.hosting.ihost.services?view=dotnet-plat-ext-5.0&preserve-view=true).
To learn more about configuration, see [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0&preserve-view=true).
This section describes the current state of the functional and behavioral differ
| Durable Functions | [Supported](durable/durable-functions-overview.md) | Not supported | | Imperative bindings | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported | | function.json artifact | Generated | Not generated |
-| Configuration | [host.json](functions-host-json.md) | [host.json](functions-host-json.md) and [custom initialization](#configuration) |
+| Configuration | [host.json](functions-host-json.md) | [host.json](functions-host-json.md) and custom initialization |
| Dependency injection | [Supported](functions-dotnet-dependency-injection.md) | [Supported](#dependency-injection) | | Middleware | Not supported | Supported | | Cold start times | Typical | Longer, because of just-in-time start-up. Run on Linux instead of Windows to reduce potential delays. |
azure-functions Durable Functions Entities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-entities.md
Entity functions define operations for reading and updating small pieces of stat
Entities provide a means for scaling out applications by distributing the work across many entities, each with a modestly sized state. > [!NOTE]
-> Entity functions and related functionality are only available in Durable Functions 2.0 and above. They are currently supported in .NET and JavaScript.
+> Entity functions and related functionality are only available in [Durable Functions 2.0](durable-functions-versions.md#migrate-from-1x-to-2x) and above. They are currently supported in .NET, JavaScript, and Python.
## General concepts
def entity_function(context: df.DurableEntityContext):
context.set_state(current_value) - main = df.Entity.create(entity_function) ```
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-overview.md
module.exports = df.entity(function(context) {
# [Python](#tab/python)
-Durable entities are currently not supported in Python.
+```python
+import logging
+import json
+
+import azure.functions as func
+import azure.durable_functions as df
++
+def entity_function(context: df.DurableEntityContext):
+
+ current_value = context.get_state(lambda: 0)
+ operation = context.operation_name
+ if operation == "add":
+ amount = context.get_input()
+ current_value += amount
+ context.set_result(current_value)
+ elif operation == "reset":
+ current_value = 0
+ elif operation == "get":
+ context.set_result(current_value)
+
+ context.set_state(current_value)
+
+main = df.Entity.create(entity_function)
+```
# [PowerShell](#tab/powershell)
module.exports = async function (context) {
# [Python](#tab/python)
-Durable entities are currently not supported in Python.
+```python
+import azure.functions as func
+import azure.durable_functions as df
++
+async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
+ client = df.DurableOrchestrationClient(starter)
+ entity_id = df.EntityId("Counter", "myCounter")
+    await client.signal_entity(entity_id, "add", 1)
+ return func.HttpResponse("Entity signaled")
+```
# [PowerShell](#tab/powershell)
Durable entities are currently not supported in PowerShell.
-Entity functions are available in [Durable Functions 2.0](durable-functions-versions.md) and above for C# and JavaScript.
+Entity functions are available in [Durable Functions 2.0](durable-functions-versions.md) and above for C#, JavaScript, and Python.
## The technology
azure-functions Functions Bindings Event Hubs Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-event-hubs-trigger.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-event-hubs-trigger](../../includes/functions-bindings-event-hubs-trigger.md)]
+## host.json settings
+
+The [host.json](functions-host-json.md#eventhub) file contains settings that control Event Hub trigger behavior. See the [host.json settings](functions-bindings-event-hubs.md#hostjson-settings) section for details regarding available settings.
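As a minimal sketch, the Event Hubs settings in a Functions 2.x *host.json* file look like the following. The values shown are the documented defaults at the time of writing; newer extension versions may use a different schema:

```json
{
    "version": "2.0",
    "extensions": {
        "eventHubs": {
            "batchCheckpointFrequency": 1,
            "eventProcessorOptions": {
                "maxBatchSize": 10,
                "prefetchCount": 300
            }
        }
    }
}
```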
+ ## Next steps - [Write events to an event stream (Output binding)](./functions-bindings-event-hubs-output.md)
azure-functions Functions Bindings Event Iot Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-event-iot-trigger.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-event-hubs](../../includes/functions-bindings-event-hubs-trigger.md)]
+## host.json properties
+
+The [host.json](functions-host-json.md#eventhub) file contains settings that control Event Hub trigger behavior. See the [host.json settings](functions-bindings-event-iot.md#hostjson-settings) section for details regarding available settings.
+ ## Next steps - [Write events to an event stream (Output binding)](./functions-bindings-event-iot-output.md)
azure-functions Functions Create Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-vnet.md
Title: Use private endpoints to integrate Azure Functions with a virtual network
-description: A step-by-step tutorial that shows you how to connect a function to an Azure virtual network and lock it down with private endpoints
+description: This tutorial shows you how to connect a function to an Azure virtual network and lock it down by using private endpoints.
Last updated 2/22/2021 #Customer intent: As an enterprise developer, I want to create a function that can connect to a virtual network with private endpoints to secure my function app.
-# Tutorial: Integrate Azure Functions with an Azure virtual network using private endpoints
+# Tutorial: Integrate Azure Functions with an Azure virtual network by using private endpoints
-This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network with private endpoints. You'll create a function with a storage account locked behind a virtual network that uses a service bus queue trigger.
+This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network by using private endpoints. You'll create a function by using a storage account that's locked behind a virtual network. The function uses a Service Bus queue trigger.
+
+In this tutorial, you'll:
> [!div class="checklist"]
-> * Create a function app in the Premium plan
-> * Create Azure resources (Service Bus, Storage Account, Virtual Network)
-> * Lock down your Storage account behind a private endpoint
-> * Lock down your Service Bus behind a private endpoint
-> * Deploy a function app with both Service Bus and HTTP triggers.
-> * Lock down your function app behind a private endpoint
-> * Test to see that your function app is secure behind the virtual network
-> * Clean up resources
+> * Create a function app in the Premium plan.
+> * Create Azure resources, such as the service bus, storage account, and virtual network.
+> * Lock down your storage account behind a private endpoint.
+> * Lock down your service bus behind a private endpoint.
+> * Deploy a function app that uses both the service bus and HTTP triggers.
+> * Lock down your function app behind a private endpoint.
+> * Test to see that your function app is secure inside the virtual network.
+> * Clean up resources.
## Create a function app in a Premium plan
-First, you create a .NET function app in the [Premium plan] as this tutorial will use C#. Other languages are also supported in Windows. This plan provides serverless scale while supporting virtual network integration.
+You'll create a .NET function app in the Premium plan because this tutorial uses C#. Other languages are also supported in Windows. The Premium plan provides serverless scale while supporting virtual network integration.
-1. From the Azure portal menu or the **Home** page, select **Create a resource**.
+1. On the Azure portal menu or the **Home** page, select **Create a resource**.
-1. In the **New** page, select **Compute** > **Function App**.
+1. On the **New** page, select **Compute** > **Function App**.
-1. On the **Basics** page, use the function app settings as specified in the following table:
+1. On the **Basics** page, use the following table to configure the function app settings.
| Setting | Suggested value | Description | | | - | -- |
- | **Subscription** | Your subscription | The subscription under which this new function app is created. |
- | **[Resource Group](../azure-resource-manager/management/overview.md)** | *myResourceGroup* | Name for the new resource group in which to create your function app. |
+ | **Subscription** | Your subscription | Subscription under which this new function app is created. |
+ | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you'll create your function app. |
| **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. |
- |**Publish**| Code | Option to publish code files or a Docker container. |
- | **Runtime stack** | .NET | This tutorial uses .NET |
- |**Region**| Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services your functions access. |
+ |**Publish**| Code | Choose to publish code files or a Docker container. |
+ | **Runtime stack** | .NET | This tutorial uses .NET. |
+ |**Region**| Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
-1. Select **Next: Hosting**. On the **Hosting** page, enter the following settings:
+1. Select **Next: Hosting**. On the **Hosting** page, enter the following settings.
| Setting | Suggested value | Description | | | - | -- |
- | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](./storage-considerations.md#storage-account-requirements). |
- |**Operating system**| Windows | This tutorial uses Windows |
- | **[Plan](./functions-scale.md)** | Premium | Hosting plan that defines how resources are allocated to your function app. Select **Premium**. By default, a new App Service plan is created. The default **Sku and size** is **EP1**, where EP stands for _elastic premium_. To learn more, see the [list of Premium SKUs](./functions-premium-plan.md#available-instance-skus).<br/>When running JavaScript functions on a Premium plan, you should choose an instance that has fewer vCPUs. For more information, see [Choose single-core Premium plans](./functions-reference-node.md#considerations-for-javascript-functions). |
+ | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters long. They may contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](./storage-considerations.md#storage-account-requirements). |
+ |**Operating system**| Windows | This tutorial uses Windows. |
+ | **[Plan](./functions-scale.md)** | Premium | Hosting plan that defines how resources are allocated to your function app. By default, when you select **Premium**, a new App Service plan is created. The default **Sku and size** is **EP1**, where *EP* stands for _elastic premium_. For more information, see the list of [Premium SKUs](./functions-premium-plan.md#available-instance-skus).<br/><br/>When you run JavaScript functions on a Premium plan, choose an instance that has fewer vCPUs. For more information, see [Choose single-core Premium plans](./functions-reference-node.md#considerations-for-javascript-functions). |
-1. Select **Next: Monitoring**. On the **Monitoring** page, enter the following settings:
+1. Select **Next: Monitoring**. On the **Monitoring** page, enter the following settings.
| Setting | Suggested value | Description | | | - | -- |
- | **[Application Insights](./functions-monitoring.md)** | Default | Creates an Application Insights resource of the same *App name* in the nearest supported region. By expanding this setting, you can change the **New resource name** or choose a different **Location** in an [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) to store your data. |
+ | **[Application Insights](./functions-monitoring.md)** | Default | Create an Application Insights resource of the same app name in the nearest supported region. Expand this setting if you need to change the **New resource name** or store your data in a different **Location** in an [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/). |
1. Select **Review + create** to review the app configuration selections.
-1. On the **Review + create** page, review your settings, and then select **Create** to provision and deploy the function app.
+1. On the **Review + create** page, review your settings. Then select **Create** to provision and deploy the function app.
-1. Select the **Notifications** icon in the upper-right corner of the portal and watch for the **Deployment succeeded** message.
+1. In the upper-right corner of the portal, select the **Notifications** icon and watch for the **Deployment succeeded** message.
1. Select **Go to resource** to view your new function app. You can also select **Pin to dashboard**. Pinning makes it easier to return to this function app resource from your dashboard.
-1. Congratulations! You've successfully created your premium function app!
+Congratulations! You've successfully created your premium function app.
## Create Azure resources
+Next, you'll create a storage account, a service bus, and a virtual network.
### Create a storage account
-A separate storage account from the one created in the initial creation of your function app is required for virtual networks.
+Your virtual networks will need a storage account that's separate from the one you created with your function app.
-1. From the Azure portal menu or the **Home** page, select **Create a resource**.
+1. On the Azure portal menu or the **Home** page, select **Create a resource**.
-1. In the New page, search for **Storage Account** and select **Create**
+1. On the **New** page, search for *storage account*. Then select **Create**.
-1. On the **Basics** tab, set the settings as specified in the table below. The rest can be left as default:
+1. On the **Basics** tab, use the following table to configure the storage account settings. All other settings can use the default values.
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
- | **Name** | mysecurestorage| The name of your storage account to which the private endpoint will be applied to. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your function app in. |
+ | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
+ | **Name** | mysecurestorage| The name of the storage account that the private endpoint will be applied to. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. |
-1. Select **Review + create**. After validation completes, select **Create**.
+1. Select **Review + create**. After validation finishes, select **Create**.
-### Create a Service Bus
+### Create a service bus
-1. From the Azure portal menu or the **Home** page, select **Create a resource**.
+1. On the Azure portal menu or the **Home** page, select **Create a resource**.
-1. In the New page, search for **Service Bus** and select **Create**.
+1. On the **New** page, search for *service bus*. Then select **Create**.
-1. On the **Basics** tab, set the settings as specified in the table below. The rest can be left as default:
+1. On the **Basics** tab, use the following table to configure the service bus settings. All other settings can use the default values.
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
- | **Name** | myServiceBus| The name of your service bus to which the private endpoint will be applied to. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your function app in. |
- | **Pricing tier** | Premium | Choose this tier to use private endpoints with Service Bus. |
+ | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
+ | **Name** | myServiceBus| The name of the service bus that the private endpoint will be applied to. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. |
+ | **Pricing tier** | Premium | Choose this tier to use private endpoints with Azure Service Bus. |
-1. Select **Review + create**. After validation completes, select **Create**.
+1. Select **Review + create**. After validation finishes, select **Create**.
### Create a virtual network
-Azure resources in this tutorial either integrate with or are placed within a virtual network. You'll use private endpoints to keep network traffic contained with the virtual network.
+The Azure resources in this tutorial either integrate with or are placed within a virtual network. You'll use private endpoints to contain network traffic within the virtual network.
The tutorial creates two subnets:

- **default**: Subnet for private endpoints. Private IP addresses are given from this subnet.
- **functions**: Subnet for Azure Functions virtual network integration. This subnet is delegated to the function app.
-Now, create the virtual network to which the function app integrates.
+Create the virtual network to which the function app integrates:
-1. From the Azure portal menu or the Home page, select **Create a resource**.
+1. On the Azure portal menu or the **Home** page, select **Create a resource**.
-1. In the New page, search for **Virtual Network** and select **Create**.
+1. On the **New** page, search for *virtual network*. Then select **Create**.
-1. On the **Basics** tab, use the virtual network settings as specified below:
+1. On the **Basics** tab, use the following table to configure the virtual network settings.
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
- | **Name** | myVirtualNet| The name of your virtual network to which your function app will connect. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your function app in. |
+ | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
+ | **Name** | myVirtualNet| The name of the virtual network that your function app will connect to. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. |
-1. On the **IP Addresses** tab, select **Add subnet**. Use the settings as specified below when adding a subnet:
+1. On the **IP Addresses** tab, select **Add subnet**. Use the following table to configure the subnet settings.
- :::image type="content" source="./media/functions-create-vnet/1-create-vnet-ip-address.png" alt-text="Screenshot of the create virtual network configuration view.":::
+ :::image type="content" source="./media/functions-create-vnet/1-create-vnet-ip-address.png" alt-text="Screenshot of the Create virtual network configuration view.":::
| Setting | Suggested value | Description | | | - | - | | **Subnet name** | functions | The name of the subnet your function app will connect to. |
- | **Subnet address range** | 10.0.1.0/24 | Notice our IPv4 address space in the image above is 10.0.0.0/16. If the above was 10.1.0.0/16, the recommended *Subnet address range* would be 10.1.1.0/24. |
+ | **Subnet address range** | 10.0.1.0/24 | The subnet address range. In the preceding image, notice that the IPv4 address space is 10.0.0.0/16. If the value were 10.1.0.0/16, the recommended subnet address range would be 10.1.1.0/24. |
+
+1. Select **Review + create**. After validation finishes, select **Create**.
-1. Select **Review + create**. After validation completes, select **Create**.
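If you prefer scripting, here's a hedged Azure CLI sketch of the same virtual network and subnets. It assumes the suggested names from the preceding tables; substitute your own region for the placeholder:

```bash
# Create the virtual network with the default subnet for private endpoints.
az network vnet create \
  --resource-group myResourceGroup \
  --name myVirtualNet \
  --location <myFunctionRegion> \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name default \
  --subnet-prefixes 10.0.0.0/24

# Add the functions subnet used for virtual network integration.
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVirtualNet \
  --name functions \
  --address-prefixes 10.0.1.0/24
```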
+## Lock down your storage account
-## Lock down your storage account with private endpoints
+Azure private endpoints are used to connect to specific Azure resources by using a private IP address. This connection ensures that network traffic remains within the chosen virtual network and access is available only for specific resources.
-Azure Private Endpoints are used to connect to specific Azure resources using a private IP address. This connection ensures that network traffic remains within the chosen virtual network, and access is available only for specific resources. Now, create the private endpoints for Azure File storage and Azure Blob storage with your storage account.
+Create the private endpoints for Azure Files storage and Azure Blob Storage by using your storage account:
-1. In your new storage account, select **Networking** in the left menu.
+1. In your new storage account, in the menu on the left, select **Networking**.
-1. Select the **Private endpoint connections** tab, and select **Private endpoint**.
+1. On the **Private endpoint connections** tab, select **Private endpoint**.
- :::image type="content" source="./media/functions-create-vnet/2-navigate-private-endpoint-store.png" alt-text="Screenshot of how to navigate to create private endpoints for the storage account.":::
+ :::image type="content" source="./media/functions-create-vnet/2-navigate-private-endpoint-store.png" alt-text="Screenshot of how to create private endpoints for the storage account.":::
-1. On the **Basics** tab, use the private endpoint settings as specified below:
+1. On the **Basics** tab, use the private endpoint settings shown in the following table.
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. | | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. | | | **Name** | file-endpoint | The name of the private endpoint for files from your storage account. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your storage account in. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region where you created your storage account. |
-1. On the **Resource** tab, use the private endpoint settings as specified below:
+1. On the **Resource** tab, use the private endpoint settings shown in the following table.
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.Storage/storageAccounts | This is the resource type for storage accounts. |
- | **Resource** | mysecurestorage | The storage account you just created |
- | **Target sub-resource** | file | This private endpoint will be used for files from the storage account. |
+ | **Resource type** | Microsoft.Storage/storageAccounts | The resource type for storage accounts. |
+ | **Resource** | mysecurestorage | The storage account you created. |
+ | **Target sub-resource** | file | The private endpoint that will be used for files from the storage account. |
-1. On the **Configuration** tab, choose **default** for the Subnet setting.
+1. On the **Configuration** tab, for the **Subnet** setting, choose **default**.
-1. Select **Review + create**. After validation completes, select **Create**. Resources in the virtual network can now talk to storage files.
+1. Select **Review + create**. After validation finishes, select **Create**. Resources in the virtual network can now communicate with storage files.
-1. Create another private endpoint for blobs. For the **Resources** tab, use the below settings. For all other settings, use the same settings from the file private endpoint creation steps you just followed.
+1. Create another private endpoint for blobs. On the **Resources** tab, use the settings shown in the following table. For all other settings, use the same values you used to create the private endpoint for files.
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.Storage/storageAccounts | This is the resource type for storage accounts. |
- | **Resource** | mysecurestorage | The storage account you just created |
- | **Target sub-resource** | blob | This private endpoint will be used for blobs from the storage account. |
+ | **Resource type** | Microsoft.Storage/storageAccounts | The resource type for storage accounts. |
+ | **Resource** | mysecurestorage | The storage account you created. |
+ | **Target sub-resource** | blob | The private endpoint that will be used for blobs from the storage account. |
-## Lock down your service bus with a private endpoint
+## Lock down your service bus
-Now, create the private endpoint for your Azure Service Bus.
+Create the private endpoint to lock down your service bus:
-1. In your new service bus, select **Networking** in the left menu.
+1. In your new service bus, in the menu on the left, select **Networking**.
-1. Select the **Private endpoint connections** tab, and select **Private endpoint**.
+1. On the **Private endpoint connections** tab, select **Private endpoint**.
- :::image type="content" source="./media/functions-create-vnet/3-navigate-private-endpoint-service-bus.png" alt-text="Screenshot of how to navigate to private endpoints for service bus.":::
+ :::image type="content" source="./media/functions-create-vnet/3-navigate-private-endpoint-service-bus.png" alt-text="Screenshot of how to go to private endpoints for the service bus.":::
-1. On the **Basics** tab, use the private endpoint settings as specified below:
+1. On the **Basics** tab, use the private endpoint settings shown in the following table.
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
+ | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
| **Name** | sb-endpoint | The name of the private endpoint for your service bus. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region you created your storage account in. |
+ | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your service bus. |
-1. On the **Resource** tab, use the private endpoint settings as specified below:
+1. On the **Resource** tab, use the private endpoint settings shown in the following table.
| Setting | Suggested value | Description | | | - | - | | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.ServiceBus/namespaces | This is the resource type for Service Bus. |
- | **Resource** | myServiceBus | The Service Bus you created earlier in the tutorial. |
- | **Target subresource** | namespace | This private endpoint will be used for the namespace from the service bus. |
+ | **Resource type** | Microsoft.ServiceBus/namespaces | The resource type for the service bus. |
+ | **Resource** | myServiceBus | The service bus you created earlier in the tutorial. |
+ | **Target subresource** | namespace | The private endpoint that will be used for the namespace from the service bus. |
+
+1. On the **Configuration** tab, for the **Subnet** setting, choose **default**.
-1. On the **Configuration** tab, choose **default** for the Subnet setting.
+1. Select **Review + create**. After validation finishes, select **Create**.
-1. Select **Review + create**. After validation completes, select **Create**. Resources in the virtual network can now talk to service bus.
+Resources in the virtual network can now communicate with the service bus.
## Create a file share
-1. In the storage account you created, select **File shares** in the left menu.
+1. In the storage account you created, in the menu on the left, select **File shares**.
-1. Select **+ File shares**. Provide **files** as the name for the file share for the purposes of this tutorial.
+1. Select **+ File shares**. For the purposes of this tutorial, name the file share *files*.
:::image type="content" source="./media/functions-create-vnet/4-create-file-share.png" alt-text="Screenshot of how to create a file share in the storage account.":::
-## Get storage account connection string
+1. Select **Create**.
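As an alternative to the portal, a hedged Azure CLI sketch that creates the same file share through the management plane (which works even when the storage account's data endpoints are locked down); names match this tutorial:

```bash
# Create the file share through the management plane (Microsoft.Storage).
az storage share-rm create \
  --resource-group myResourceGroup \
  --storage-account mysecurestorage \
  --name files
```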
-1. In the storage account you created, select **Access keys** in the left menu.
+## Get the storage account connection string
-1. Select **Show keys**. Copy the connection string of key1, and save it. We'll need this connection string later when configuring the app settings.
+1. In the storage account you created, in the menu on the left, select **Access keys**.
+
+1. Select **Show keys**. Copy and save the connection string of **key1**. You'll need this connection string when you configure the app settings.
:::image type="content" source="./media/functions-create-vnet/5-get-store-connection-string.png" alt-text="Screenshot of how to get a storage account connection string."::: ## Create a queue
-This will be the queue for which your Azure Functions Service Bus Trigger will get events from.
+Create the queue where your Azure Functions service bus trigger will get events:
-1. In your service bus, select **Queues** in the left menu.
+1. In your service bus, in the menu on the left, select **Queues**.
-1. Select **Shared access policies**. Provide **queue** as the name for the queue for the purposes of this tutorial.
+1. Select **+ Queue**. For the purposes of this tutorial, name the queue *queue*.
:::image type="content" source="./media/functions-create-vnet/6-create-queue.png" alt-text="Screenshot of how to create a service bus queue.":::
-## Get service bus connection string
+1. Select **Create**.
+
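Equivalently, a hedged Azure CLI sketch for creating the queue, using the names from this tutorial:

```bash
# Create the service bus queue that the function's trigger will read from.
az servicebus queue create \
  --resource-group myResourceGroup \
  --namespace-name myServiceBus \
  --name queue
```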
+## Get a service bus connection string
-1. In your service bus, select **Shared access policies** in the left menu.
+1. In your service bus, in the menu on the left, select **Shared access policies**.
-1. Select **RootManageSharedAccessKey**. Copy the **Primary Connection String**, and save it. We'll need this connection string later when configuring the app settings.
+1. Select **RootManageSharedAccessKey**. Copy and save the **Primary Connection String**. You'll need this connection string when you configure the app settings.
:::image type="content" source="./media/functions-create-vnet/7-get-service-bus-connection-string.png" alt-text="Screenshot of how to get a service bus connection string.":::
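If you'd rather retrieve both connection strings from a script, here's a sketch with the Azure CLI (resource names assume the values used in this tutorial):

```bash
# Storage account connection string (same value as key1 in the portal).
az storage account show-connection-string \
  --resource-group myResourceGroup \
  --name mysecurestorage \
  --query connectionString --output tsv

# Service bus connection string for the RootManageSharedAccessKey policy.
az servicebus namespace authorization-rule keys list \
  --resource-group myResourceGroup \
  --namespace-name myServiceBus \
  --name RootManageSharedAccessKey \
  --query primaryConnectionString --output tsv
```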
-## Integrate function app with your virtual network
+## Integrate the function app
-To use your function app with virtual networks, you'll need to join it to a subnet. We use a specific subnet for the Azure Functions virtual network integration and the default sub net for all other private endpoints created in this tutorial.
+To use your function app with virtual networks, you need to join it to a subnet. You'll use a specific subnet for the Azure Functions virtual network integration. You'll use the default subnet for other private endpoints you create in this tutorial.
-1. In your function app, select **Networking** in the left menu.
+1. In your function app, in the menu on the left, select **Networking**.
-1. Select **Click here to configure** under VNet Integration.
+1. Under **VNet Integration**, select **Click here to configure**.
- :::image type="content" source="./media/functions-create-vnet/8-connect-app-vnet.png" alt-text="Screenshot of how to navigate to virtual network integration.":::
+ :::image type="content" source="./media/functions-create-vnet/8-connect-app-vnet.png" alt-text="Screenshot of how to go to virtual network integration.":::
-1. Select **Add VNet**
+1. Select **Add VNet**.
-1. In the blade that opens up under **Virtual Network**, select the virtual network you created earlier.
+1. Under **Virtual Network**, select the virtual network you created earlier.
-1. Select the **Subnet** we created earlier called **functions**. Your function app is now integrated with your virtual network!
+1. Select the **functions** subnet you created earlier. Your function app is now integrated with your virtual network!
:::image type="content" source="./media/functions-create-vnet/9-connect-app-subnet.png" alt-text="Screenshot of how to connect a function app to a subnet.":::
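The same integration can be scripted. A hedged Azure CLI sketch, where the function app name is a placeholder:

```bash
# Join the function app to the delegated "functions" subnet.
az functionapp vnet-integration add \
  --resource-group myResourceGroup \
  --name <MY_APP_NAME> \
  --vnet myVirtualNet \
  --subnet functions
```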
-## Configure your function app settings for private endpoints
+## Configure your function app settings
-1. In your function app, select **Configuration** from the left menu.
+1. In your function app, in the menu on the left, select **Configuration**.
-1. To use your function app with virtual networks, the following app settings will need to be updated. Select **+ New application setting** or the pencil by **Edit** in the rightmost column of the app settings table as appropriate. When done, select **Save**.
+1. To use your function app with virtual networks, update the app settings shown in the following table. To add or edit a setting, select **+ New application setting** or the **Edit** icon in the rightmost column of the app settings table. When you finish, select **Save**.
:::image type="content" source="./media/functions-create-vnet/10-configure-app-settings.png" alt-text="Screenshot of how to configure function app settings for private endpoints."::: | Setting | Suggested value | Description | | | - | - |
- | **AzureWebJobsStorage** | mysecurestorageConnectionString | The connection string of the storage account you created. This is the storage connection string from [Get storage account connection string](#get-storage-account-connection-string). By changing this setting, your function app will now use the secure storage account for normal operations at runtime. |
- | **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** | mysecurestorageConnectionString | The connection string of the storage account you created. By changing this setting, your function app will now use the secure storage account for Azure Files, which are used when deploying. |
- | **WEBSITE_CONTENTSHARE** | files | The name of the file share you created in the storage account. This app setting is used in conjunction with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. |
- | **SERVICEBUS_CONNECTION** | myServiceBusConnectionString | Create an app setting for the connection string of your service bus. This is the storage connection string from [Get service bus connection string](#get-service-bus-connection-string).|
- | **WEBSITE_CONTENTOVERVNET** | 1 | Create this app setting. A value of 1 enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. |
- | **WEBSITE_DNS_SERVER** | 168.63.129.16 | Create this app setting. Once your app integrates with a virtual network, it will use the same DNS server as the virtual network. This is one of two settings needed have your function app work with Azure DNS private zones and are required when using private endpoints. These settings will send all outbound calls from your app into your virtual network. |
- | **WEBSITE_VNET_ROUTE_ALL** | 1 | Create this app setting. Once your app integrates with a virtual network, it will use the same DNS server as the virtual network. This is one of two settings needed have your function app work with Azure DNS private zones and are required when using private endpoints. These settings will send all outbound calls from your app into your virtual network. |
+ | **AzureWebJobsStorage** | mysecurestorageConnectionString | The connection string of the storage account you created. This storage connection string is from the [Get the storage account connection string](#get-the-storage-account-connection-string) section. This setting allows your function app to use the secure storage account for normal operations at runtime. |
+ | **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** | mysecurestorageConnectionString | The connection string of the storage account you created. This setting allows your function app to use the secure storage account for Azure Files, which is used during deployment. |
+ | **WEBSITE_CONTENTSHARE** | files | The name of the file share you created in the storage account. Use this setting with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. |
+ | **SERVICEBUS_CONNECTION** | myServiceBusConnectionString | Create this app setting for the connection string of your service bus. This storage connection string is from the [Get a service bus connection string](#get-a-service-bus-connection-string) section.|
+ | **WEBSITE_CONTENTOVERVNET** | 1 | Create this app setting. A value of 1 enables your function app to scale when your storage account is restricted to a virtual network. |
+ | **WEBSITE_DNS_SERVER** | 168.63.129.16 | Create this app setting. When your app integrates with a virtual network, it will use the same DNS server as the virtual network. Your function app needs this setting so it can work with Azure DNS private zones. It's required when you use private endpoints. This setting and WEBSITE_VNET_ROUTE_ALL will send all outbound calls from your app into your virtual network. |
+ | **WEBSITE_VNET_ROUTE_ALL** | 1 | Create this app setting. When your app integrates with a virtual network, it uses the same DNS server as the virtual network. Your function app needs this setting so it can work with Azure DNS private zones. It's required when you use private endpoints. This setting and WEBSITE_DNS_SERVER will send all outbound calls from your app into your virtual network. |
-1. Staying on the **Configuration** view, select the **Function runtime settings** tab.
+1. In the **Configuration** view, select the **Function runtime settings** tab.
-1. Set **Runtime Scale Monitoring** to **On**, and select **Save**. Runtime driven scaling allows you to connect non-HTTP trigger functions to services running inside your virtual network.
+1. Set **Runtime Scale Monitoring** to **On**. Then select **Save**. Runtime-driven scaling allows you to connect non-HTTP trigger functions to services that run inside your virtual network.
- :::image type="content" source="./media/functions-create-vnet/11-enable-runtime-scaling.png" alt-text="Screenshot of how to enable Runtime Driven Scaling for Azure Functions.":::
+ :::image type="content" source="./media/functions-create-vnet/11-enable-runtime-scaling.png" alt-text="Screenshot of how to enable runtime-driven scaling for Azure Functions.":::
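As a scripted alternative to the app settings table above, a hedged Azure CLI sketch; the function app name and connection strings are placeholders you'd substitute:

```bash
az functionapp config appsettings set \
  --resource-group myResourceGroup \
  --name <MY_APP_NAME> \
  --settings \
    "AzureWebJobsStorage=<mysecurestorageConnectionString>" \
    "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=<mysecurestorageConnectionString>" \
    "WEBSITE_CONTENTSHARE=files" \
    "SERVICEBUS_CONNECTION=<myServiceBusConnectionString>" \
    "WEBSITE_CONTENTOVERVNET=1" \
    "WEBSITE_DNS_SERVER=168.63.129.16" \
    "WEBSITE_VNET_ROUTE_ALL=1"
```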
-## Deploy a service bus trigger and http trigger to your function app
+## Deploy a service bus trigger and HTTP trigger
-1. In GitHub, browse to the following sample repository, which contains a function app with two functions, an HTTP Trigger and a Service Bus Queue Trigger.
+1. In GitHub, go to the following sample repository. It contains a function app with two functions: an HTTP trigger and a service bus queue trigger.
<https://github.com/Azure-Samples/functions-vnet-tutorial>
-1. At the top of the page, select the **Fork** button to create a fork of this repository in your own GitHub account or organization.
+1. At the top of the page, select **Fork** to create a fork of this repository in your own GitHub account or organization.
-1. In your function app, select **Deployment Center** from the left menu. Then, select **Settings**.
+1. In your function app, in the menu on the left, select **Deployment Center**. Then select **Settings**.
-1. On the **Settings** tab, use the deployment settings as specified below:
+1. On the **Settings** tab, use the deployment settings shown in the following table.
| Setting | Suggested value | Description | | | - | - |
- | **Source** | GitHub | You should have created a GitHub repo with the sample code in step 2. |
- | **Organization** | myOrganization | This is the organization your repo is checked into, usually your account. |
- | **Repository** | myRepo | The repo you created with the sample code. |
- | **Branch** | main | This is the repo you just created, so use the main branch. |
+ | **Source** | GitHub | You should have created a GitHub repository for the sample code in step 2. |
+ | **Organization** | myOrganization | The organization your repo is checked into. It's usually your account. |
+ | **Repository** | myRepo | The repository you created for the sample code. |
+ | **Branch** | main | The main branch of the repository you created. |
| **Runtime stack** | .NET | The sample code is in C#. | 1. Select **Save**. :::image type="content" source="./media/functions-create-vnet/12-deploy-portal.png" alt-text="Screenshot of how to deploy Azure Functions code through the portal.":::
-1. Your initial deployment may take a few minutes. You will see a **Success (Active)** Status message in the **Logs** tab when your app is successfully deployed. If needed, refresh the page.
+1. Your initial deployment might take a few minutes. When your app is successfully deployed, on the **Logs** tab, you see a **Success (Active)** status message. If necessary, refresh the page.
-1. Congratulations! You have successfully deployed your sample function app.
+Congratulations! You've successfully deployed your sample function app.
-## Lock down your function app with a private endpoint
+## Lock down your function app
-Now, create the private endpoint for your function app. This private endpoint will connect your function app privately and securely to your virtual network using a private IP address. For more information on private endpoints, go to the [private endpoints documentation](https://docs.microsoft.com/azure/private-link/private-endpoint-overview).
+Now create the private endpoint to lock down your function app. This private endpoint will connect your function app privately and securely to your virtual network by using a private IP address.
-1. In your function app, select **Networking** in the left menu.
+For more information, see the [private endpoint documentation](https://docs.microsoft.com/azure/private-link/private-endpoint-overview).
-1. Select **Click here to configure** under Private Endpoint Connections.
+1. In your function app, in the menu on the left, select **Networking**.
- :::image type="content" source="./media/functions-create-vnet/14-navigate-app-private-endpoint.png" alt-text="Screenshot of how to navigate to a Function App Private Endpoint.":::
+1. Under **Private Endpoint Connections**, select **Configure your private endpoint connections**.
+
+ :::image type="content" source="./media/functions-create-vnet/14-navigate-app-private-endpoint.png" alt-text="Screenshot of how to navigate to a function app private endpoint.":::
1. Select **Add**.
-1. On the menu that opens up, use the private endpoint settings as specified below:
+1. On the pane that opens, use the following private endpoint settings:
- :::image type="content" source="./media/functions-create-vnet/15-create-app-private-endpoint.png" alt-text="Screenshot of how to create a Function App private endpoint.":::
+ :::image type="content" source="./media/functions-create-vnet/15-create-app-private-endpoint.png" alt-text="Screenshot of how to create a function app private endpoint. The name is functionapp-endpoint. The subscription is 'Private Test Sub CACHHAI'. The virtual network is MyVirtualNet-tutorial. The subnet is default.":::
-1. Select **Ok** to add the private endpoint. Congratulations! You've successfully secured your function app, service bus, and storage account with private endpoints!
+1. Select **OK** to add the private endpoint.
+
+Congratulations! You've successfully secured your function app, service bus, and storage account by adding private endpoints!
-### Test your locked down function app
+### Test your locked-down function app
-1. In your function app, select **Functions** from the left menu.
+1. In your function app, in the menu on the left, select **Functions**.
-1. Select the **ServiceBusQueueTrigger**.
+1. Select **ServiceBusQueueTrigger**.
-1. From the left menu, select **Monitor**. you'll see that you're unable to monitor your app. This is because your browser doesn't have access to the virtual network, so it can't directly access resources within the virtual network. We'll now demonstrate another method by which you can still monitor your function, Application Insights.
+1. In the menu on the left, select **Monitor**.
+
+You'll see that you can't monitor your app. Your browser doesn't have access to the virtual network, so it can't directly access resources within the virtual network.
+
+Here's an alternative way to monitor your function by using Application Insights:
-1. In your function app, select **Application Insights** from the left menu and select **View Application Insights data**.
+1. In your function app, in the menu on the left, select **Application Insights**. Then select **View Application Insights data**.
- :::image type="content" source="./media/functions-create-vnet/16-app-insights.png" alt-text="Screenshot of how to view application insights for a Function App.":::
+ :::image type="content" source="./media/functions-create-vnet/16-app-insights.png" alt-text="Screenshot of how to view application insights for a function app.":::
-1. Select **Live metrics** from the left menu.
+1. In the menu on the left, select **Live metrics**.
-1. Open a new tab. In your Service Bus, select **Queues** from the left menu.
+1. Open a new tab. In your service bus, in the menu on the left, select **Queues**.
1. Select your queue.
-1. Select **Service Bus Explorer** from the left menu. Under **Send**, choose **Text/Plain** as the **Content Type** and enter a message.
+1. In the menu on the left, select **Service Bus Explorer**. Under **Send**, for **Content Type**, choose **Text/Plain**. Then enter a message.
1. Select **Send** to send the message.
- :::image type="content" source="./media/functions-create-vnet/17-send-service-bus-message.png" alt-text="Screenshot of how to send Service Bus messages using portal.":::
+ :::image type="content" source="./media/functions-create-vnet/17-send-service-bus-message.png" alt-text="Screenshot of how to send service bus messages by using the portal.":::
-1. On the tab with **Live metrics** open, you should see that your Service Bus queue trigger has triggered. If it hasn't, resend the message from **Service Bus Explorer**
+1. On the **Live metrics** tab, you should see that your service bus queue trigger has fired. If it hasn't, resend the message from **Service Bus Explorer**.
- :::image type="content" source="./media/functions-create-vnet/18-hello-world.png" alt-text="Screenshot of how to view messages using live metrics for function apps.":::
+ :::image type="content" source="./media/functions-create-vnet/18-hello-world.png" alt-text="Screenshot of how to view messages by using live metrics for function apps.":::
-1. Congratulations! You've successfully tested your function app set up with private endpoints!
+Congratulations! You've successfully tested your function app setup with private endpoints.
-### Private DNS Zones
-Using a private endpoint to connect to Azure resources means connecting to a private IP address instead of the public endpoint. Existing Azure services are configured to use existing DNS to connect to the public endpoint. The DNS configuration will need to be overridden to connect to the private endpoint.
+## Understand private DNS zones
+You've used private endpoints to connect to Azure resources, so you're connecting to a private IP address instead of the public endpoint. Existing Azure services are configured to use public DNS resolution to reach the public endpoint, so the DNS configuration must be overridden to resolve to the private endpoint instead.
-A private DNS zone was created for each Azure resource configured with a private endpoint. A DNS A record is created for each private IP address associated with the private endpoint.
+A private DNS zone is created for each Azure resource that was configured with a private endpoint. A DNS A record is created for each private IP address associated with the private endpoint.
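One way to see the override in action is to resolve a locked-down resource's name from inside and outside the virtual network. A hedged sketch, assuming the storage account name from this tutorial:

```bash
# Run from a VM inside the virtual network: the storage account name should
# resolve to a private IP from the default subnet (for example, 10.0.0.x).
# Run from outside the virtual network, it resolves to a public IP instead.
nslookup mysecurestorage.blob.core.windows.net
```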
The following DNS zones were created in this tutorial:
## Next steps
-In this tutorial, you created a Premium function app, storage account, and Service Bus, and you secured them all behind private endpoints! Learn more about the various networking features available below:
+In this tutorial, you created a Premium function app, storage account, and service bus. You secured all of these resources behind private endpoints.
+
+Use the following links to learn more about the available networking features:
> [!div class="nextstepaction"]
-> [Learn more about the networking options in Functions](./functions-networking-options.md)
+> [Networking options in Azure Functions](./functions-networking-options.md)
-[Premium plan]: functions-premium-plan.md
+
+> [!div class="nextstepaction"]
+> [Azure Functions Premium plan](./functions-premium-plan.md)
azure-functions Functions Host Json V1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-host-json-v1.md
Configuration settings for the [Azure Cosmos DB trigger and bindings](functions-
## eventHub
-Configuration settings for [Event Hub triggers and bindings](functions-bindings-event-hubs-trigger.md#functions-1x).
+Configuration settings for [Event Hub triggers and bindings](functions-bindings-event-hubs.md#functions-1x).
## functions
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-host-json.md
Configuration setting can be found in [bindings for Durable Functions](durable/d
## eventHub
-Configuration settings can be found in [Event Hub triggers and bindings](functions-bindings-event-hubs-trigger.md#host-json).
+Configuration settings can be found in [Event Hub triggers and bindings](functions-bindings-event-hubs.md#host-json).
## extensions
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-node.md
description: Understand how to develop functions by using JavaScript.
ms.assetid: 45dedd78-3ff9-411f-bb4b-16d29a11384c Previously updated : 11/17/2020 Last updated : 03/07/2021 # Azure Functions JavaScript developer guide
The following table shows current supported Node.js versions for each major vers
| Functions version | Node version (Windows) | Node Version (Linux) | ||| |
+| 3.x (recommended) | `~14` (recommended)<br/>`~12`<br/>`~10` | `node|14` (recommended)<br/>`node|12`<br/>`node|10` |
+| 2.x | `~12`<br/>`~10`<br/>`~8` | `node|10`<br/>`node|8` |
| 1.x | 6.11.2 (locked by the runtime) | n/a |
-| 2.x | `~8`<br/>`~10` (recommended)<br/>`~12` | `node|8`<br/>`node|10` (recommended) |
-| 3.x | `~10`<br/>`~12` (recommended)<br/>`~14` (preview) | `node|10`<br/>`node|12` (recommended)<br/>`node|14` (preview) |
You can see the current version that the runtime is using by logging `process.version` from any function. ### Setting the Node version
-For Windows function apps, target the version in Azure by setting the `WEBSITE_NODE_DEFAULT_VERSION` [app setting](functions-how-to-use-azure-function-app-settings.md#settings) to a supported LTS version, such as `~12`.
+For Windows function apps, target the version in Azure by setting the `WEBSITE_NODE_DEFAULT_VERSION` [app setting](functions-how-to-use-azure-function-app-settings.md#settings) to a supported LTS version, such as `~14`.
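For example, a sketch of setting that app setting with the Azure CLI (placeholders as in the Linux example that follows):

```bash
az functionapp config appsettings set \
  --name "<MY_APP_NAME>" \
  --resource-group "<MY_RESOURCE_GROUP_NAME>" \
  --settings "WEBSITE_NODE_DEFAULT_VERSION=~14"
```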
For Linux function apps, run the following Azure CLI command to update the Node version. ```bash
-az functionapp config set --linux-fx-version "node|12" --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>"
+az functionapp config set --linux-fx-version "node|14" --name "<MY_APP_NAME>" --resource-group "<MY_RESOURCE_GROUP_NAME>"
``` ## Dependency management
module.exports = async function (context, myTimer) {
}; ```
+## <a name="ecmascript-modules"></a>ECMAScript modules (preview)
+
+> [!NOTE]
+> Because ECMAScript modules are currently labeled *experimental* in Node.js 14, they're available only as a preview feature in Azure Functions apps running Node.js 14. Until Node.js support for ECMAScript modules becomes *stable*, expect possible changes to the API or behavior.
+
+[ECMAScript modules](https://nodejs.org/docs/latest-v14.x/api/esm.html#esm_modules_ecmascript_modules) (ES modules) are the new official standard module system for Node.js. So far, the code samples in this article use the CommonJS syntax. When running Azure Functions in Node.js 14, you can choose to write your functions using ES modules syntax.
+
+To use ES modules in a function, change its filename to use a `.mjs` extension. The following *index.mjs* file example is an HTTP-triggered function that uses ES modules syntax to import the `uuid` library and return a value.
+
+```js
+import { v4 as uuidv4 } from 'uuid';
+
+export default async function (context, req) {
+ context.res.body = uuidv4();
+};
+```
+ ## Configure function entry point The `function.json` properties `scriptFile` and `entryPoint` can be used to configure the location and name of your exported function. These properties can be important when your JavaScript is transpiled.
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
Some connections in Azure Functions are configured to use an identity instead of
Identity-based connections are supported by the following trigger and binding extensions:
-| Extension name | Extension version | Supports identity-based connections in the Consumption plan |
+| Extension name | Extension version | Supported in the Consumption plan |
|-|-|| | Azure Blob | [Version 5.0.0-beta1 or later](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher) | No | | Azure Queue | [Version 5.0.0-beta1 or later](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) | No |
+| Azure Event Hubs | [Version 5.0.0-beta1 or later](./functions-bindings-event-hubs.md#event-hubs-extension-5x-and-higher) | No |
> [!NOTE] > Support for identity-based connections is not yet available for storage connections used by the Functions runtime for core behaviors. This means that the `AzureWebJobsStorage` setting must be a connection string.
Identity-based connections are supported by the following trigger and binding ex
An identity-based connection for an Azure service accepts the following properties:
-| Property | Environment variable | Is Required | Description |
+| Property | Required for Extensions | Environment variable | Description |
|||||
-| Service URI | `<CONNECTION_NAME_PREFIX>__serviceUri` | Yes | The data plane URI of the service to which you are connecting. |
+| Service URI | Azure Blob, Azure Queue | `<CONNECTION_NAME_PREFIX>__serviceUri` | The data plane URI of the service to which you are connecting. |
+| Fully Qualified Namespace | Event Hubs | `<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace` | The fully qualified Event Hubs namespace, for example `<myNamespace>.servicebus.windows.net`. |
Additional options may be supported for a given connection type. Please refer to the documentation for the component making the connection.
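For instance, a hedged `local.settings.json` sketch for an Event Hubs identity-based connection when running locally with a developer identity; the namespace value is a placeholder:

```json
{
  "IsEncrypted": false,
  "Values": {
    "<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace": "<myNamespace>.servicebus.windows.net"
  }
}
```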
In some cases, you may wish to specify use of a different identity. You can add
> [!NOTE] > The following configuration options are not supported when hosted in the Azure Functions service.
-To connect using an Azure Active Directory service principal with a client ID and secret, define the connection with the following properties:
+To connect using an Azure Active Directory service principal with a client ID and secret, define the connection with the following required properties in addition to the [Connection properties](#connection-properties) above:
-| Property | Environment variable | Is Required | Description |
-|||||
-| Service URI | `<CONNECTION_NAME_PREFIX>__serviceUri` | Yes | The data plane URI of the service to which you are connecting. |
-| Tenant ID | `<CONNECTION_NAME_PREFIX>__tenantId` | Yes | The Azure Active Directory tenant (directory) ID. |
-| Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | Yes | The client (application) ID of an app registration in the tenant. |
-| Client secret | `<CONNECTION_NAME_PREFIX>__clientSecret` | Yes | A client secret that was generated for the app registration. |
+| Property | Environment variable | Description |
+||||
+| Tenant ID | `<CONNECTION_NAME_PREFIX>__tenantId` | The Azure Active Directory tenant (directory) ID. |
+| Client ID | `<CONNECTION_NAME_PREFIX>__clientId` | The client (application) ID of an app registration in the tenant. |
+| Client secret | `<CONNECTION_NAME_PREFIX>__clientSecret` | A client secret that was generated for the app registration. |
+
+Here's an example of the `local.settings.json` properties required for an identity-based connection with Azure Blob:
+```json
+{
+ "IsEncrypted": false,
+ "Values": {
+ "<CONNECTION_NAME_PREFIX>__serviceUri": "<serviceUri>",
+ "<CONNECTION_NAME_PREFIX>__tenantId": "<tenantId>",
+ "<CONNECTION_NAME_PREFIX>__clientId": "<clientId>",
+ "<CONNECTION_NAME_PREFIX>__clientSecret": "<clientSecret>"
+ }
+}
+```
#### Grant permission to the identity
azure-government Documentation Government Get Started Connect With Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-get-started-connect-with-ps.md
When you start PowerShell, you have to tell Azure PowerShell to connect to Azure
| | | | [Azure](/powershell/module/az.accounts/Connect-AzAccount) commands |`Connect-AzAccount -EnvironmentName AzureUSGovernment` | | [Azure Active Directory](/powershell/module/azuread/connect-azuread) commands |`Connect-AzureAD -AzureEnvironmentName AzureUSGovernment` |
-| [Azure (Classic deployment model)](/powershell/module/servicemanagement/azure.service/add-azureaccount?view=azuresmps-3.7.0) commands |`Add-AzureAccount -Environment AzureUSGovernment` |
+| [Azure (Classic deployment model)](/powershell/module/servicemanagement/azure.service/add-azureaccount) commands |`Add-AzureAccount -Environment AzureUSGovernment` |
| [Azure Active Directory (Classic deployment model)](/previous-versions/azure/jj151815(v=azure.100)) commands |`Connect-MsolService -AzureEnvironment UsGovernment` | ![Connect to Azure Government](./media/connect-with-powershell/connect-with-powershell.png)
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-linux-troubleshoot.md
We've seen that a clean re-install of the Agent will fix most issues. In fact th
>[!NOTE] >Editing configuration files for performance counters and Syslog is overwritten if the collection is configured from the [data menu Log Analytics Advanced Settings](../agents/agent-data-sources.md#configuring-data-sources) in the Azure portal for your workspace. To disable configuration for all agents, disable collection from Log Analytics **Advanced Settings** or for a single agent run the following:
-> `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'`
+> `sudo /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable && sudo rm /etc/opt/omi/conf/omsconfig/configuration/Current.mof* /etc/opt/omi/conf/omsconfig/configuration/Pending.mof*`
## Installation error codes
Perform the following steps to correct the issue.
wget https://github.com/Microsoft/OMS-Agent-for-Linux/releases/download/OMSAgent_GA_v1.4.2-124/omsagent-1.4.2-124.universal.x64.sh ```
-3. Upgrade packages by executing `sudo sh ./omsagent-*.universal.x64.sh --upgrade`.
+3. Upgrade packages by executing `sudo sh ./omsagent-*.universal.x64.sh --upgrade`.
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/om-agents.md
To insure the security of data in transit to Azure Monitor, we strongly encourag
Perform the following series of steps to configure your Operations Manager management group to connect to one of your Log Analytics workspaces.
+> [!NOTE]
+> If you notice that Log Analytics data stops coming in from a specific agent or management server, you can try resetting the Winsock catalog (run `netsh winsock reset`) and then rebooting the server. Resetting the Winsock catalog allows broken network connections to be reestablished.
++ During initial registration of your Operations Manager management group with a Log Analytics workspace, the option to specify the proxy configuration for the management group is not available in the Operations console. The management group has to be successfully registered with the service before this option is available. To work around this, you need to update the system proxy configuration by using Netsh on the system you're running the Operations console from to configure integration, and on all management servers in the management group. 1. Open an elevated command prompt.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
If you have configured your Storage Account to allow access from selected networ
### Create or update data export rule A data export rule defines the tables for which data is exported and the destination. You can create a single rule for each destination currently.
-If you need a list of tables in your workapce for export rules configuration, run this query in your workspace.
+An export rule should include only tables that exist in your workspace. Run the following query to get a list of the available tables in your workspace.
```kusto
find where TimeGenerated > ago(24h) | distinct Type
```
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-enable-overview.md
VM insights supports any operating system that supports the Log Analytics agent
> [!IMPORTANT] > The VM insights guest health feature has more limited operating system support while it's in public preview. See [Enable VM insights guest health (preview)](../vm/vminsights-health-enable.md) for a detailed list.
+### Linux considerations
See the following list of considerations on Linux support of the Dependency agent that supports VM insights: - Only default and SMP Linux kernel releases are supported.
See the following list of considerations on Linux support of the Dependency agen
- Custom kernels, including recompilations of standard kernels, aren't supported. - For Debian distros other than version 9.4, the map feature isn't supported, and the Performance feature is available only from the Azure Monitor menu. It isn't available directly from the left pane of the Azure VM. - CentOSPlus kernel is supported.-- The Linux kernel must be patched for the Spectre vulnerability. Please consult your Linux distribution vendor for more details.+
+The Linux kernel must be patched for the Spectre and Meltdown vulnerabilities. Please consult your Linux distribution vendor for more details. Run the following command to check whether Spectre/Meltdown has been mitigated:
+
+```
+$ grep . /sys/devices/system/cpu/vulnerabilities/*
+```
+
+The output of this command will look similar to the following and specifies whether a machine is vulnerable to either issue. If these files are missing, the machine is unpatched.
+
+```
+/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
+/sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable
+/sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable: Minimal generic ASM retpoline
+```
++ ## Log Analytics workspace VM insights requires a Log Analytics workspace. See [Configure Log Analytics workspace for VM insights](vminsights-configure-workspace.md) for details and requirements of this workspace. ## Agents
azure-percept Overview Percept Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-percept-security.md
Azure Percept devices use the hardware root trust to secure firmware. The boot R
### IoT Edge
-Azure Percept DK connects to Azure Percept Studio with additional security and other Azure services utilizing Transport Layer Security (TLS) protocol. Azure Percept DK is an Azure IoT Edge enabled device. IoT Edge runtime is a collection of programs that turn a device into an IoT Edge device. Collectively, the IoT Edge runtime components enable IoT Edge devices to receive code to run at the edge and communicate the results. Azure Percept DK utilizes Docker containers for isolating IoT Edge workloads from the host operating system and edge enabled applications. For more information about the Azure IoT Edge security framework, read about the [IoT Edge security manager](https://docs.microsoft.com/azure/iot-edge/iot-edge-security-manager?view=iotedge-2018-06).
+Azure Percept DK connects to Azure Percept Studio with additional security and other Azure services utilizing Transport Layer Security (TLS) protocol. Azure Percept DK is an Azure IoT Edge enabled device. IoT Edge runtime is a collection of programs that turn a device into an IoT Edge device. Collectively, the IoT Edge runtime components enable IoT Edge devices to receive code to run at the edge and communicate the results. Azure Percept DK utilizes Docker containers for isolating IoT Edge workloads from the host operating system and edge enabled applications. For more information about the Azure IoT Edge security framework, read about the [IoT Edge security manager](https://docs.microsoft.com/azure/iot-edge/iot-edge-security-manager).
### Device Update for IoT Hub
This checklist is a starting point for firewall rules:
|*.auth.azureperceptdk.azure.net| 443| Azure DK SOM Authentication and Authorization| |*.auth.projectsantacruz.azure.net| 443| Azure DK SOM Authentication and Authorization|
-Additionally, review the list of [connections used by Azure IoT Edge](https://docs.microsoft.com/azure/iot-edge/production-checklist?view=iotedge-2018-06#allow-connections-from-iot-edge-devices).
+Additionally, review the list of [connections used by Azure IoT Edge](https://docs.microsoft.com/azure/iot-edge/production-checklist#allow-connections-from-iot-edge-devices).
<! ## Additional Recommendations for Deployment to Production
azure-percept Troubleshoot Audio Accessory Speech Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-audio-accessory-speech-module.md
Title: Troubleshoot issues with Azure Percept Audio and speech modules
-description: Get troubleshooting tips for some of the more common issues found during the on-boarding experience
+ Title: Troubleshoot issues with Azure Percept Audio and the speech module
+description: Get troubleshooting tips for Azure Percept Audio and azureearspeechclientmodule
Use the guidelines below to troubleshoot voice assistant application issues.
To run these commands, [connect to the Azure Percept DK Wi-Fi access point and connect to the dev kit over SSH](./how-to-ssh-into-percept-dk.md) and enter the commands in the SSH terminal. ```console
- iotedge logs azureearspeechclientmodule
+sudo iotedge logs azureearspeechclientmodule
``` To redirect any output to a .txt file for further analysis, use the following syntax: ```console
-[command] > [file name].txt
+sudo [command] > [file name].txt
``` After redirecting output to a .txt file, copy the file to your host PC via SCP:
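A minimal sketch of that copy step (the user name, device IP, and paths are placeholders, in the same bracket style used above):

```console
scp [username]@[device IP address]:[remote path to the .txt file] [local path on the host PC]
```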
If the runtime status of **azureearspeechclientmodule** is not listed as **runni
You can use LED indicators to understand which state your device is in. It usually takes around 2 minutes for the module to fully initialize after *power on*. As it goes through the initialization steps, you will see:
-1. 1 center white LED - the device is powered on.
-2. 1 center white LED blinking - authentication is in progress.
+1. 1 center white LED - the device is powered on.
+2. 1 center white LED blinking - authentication is in progress.
3. All three LEDs will change to blue once the device is authenticated and ready to use.
-|LED| LED State| Ear SoM Status|
-|||-|
-|L02| 1x white, static on |Power on |
-|L02| 1x white, 0.5 Hz flashing| Authentication in progress |
-|L01 & L02 & L03| 3x blue, static on| Waiting for keyword|
-|L01 & L02 & L03| LED array flashing, 20fps | Listening or speaking|
-|L01 & L02 & L03| LED array racing, 20fps| Thinking|
-|L01 & L02 & L03| 3x red, static on | Mute|
+|LED|LED State|Ear SoM Status|
+|||--|
+|L02|1x white, static on|Power on |
+|L02|1x white, 0.5 Hz flashing|Authentication in progress |
+|L01 & L02 & L03|3x blue, static on|Waiting for keyword|
+|L01 & L02 & L03|LED array flashing, 20fps |Listening or speaking|
+|L01 & L02 & L03|LED array racing, 20fps|Thinking|
+|L01 & L02 & L03|3x red, static on |Mute|
## Next steps
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-dev-kit.md
Title: Troubleshoot general issues with Azure Percept DK and IoT Edge
-description: Get troubleshooting tips for some of the more common issues found during the on-boarding experience
+description: Get troubleshooting tips for some of the more common issues with Azure Percept DK
To run these commands,
To redirect any output to a .txt file for further analysis, use the following syntax: ```console
-[command] > [file name].txt
+sudo [command] > [file name].txt
``` After redirecting output to a .txt file, copy the file to your host PC via SCP:
For additional information on the Azure IoT Edge commands, see the [Azure IoT Ed
|OS |```cat /etc/os-subrelease``` |check derivative image version | |OS |```cat /etc/adu-version``` |check ADU version | |Temperature |```cat /sys/class/thermal/thermal_zone0/temp``` |check temperature of devkit |
-|Wi-Fi |```journalctl -u hostapd.service``` |check SoftAP logs|
-|Wi-Fi |```journalctl -u wpa_supplicant.service``` |check Wi-Fi services logs |
-|Wi-Fi |```journalctl -u ztpd.service``` |check Wi-Fi Zero Touch Provisioning Service logs |
-|Wi-Fi |```journalctl -u systemd-networkd``` |check Mariner Network stack logs |
-|Wi-Fi |```/data/misc/wifi/hostapd_virtual.conf``` |check wifi access point configuration details |
-|OOBE |```journalctl -u oobe -b``` |check OOBE logs |
-|Telemetry |```azure-device-health-id``` |find unique telemetry HW_ID |
+|Wi-Fi |```sudo journalctl -u hostapd.service``` |check SoftAP logs|
+|Wi-Fi |```sudo journalctl -u wpa_supplicant.service``` |check Wi-Fi services logs |
+|Wi-Fi |```sudo journalctl -u ztpd.service``` |check Wi-Fi Zero Touch Provisioning Service logs |
+|Wi-Fi |```sudo journalctl -u systemd-networkd``` |check Mariner Network stack logs |
+|Wi-Fi |```sudo cat /etc/hostapd/hostapd-wlan1.conf``` |check wifi access point configuration details |
+|OOBE |```sudo journalctl -u oobe -b``` |check OOBE logs |
+|Telemetry |```sudo azure-device-health-id``` |find unique telemetry HW_ID |
|Azure IoT Edge |```sudo iotedge check``` |run configuration and connectivity checks for common issues | |Azure IoT Edge |```sudo iotedge logs [container name]``` |check container logs, such as speech and vision modules | |Azure IoT Edge |```sudo iotedge support-bundle --since 1h``` |collect module logs, Azure IoT Edge security manager logs, container engine logs, ```iotedge check``` JSON output, and other useful debug information from the past hour |
For additional information on the Azure IoT Edge commands, see the [Azure IoT Ed
|Azure IoT Edge |```sudo systemctl restart iotedge``` |restart the Azure IoT Edge Security Daemon | |Azure IoT Edge |```sudo iotedge list``` |list the deployed Azure IoT Edge modules | |Other |```df [option] [file]``` |display information on available/total space in specified file system(s) |
-|Other |```ip route get 1.1.1.1``` |display device IP and interface information |
-|Other |```ip route get 1.1.1.1 \| awk '{print $7}'``` <br> ```ifconfig [interface]``` |display device IP address only |
+|Other |`ip route get 1.1.1.1` |display device IP and interface information |
+|Other |<code>ip route get 1.1.1.1 &#124; awk '{print $7}'</code> <br> `ifconfig [interface]` |display device IP address only |
The ```journalctl``` Wi-Fi commands can be combined into the following single command: ```console
-journalctl -u hostapd.service -u wpa_supplicant.service -u ztpd.service -u systemd-networkd -b
+sudo journalctl -u hostapd.service -u wpa_supplicant.service -u ztpd.service -u systemd-networkd -b
``` ## Docker troubleshooting commands |Command: |Function: | |--||
-|```docker ps``` |[shows which containers are running](https://docs.docker.com/engine/reference/commandline/ps/) |
-|```docker images``` |[shows which images are on the device](https://docs.docker.com/engine/reference/commandline/images/)|
-|```docker rmi [image id] -f``` |[deletes an image from the device](https://docs.docker.com/engine/reference/commandline/rmi/) |
-|```docker logs -f edgeAgent``` <br> ```docker logs -f [module_name]``` |[takes container logs of specified module](https://docs.docker.com/engine/reference/commandline/logs/) |
-|```docker image prune``` |[removes all dangling images](https://docs.docker.com/engine/reference/commandline/image_prune/) |
-|```watch docker ps``` <br> ```watch ifconfig [interface]``` |check docker container download status |
+|```sudo docker ps``` |[shows which containers are running](https://docs.docker.com/engine/reference/commandline/ps/) |
+|```sudo docker images``` |[shows which images are on the device](https://docs.docker.com/engine/reference/commandline/images/)|
+|```sudo docker rmi [image id] -f``` |[deletes an image from the device](https://docs.docker.com/engine/reference/commandline/rmi/) |
+|```sudo docker logs -f edgeAgent``` <br> ```sudo docker logs -f [module_name]``` |[takes container logs of specified module](https://docs.docker.com/engine/reference/commandline/logs/) |
+|```sudo docker image prune``` |[removes all dangling images](https://docs.docker.com/engine/reference/commandline/image_prune/) |
+|```sudo watch docker ps``` <br> ```watch ifconfig [interface]``` |check docker container download status |
## USB Updating
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
Title: Lock resources to prevent changes description: Prevent users from updating or deleting Azure resources by applying a lock for all users and roles. Previously updated : 02/01/2021 Last updated : 03/09/2021
Applying locks can lead to unexpected results because some operations that don't
* A read-only lock on a **storage account** prevents users from listing the account keys. The Azure Storage [List Keys](/rest/api/storagerp/storageaccounts/listkeys) operation is handled through a POST request to protect access to the account keys, which provide complete access to data in the storage account. When a read-only lock is configured for a storage account, users who do not possess the account keys must use Azure AD credentials to access blob or queue data. A read-only lock also prevents the assignment of Azure RBAC roles that are scoped to the storage account or to a data container (blob container or queue).
+* A cannot-delete lock on a **storage account** does not prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted, and does not protect blob, queue, table, or file data within that storage account.
+
+* A read-only lock on a **storage account** does not prevent data within that account from being deleted or modified. This type of lock only protects the storage account itself from being deleted or modified, and does not protect blob, queue, table, or file data within that storage account.
+ * A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access. * A read-only lock on a **resource group** that contains a **virtual machine** prevents all users from starting or restarting the virtual machine. These operations require a POST request.
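To make the lock behavior concrete, here is a hedged Azure CLI sketch of applying a read-only lock to a storage account (all names are placeholders, not values from this article):

```azurecli
az lock create --name LockStorage --lock-type ReadOnly \
    --resource-group example-group \
    --resource-name examplestorage \
    --resource-type Microsoft.Storage/storageAccounts
```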
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-resource-manager Bicep Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-install.md
+
+ Title: Set up Bicep development and deployment environments
+description: How to configure Bicep development and deployment environments
+ Last updated : 03/09/2021++
+# Set up Bicep development and deployment environments
+
+Learn how to set up Bicep development and deployment environments.
+
+## Development environment
+
+To get the best Bicep authoring experience, you need two components:
+
+- **Bicep extension for Visual Studio Code**. To create Bicep files, you need a good Bicep editor. We recommend [Visual Studio Code](https://code.visualstudio.com/) with the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). These tools provide language support and resource autocompletion. They help create and validate Bicep files. For more information, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).
+- **Bicep CLI**. Use Bicep CLI to compile Bicep files to ARM JSON templates, and decompile ARM JSON templates to Bicep files. For more information, see [Install Bicep CLI](#install-bicep-cli).
+
+## Deployment environment
+
+You can deploy Bicep files by using Azure CLI or Azure PowerShell. For Azure CLI, you need version 2.20.0 or later; for Azure PowerShell, you need version 5.6.0 or later. For the installation instructions, see:
+
+- [Install Azure PowerShell](/powershell/azure/install-az-ps)
+- [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows)
+- [Install Azure CLI on Linux](/cli/azure/install-azure-cli-linux)
+- [Install Azure CLI on macOS](/cli/azure/install-azure-cli-macos)
+
+> [!NOTE]
+> Currently, both Azure CLI and Azure PowerShell can only deploy local Bicep files. For more information about deploying Bicep files by using Azure CLI, see [Deploy - CLI](deploy-cli.md#deploy-remote-template). For more information about deploying Bicep files by using Azure PowerShell, see [Deploy - PowerShell](deploy-powershell.md#deploy-remote-template).
+
+After the supported version of Azure PowerShell or Azure CLI is installed, you can deploy a Bicep file with:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroupDeployment `
+ -Name ExampleDeployment `
+ -ResourceGroupName ExampleGroup `
+ -TemplateFile <path-to-template-or-bicep> `
+ -storageAccountType Standard_GRS
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az deployment group create \
+ --name ExampleDeployment \
+ --resource-group ExampleGroup \
+ --template-file <path-to-template-or-bicep> \
+ --parameters storageAccountType=Standard_GRS
+```
+++
+## Install Bicep CLI
+
+You can install the Bicep CLI by using Azure CLI, by using Azure PowerShell, or manually. (Azure PowerShell currently relies on a manual install; see below.)
+
+### Use Azure CLI
+
+With Az CLI version 2.20.0 or later installed, the Bicep CLI is automatically installed when a command that depends on it is executed. For example, `az deployment ... -f *.bicep` or `az bicep ...`.
+
+You can also manually install the CLI using the built-in commands:
+
+```bash
+az bicep install
+```
+
+To upgrade to the latest version:
+
+```bash
+az bicep upgrade
+```
+
+To install a specific version:
+
+```bash
+az bicep install --version v0.2.212
+```
+
+> [!NOTE]
+> Az CLI installs a separate version of the Bicep CLI that is not in conflict with any other Bicep installs you may have, and Az CLI does not add Bicep to your PATH.
+
+To show the installed versions:
+
+```bash
+az bicep version
+```
+
+To list all available versions of Bicep CLI:
+
+```bash
+az bicep list-versions
+```
+
+### Use Azure PowerShell
+
+Azure PowerShell does not yet have the capability to install the Bicep CLI. Azure PowerShell (v5.6.0 or later) expects that the Bicep CLI is already installed and available on the PATH. Follow one of the [manual install methods](#install-manually). Once installed, the Bicep CLI is invoked automatically whenever a deployment cmdlet requires it. For example, `New-AzResourceGroupDeployment ... -TemplateFile main.bicep`.
+
+### Install manually
+
+The following methods install the Bicep CLI and add it to your PATH.
+
+#### Linux
+
+```sh
+# Fetch the latest Bicep CLI binary
+curl -Lo bicep https://github.com/Azure/bicep/releases/latest/download/bicep-linux-x64
+# Mark it as executable
+chmod +x ./bicep
+# Add bicep to your PATH (requires admin)
+sudo mv ./bicep /usr/local/bin/bicep
+# Verify you can now access the 'bicep' command
+bicep --help
+# Done!
+
+```
+
+#### macOS
+
+##### via homebrew
+
+```sh
+# Add the tap for bicep
+brew tap azure/bicep https://github.com/azure/bicep
+
+# Install the tool
+brew install azure/bicep/bicep
+```
+
+##### macOS manual install
+
+```sh
+# Fetch the latest Bicep CLI binary
+curl -Lo bicep https://github.com/Azure/bicep/releases/latest/download/bicep-osx-x64
+# Mark it as executable
+chmod +x ./bicep
+# Add Gatekeeper exception (requires admin)
+sudo spctl --add ./bicep
+# Add bicep to your PATH (requires admin)
+sudo mv ./bicep /usr/local/bin/bicep
+# Verify you can now access the 'bicep' command
+bicep --help
+# Done!
+
+```
+
+#### Windows
+
+##### Windows Installer
+
+Download and run the [latest Windows installer](https://github.com/Azure/bicep/releases/latest/download/bicep-setup-win-x64.exe). The installer does not require administrative privileges. After the installation, Bicep CLI is added to your user PATH. Close and reopen any opened command shell windows for the PATH change to take effect.
+
+##### Chocolatey
+
+```powershell
+choco install bicep
+```
+
+##### Winget
+
+```powershell
+winget install -e --id Microsoft.Bicep
+```
+
+##### Manual with PowerShell
+
+```powershell
+# Create the install folder
+$installPath = "$env:USERPROFILE\.bicep"
+$installDir = New-Item -ItemType Directory -Path $installPath -Force
+$installDir.Attributes += 'Hidden'
+# Fetch the latest Bicep CLI binary
+(New-Object Net.WebClient).DownloadFile("https://github.com/Azure/bicep/releases/latest/download/bicep-win-x64.exe", "$installPath\bicep.exe")
+# Add bicep to your PATH
+$currentPath = (Get-Item -path "HKCU:\Environment" ).GetValue('Path', '', 'DoNotExpandEnvironmentNames')
+if (-not $currentPath.Contains("%USERPROFILE%\.bicep")) { setx PATH ($currentPath + ";%USERPROFILE%\.bicep") }
+if (-not $env:path.Contains($installPath)) { $env:path += ";$installPath" }
+# Verify you can now access the 'bicep' command.
+bicep --help
+# Done!
+```
+
+## Install the nightly builds
+
+If you'd like to try the latest pre-release bits of Bicep before they are released, see [Install nightly builds](https://github.com/Azure/bicep/blob/main/docs/installing-nightly.md).
+
+> [!WARNING]
+> These pre-release builds are much more likely to have known or unknown bugs.
+
+## Next steps
+
+Get started with the [Bicep quickstart](./quickstart-create-bicep-use-visual-studio-code.md).
azure-resource-manager Bicep Tutorial Add Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-add-functions.md
Title: Tutorial - add functions to Azure Resource Manager Bicep files description: Add functions to your Bicep files to construct values. Previously updated : 03/02/2021 Last updated : 03/10/2021
The location of the storage account is hard-coded to **East US**. However, you m
Functions add flexibility to your Bicep file by dynamically getting values during deployment. In this tutorial, you use a function to get the location of the resource group you're using for deployment.
-The following example highlights the changes to add a parameter called `location`. The parameter default value calls the [resourceGroup](template-functions-resource.md#resourcegroup) function. This function returns an object with information about the resource group being used for deployment. One of the properties on the object is a location property. When you use the default value, the storage account location has the same location as the resource group. The resources inside a resource group don't have to share the same location. You can also provide a different location when needed.
+The following example shows the changes to add a parameter called `location`. The parameter default value calls the [resourceGroup](template-functions-resource.md#resourcegroup) function. This function returns an object with information about the resource group being used for deployment. One of the properties on the object is a location property. When you use the default value, the storage account location has the same location as the resource group. The resources inside a resource group don't have to share the same location. You can also provide a different location when needed.
Copy the whole file and replace your Bicep file with its contents.
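As a minimal sketch of that change (the parameter name is the one described above; everything else is illustrative):

```bicep
param location string = resourceGroup().location
```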
azure-resource-manager Bicep Tutorial Add Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-add-modules.md
Title: Tutorial - add modules to Azure Resource Manager Bicep file description: Use modules to encapsulate complex details of the raw resource declaration. Previously updated : 03/01/2021 Last updated : 03/10/2021
azure-resource-manager Bicep Tutorial Add Outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-add-outputs.md
Title: Tutorial - add outputs to Azure Resource Manager Bicep file description: Add outputs to your Bicep file to simplify the syntax. Previously updated : 03/01/2021 Last updated : 03/10/2021
It deploys a storage account, but it doesn't return any information about the st
You can use outputs to return values from the deployment. For example, it might be helpful to get the endpoints for your new storage account.
-The following example highlights the change to your Bicep file to add an output value. Copy the whole file and replace your Bicep file with its contents.
+The following example shows the change to your Bicep file to add an output value. Copy the whole file and replace your Bicep file with its contents.
:::code language="bicep" source="~/resourcemanager-templates/get-started-with-templates/add-outputs/azuredeploy.bicep" range="1-33" highlight="33":::
There are some important items to note about the output value you added.
The type of returned value is set to `object`, which means it returns a template object.
-To get the `primaryEndpoints` property from the storage account, you use the storage account symbolic name.
+To get the `primaryEndpoints` property from the storage account, you use the storage account symbolic name. The autocomplete feature of Visual Studio Code presents a full list of the properties:
+
+ ![Visual Studio Code Bicep symbolic name object properties](./media/bicep-tutorial-add-outputs/visual-studio-code-bicep-output-properties.png)
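A minimal sketch of such an output, assuming the storage account's symbolic name is `stg` as elsewhere in this tutorial series:

```bicep
output storageEndpoint object = stg.properties.primaryEndpoints
```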
## Deploy Bicep file
azure-resource-manager Bicep Tutorial Add Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-add-parameters.md
Title: Tutorial - add parameters to Azure Resource Manager Bicep file description: Add parameters to your Bicep file to make it reusable. Previously updated : 03/01/2021 Last updated : 03/10/2021
You may have noticed that there's a problem with this Bicep file. The storage ac
## Make Bicep file reusable
-To make your Bicep file reusable, let's add a parameter that you can use to pass in a storage account name. The highlighted Bicep in the following example shows what changed in your file. The `storageName` parameter is identified as a string. The maximum length is set to 24 characters to prevent any names that are too long.
+To make your Bicep file reusable, let's add a parameter that you can use to pass in a storage account name. The following Bicep file shows what changed in your file. The `storageName` parameter is identified as a string. The maximum length is set to 24 characters to prevent any names that are too long.
Copy the whole file and replace it with the following contents.
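A minimal sketch of the parameter described above, assuming current Bicep decorator syntax:

```bicep
@maxLength(24)
param storageName string
```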
azure-resource-manager Bicep Tutorial Add Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-add-tags.md
Title: Tutorial - add tags to resources in Azure Resource Manager Bicep file description: Add tags to resources that you deploy in your Bicep files. Tags let you logically organize resources. Previously updated : 03/01/2021 Last updated : 03/10/2021
After deploying these resources, you might need to track costs and find resource
You tag resources to add values that help you identify their use. For example, you can add tags that list the environment and the project. You could add tags that identify a cost center or the team that owns the resource. Add any values that make sense for your organization.
-The following example highlights the changes to the Bicep file. Copy the whole file and replace your Bicep file with its contents.
+The following example shows the changes to the Bicep file. Copy the whole file and replace your Bicep file with its contents.
:::code language="bicep" source="~/resourcemanager-templates/get-started-with-templates/add-tags/azuredeploy.bicep" range="1-81" highlight="27-30,38,51,71":::
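As a hedged sketch, a tags parameter along the lines described above (the tag names and values are illustrative), which each resource can then apply with `tags: resourceTags`:

```bicep
param resourceTags object = {
  Environment: 'Dev'
  Project: 'Tutorial'
}
```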
azure-resource-manager Bicep Tutorial Add Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-add-variables.md
Title: Tutorial - add variable to Azure Resource Manager Bicep file description: Add variables to your Bicep file to simplify the syntax. Previously updated : 03/01/2021 Last updated : 03/10/2021
The parameter for the storage account name is hard to use because you have to pr
## Use variable
-The following example highlights the changes to add a variable to your Bicep file that creates a unique storage account name. Copy the whole file and replace your Bicep file with its contents.
+The following example shows the changes to add a variable to your Bicep file that creates a unique storage account name. Copy the whole file and replace your Bicep file with its contents.
:::code language="bicep" source="~/resourcemanager-templates/get-started-with-templates/add-variable/azuredeploy.bicep" range="1-31" highlight="1-3,19,22":::
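A minimal sketch of that pattern (the prefix parameter name is illustrative): combining a short prefix with `uniqueString` seeded by the resource group ID yields a name that is stable per resource group yet unlikely to collide globally:

```bicep
param storagePrefix string
var uniqueStorageName = '${storagePrefix}${uniqueString(resourceGroup().id)}'
```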
azure-resource-manager Bicep Tutorial Create First Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-create-first-bicep.md
Title: Tutorial - Create & deploy Azure Resource Manager Bicep files description: Create your first Bicep file for deploying Azure resources. In the tutorial, you learn about the Bicep file syntax and how to deploy a storage account. Previously updated : 03/03/2021 Last updated : 03/10/2021
Okay, you're ready to start learning about Bicep.
The resource declaration has four components: - **resource**: keyword.
- - **symbolic name** (stg): A symbolic name is an identifier for referencing the resource throughout your bicep file. It is not what the name of the resource will be when it's deployed. The name of the resource is defined by the **name** property. See the fourth component in this list. To make the tutorials easy to follow, **stg** is used as the symbolic name for the storage account resource in this tutorial series.
+ - **symbolic name** (stg): A symbolic name is an identifier for referencing the resource throughout your bicep file. It is not what the name of the resource will be when it's deployed. The name of the resource is defined by the **name** property. See the fourth component in this list. To make the tutorials easy to follow, **stg** is used as the symbolic name for the storage account resource in this tutorial series. To see how to use the symbolic name to get a full list of the object properties, see [Add outputs](./bicep-tutorial-add-outputs.md).
- **resource type** (Microsoft.Storage/storageAccounts@2019-06-01): It is composed of the resource provider (Microsoft.Storage), resource type (storageAccounts), and apiVersion (2019-06-01). Each resource provider publishes its own API versions, so this value is specific to the type. You can find more types and apiVersions for various Azure resources from [ARM template reference](/azure/templates/). - **properties** (everything inside = {...}): These are the specific properties you would like to specify for the given resource type. These are exactly the same properties available to you in an ARM Template. Every resource has a `name` property. Most resources also have a `location` property, which sets the region where the resource is deployed. The other properties vary by resource type and API version. It's important to understand the connection between the API version and the available properties, so let's jump into more detail.
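To make the four components concrete, a minimal sketch of a declaration in this shape (the name, location, and SKU are placeholders):

```bicep
resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'examplestorage' // the resource name, distinct from the symbolic name 'stg'
  location: 'eastus'
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```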
azure-resource-manager Bicep Tutorial Export Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-export-template.md
Title: Tutorial - Export JSON template from the Azure portal for Bicep development description: Learn how to use an exported JSON template to complete your Bicep development. Previously updated : 03/01/2021 Last updated : 03/10/2021
Currently, the Azure portal only supports exporting JSON templates. There are to
The decompiled exported template gives you most of the Bicep you need, but you need to customize it for your Bicep file. Pay particular attention to differences in parameters and variables between your Bicep file and the exported Bicep file. Obviously, the export process doesn't know the parameters and variables that you've already defined in your Bicep file.
-The following example highlights the additions to your Bicep file. It contains the exported code plus some changes. First, it changes the name of the parameter to match your naming convention. Second, it uses your location parameter for the location of the app service plan. Third, it removes some of the properties where the default value is fine.
+The following example shows the additions to your Bicep file. It contains the exported code plus some changes. First, it changes the name of the parameter to match your naming convention. Second, it uses your location parameter for the location of the app service plan. Third, it removes some of the properties where the default value is fine.
Copy the whole file and replace your Bicep file with its contents.
azure-resource-manager Bicep Tutorial Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-quickstart-template.md
Title: Tutorial - Use quickstart templates for Azure Resource Manager Bicep development description: Learn how to use Azure Quickstart templates to complete your Bicep development. Previously updated : 03/01/2021 Last updated : 03/10/2021
azure-resource-manager Bicep Tutorial Use Parameter File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-tutorial-use-parameter-file.md
Title: Tutorial - use parameter file to deploy Azure Resource Manager Bicep file description: Use parameter files that contain the values to use for deploying your Bicep file. Previously updated : 03/01/2021 Last updated : 03/10/2021
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-sql Authentication Aad Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-service-principal.md
Supporting this functionality is useful in Azure AD application automation proce
To enable an Azure AD object creation in SQL Database and Azure Synapse on behalf of an Azure AD application, the following settings are required:
-1. Assign the server identity. The assigned server identity represents the Managed System Identity (MSI). Currently, the server identity for Azure SQL does not support User Managed Identity (UMI).
+1. Assign the server identity. The assigned server identity represents the Managed Service Identity (MSI). Currently, the server identity for Azure SQL does not support User Managed Identity (UMI).
- For a new Azure SQL logical server, execute the following PowerShell command: ```powershell
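    # Hedged sketch only; resource names are placeholders. -AssignIdentity assigns the server identity (MSI).
    New-AzSqlServer -ResourceGroupName "<resource-group>" -ServerName "<server-name>" -Location "<location>" -SqlAdministratorCredentials (Get-Credential) -AssignIdentity
    ```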
To enable an Azure AD object creation in SQL Database and Azure Synapse on behal
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Create Azure AD users using Azure AD applications](authentication-aad-service-principal-tutorial.md)
+> [Tutorial: Create Azure AD users using Azure AD applications](authentication-aad-service-principal-tutorial.md)
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
Previously updated : 11/18/2020 Last updated : 03/10/2021 # Automated backups - Azure SQL Database & SQL Managed Instance
For both SQL Database and SQL Managed Instance, you can configure full backup lo
For more information about LTR, see [Long-term backup retention](long-term-retention-overview.md).
-## Storage costs
+## Backup storage costs
The price for backup storage varies and depends on your purchasing model (DTU or vCore), chosen backup storage redundancy option, and also on your region. Backup storage is charged per GB/month consumed. For pricing, see the [Azure SQL Database pricing](https://azure.microsoft.com/pricing/details/sql-database/single/) page and the [Azure SQL Managed Instance pricing](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/) page.
+> [!NOTE]
+> The Azure invoice will show only the excess backup storage consumed, not the entire backup storage consumption. For example, in a hypothetical scenario, if you have provisioned 4 TB of data storage, you get 4 TB of free backup storage space. If you have used a total of 5.8 TB of backup storage space, the Azure invoice will show only 1.8 TB, because only the excess backup storage used is charged.
+ ### DTU model In the DTU model, there's no additional charge for backup storage for databases and elastic pools. The price of backup storage is a part of database or pool price.
A full list of built-in policy definitions for SQL Database and Managed Instance
To enforce data residency requirements at an organizational level, these policies can be assigned to a subscription. After these are assigned at a subscription level, users in the given subscription will not be able to create a database or a managed instance with geo-redundant backup storage via Azure portal or Azure PowerShell. > [!IMPORTANT]
-> Azure policies are not enforced when creating a database via T-SQL. To enforce data residency when creating a database using T-SQL, [use 'LOCAL' or 'ZONE' as input to BACKUP_STORAGE_REDUNDANCY paramater in CREATE DATABASE statement](/sql/t-sql/statements/create-database-transact-sql?view=azuresqldb-current#create-database-using-zone-redundancy-for-backups).
+> Azure policies are not enforced when creating a database via T-SQL. To enforce data residency when creating a database using T-SQL, [use 'LOCAL' or 'ZONE' as input to the BACKUP_STORAGE_REDUNDANCY parameter in the CREATE DATABASE statement](/sql/t-sql/statements/create-database-transact-sql#create-database-using-zone-redundancy-for-backups).
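As a sketch of that T-SQL option (the database name is a placeholder):

```sql
CREATE DATABASE mydb WITH BACKUP_STORAGE_REDUNDANCY = 'LOCAL';
```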
Learn how to assign policies using the [Azure portal](../../governance/policy/assign-policy-portal.md) or [Azure PowerShell](../../governance/policy/assign-policy-powershell.md)
Learn how to assign policies using the [Azure portal](../../governance/policy/as
- Get more information about how to [restore a database to a point in time by using PowerShell](scripts/restore-database-powershell.md). - For information about how to configure, manage, and restore from long-term retention of automated backups in Azure Blob storage by using the Azure portal, see [Manage long-term backup retention by using the Azure portal](long-term-backup-retention-configure.md). - For information about how to configure, manage, and restore from long-term retention of automated backups in Azure Blob storage by using PowerShell, see [Manage long-term backup retention by using PowerShell](long-term-backup-retention-configure.md).
+- To learn all about backup storage consumption on Azure SQL Managed Instance, see [Backup storage consumption on Managed Instance explained](https://aka.ms/mi-backup-explained).
- To learn how to fine-tune backup storage retention and costs for Azure SQL Managed Instance, see [Fine tuning backup storage costs on Managed Instance](https://aka.ms/mi-backup-tuning).
azure-sql Database Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-export.md
$exportStatus
## Cancel the export request Use the [Database Operations - Cancel API](https://docs.microsoft.com/rest/api/sql/databaseoperations/cancel)
-or the Powershell [Stop-AzSqlDatabaseActivity command](https://docs.microsoft.com/powershell/module/az.sql/Stop-AzSqlDatabaseActivity?view=azps-5.5.0), here an example of powershell command.
+or the PowerShell [Stop-AzSqlDatabaseActivity cmdlet](https://docs.microsoft.com/powershell/module/az.sql/Stop-AzSqlDatabaseActivity); here's an example of the PowerShell command.
```cmd Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import.md
az sql db import --resource-group "<resourceGroup>" --server "<server>" --name "
## Cancel the import request Use the [Database Operations - Cancel API](https://docs.microsoft.com/rest/api/sql/databaseoperations/cancel)
-or the Powershell [Stop-AzSqlDatabaseActivity command](https://docs.microsoft.com/powershell/module/az.sql/Stop-AzSqlDatabaseActivity?view=azps-5.5.0), here an example of powershell command.
+or the PowerShell [Stop-AzSqlDatabaseActivity cmdlet](https://docs.microsoft.com/powershell/module/az.sql/Stop-AzSqlDatabaseActivity); here's an example of the PowerShell command.
```cmd Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes.md
ms.devlang: Previously updated : 06/17/2020 Last updated : 03/10/2021 # What's new in Azure SQL Database & SQL Managed Instance?
This table provides a quick comparison for the change in terminology:
| Feature | Details | | | |
-| Accelerated database recovery with single databases and elastic pools | For information, see [Accelerated Database Recovery](../accelerated-database-recovery.md).|
-| Data discovery & classification |For information, see [Azure SQL Database and Azure Synapse Analytics data discovery & classification](data-discovery-and-classification-overview.md).|
| Elastic database jobs (preview) | For information, see [Create, configure, and manage elastic jobs](elastic-jobs-overview.md). | | Elastic queries | For information, see [Elastic query overview](elastic-query-overview.md). | | Elastic transactions | [Distributed transactions across cloud databases](elastic-transactions-overview.md). | | Query editor in the Azure portal |For information, see [Use the Azure portal's SQL query editor to connect and query data](connect-query-portal.md).|
-| R services / machine learning with single databases and elastic pools |For information, see [Machine Learning Services in Azure SQL Database](/sql/advanced-analytics/what-s-new-in-sql-server-machine-learning-services?view=sql-server-2017#machine-learning-services-in-azure-sql-database).|
|SQL Analytics|For information, see [Azure SQL Analytics](../../azure-monitor/insights/azure-sql.md).| | &nbsp; |
This table provides a quick comparison for the change in terminology:
| | | | <a href="/azure/azure-sql/database/elastic-transactions-overview">Distributed transactions</a> | Distributed transactions across Managed Instances. | | <a href="/azure/sql-database/sql-database-instance-pools">Instance pools</a> | A convenient and cost-efficient way to migrate smaller SQL instances to the cloud. |
-| <a href="/en-gb/sql/t-sql/statements/create-login-transact-sql">Instance-level Azure AD server principals (logins)</a> | Create instance-level logins using a <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current">CREATE LOGIN FROM EXTERNAL PROVIDER</a> statement. |
+| <a href="/en-gb/sql/t-sql/statements/create-login-transact-sql">Instance-level Azure AD server principals (logins)</a> | Create instance-level logins using a <a href="/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-mi-current&preserve-view=true">CREATE LOGIN FROM EXTERNAL PROVIDER</a> statement. |
| [Transactional Replication](../managed-instance/replication-transactional-overview.md) | Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](../managed-instance/replication-between-two-instances-configure-tutorial.md). | | Threat detection |For information, see [Configure threat detection in Azure SQL Managed Instance](../managed-instance/threat-detection-configure.md).| | Long-term backup retention | For information, see [Configure long-term back up retention in Azure SQL Managed Instance](../managed-instance/long-term-backup-retention-configure.md), which is currently in limited public preview. |
The following features are enabled in the SQL Managed Instance deployment model
|[Procedure sp_send_dbmail may transiently fail when @query parameter is used](#procedure-sp_send_dbmail-may-transiently-fail-when--parameter-is-used)|Jan 2021|Has Workaround|| |[Distributed transactions can be executed after removing Managed Instance from Server Trust Group](#distributed-transactions-can-be-executed-after-removing-managed-instance-from-server-trust-group)|Oct 2020|Has Workaround|| |[Distributed transactions cannot be executed after Managed Instance scaling operation](#distributed-transactions-cannot-be-executed-after-managed-instance-scaling-operation)|Oct 2020|Has Workaround||
-|[BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql)/[OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql?view=sql-server-ver15) in Azure SQL and `BACKUP`/`RESTORE` statement in Managed Instance cannot use Azure AD Manage Identity to authenticate to Azure storage|Sep 2020|Has Workaround||
+|[BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql)/[OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql) in Azure SQL and `BACKUP`/`RESTORE` statement in Managed Instance cannot use Azure AD Manage Identity to authenticate to Azure storage|Sep 2020|Has Workaround||
|[Service Principal cannot access Azure AD and AKV](#service-principal-cannot-access-azure-ad-and-akv)|Aug 2020|Has Workaround|| |[Restoring manual backup without CHECKSUM might fail](#restoring-manual-backup-without-checksum-might-fail)|May 2020|Resolved|June 2020| |[Agent becomes unresponsive upon modifying, disabling, or enabling existing jobs](#agent-becomes-unresponsive-upon-modifying-disabling-or-enabling-existing-jobs)|May 2020|Resolved|June 2020|
GO
BULK INSERT Sales.Invoices FROM 'inv-2017-12-08.csv' WITH (DATA_SOURCE = 'MyAzureBlobStorage'); ```
-**Workaround**: Use [Shared Access Signature to authenticate to storage](/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver15#f-importing-data-from-a-file-in-azure-blob-storage).
+**Workaround**: Use [Shared Access Signature to authenticate to storage](/sql/t-sql/statements/bulk-insert-transact-sql#f-importing-data-from-a-file-in-azure-blob-storage).
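A hedged sketch of that workaround (the storage account, container, and SAS token are placeholders; the data source name matches the `MyAzureBlobStorage` example above):

```sql
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS-token-without-the-leading-question-mark>';

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://<storage-account>.blob.core.windows.net/<container>',
      CREDENTIAL = MyAzureBlobStorageCredential);
```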
### Service Principal cannot access Azure AD and AKV
azure-sql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-sql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
azure-sql Pricing Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/pricing-guidance.md
To create an Azure VM running SQL Server 2017 with one of these pay-as-you-go im
## <a id="byol"></a> Bring your own license (BYOL)
-**Bringing your own SQL Server license through License Mobility**, also referred to as **BYOL**, means using an existing SQL Server Volume License with Software Assurance in an Azure VM. A SQL Server VM using BYOL only charges for the cost of running the VM, not for SQL Server licensing, given that you have already acquired licenses and Software Assurance through a Volume Licensing program.
-
-> [!IMPORTANT]
-> BYOL images require an Enterprise Agreement with Software Assurance. They are not available as a part of the Azure Cloud Solution Partner (CSP) at this time. CSP customers can bring their own license by deploying a pay-as-you-go image and then enabling the [Azure Hybrid Benefit](licensing-model-azure-hybrid-benefit-ahb-change.md).
+**Bringing your own SQL Server license through License Mobility**, also referred to as **BYOL**, means using an existing SQL Server Volume License with Software Assurance in an Azure VM. A SQL Server VM using BYOL only charges for the cost of running the VM, not for SQL Server licensing, given that you have already acquired licenses and Software Assurance through a Volume Licensing program or through a Cloud Solution Partner (CSP).
> [!NOTE] > The BYOL images are currently only available for Windows virtual machines. However, you can manually install SQL Server on a Linux-only VM. See the guidelines in the [SQL Server on a Linux VM FAQ](../linux/frequently-asked-questions-faq.md).
For general Azure pricing guidance, see [Prevent unexpected costs with Azure bil
For an overview of SQL Server on Azure Virtual Machines, see the following articles: - [Overview of SQL Server on Windows VMs](sql-server-on-azure-vm-iaas-what-is-overview.md)-- [Overview of SQL Server on Linux VMs](../linux/sql-server-on-linux-vm-what-is-iaas-overview.md)
+- [Overview of SQL Server on Linux VMs](../linux/sql-server-on-linux-vm-what-is-iaas-overview.md)
azure-vmware Backup Azure Vmware Solution Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/backup-azure-vmware-solution-virtual-machines.md
VMware 6.7 onwards has TLS enabled as the communication protocol.
1. Copy the following registry settings, and paste them into Notepad. Then save the file as TLS.REG without the .txt extension.
- ```text
+ ```
Windows Registry Editor Version 5.00
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/archive-tier-support.md
Supported clients:
## Get started with PowerShell
-1. Download the [latest PowerShell module](https://github.com/Azure/azure-powershell/tree/Az.RecoveryServices-preview) (preview).
+1. Run the following command in PowerShell:
+
+ ```azurepowershell
    Install-Module -Name Az.RecoveryServices -Repository PSGallery -RequiredVersion 4.0.0-preview -AllowPrerelease -Force
+ ```
+ 1. Connect to Azure using the [Connect-AzAccount](https://docs.microsoft.com/powershell/module/az.accounts/connect-azaccount) cmdlet. 1. Sign into your subscription:
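A minimal sketch of those first steps (the subscription ID is a placeholder):

```azurepowershell
Connect-AzAccount
Set-AzContext -Subscription "<subscription-id>"
```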
backup Backup Instant Restore Capability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-instant-restore-capability.md
The new model doesn't allow deleting the restore point (Tier2) unless the snapsh
### Why does my snapshot still exist, even after the set retention period in backup policy?
-If the recovery point has a snapshot and it's the latest recovery point available, it's retained until the next successful backup. This is according to the designated "garbage collection" (GC) policy. It mandates that at least one latest recovery point always be present, in case all subsequent backups fail due to an issue in the VM. In normal scenarios, recovery points are cleaned up at most 24 hours after they expire.
+If the recovery point has a snapshot and it's the latest recovery point available, it's retained until the next successful backup. This is according to the designated "garbage collection" (GC) policy. It mandates that at least one latest recovery point always be present, in case all subsequent backups fail due to an issue in the VM. In normal scenarios, recovery points are cleaned up at most 24 hours after they expire. In rare scenarios, there might be one or two additional snapshots if the garbage collector (GC) is under heavy load.
+
+### Why do I see more snapshots than my retention policy?
+
+In a scenario where the retention policy is set to "1", you can find two snapshots: at least one latest recovery point must always be present, in case all subsequent backups fail due to an issue in the VM, so two snapshots can be present at once.<br></br>So, if the policy is for "n" snapshots, you can find "n+1" snapshots at times. Further, you can even find "n+1+2" snapshots if there is a delay in garbage collection. This can happen at rare times when:
+- You clean up snapshots that are past retention.
+- The garbage collector (GC) in the backend is under heavy load.
### I don't need Instant Restore functionality. Can it be disabled?
Instant restore feature is enabled for everyone and can't be disabled. You can r
### Is it safe to restart the VM during the transfer process (which can take many hours)? Will restarting the VM interrupt or slow down the transfer?
-Yes its safe, and there is absolutely no impact in data transfer speed.
+Yes, it's safe, and there is no impact on data transfer speed.
backup Backup Mabs Protection Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-protection-matrix.md
The following sections detail the protection support matrix for MABS:
| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** |
| --- | --- | --- | --- | --- |
| Client computers (64-bit) | Windows 10 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Volume, share, folder, files, deduped volumes <br><br> Protected volumes must be NTFS. FAT and FAT32 aren't supported. <br><br> Volumes must be at least 1 GB. Azure Backup Server uses Volume Shadow Copy Service (VSS) to take the data snapshot and the snapshot only works if the volume is at least 1 GB. |
-| Servers (64-bit) | Windows Server 2019, 2016, 2012 R2, 2012 | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine) |
+| Servers (64-bit) | Windows Server 2019, 2016, 2012 R2, 2012 | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br>When protecting a WS 2016 NTFS deduped volume with MABS v3 running on Windows Server 2019, the recoveries may be affected. We have a fix for doing recoveries in a non-deduped way that will be part of later versions of MABS. Contact MABS support if you need this fix on MABS v3 UR1.<br><br> When protecting a WS 2019 NTFS deduped volume with MABS v3 on Windows Server 2016, the backups and restores will be non-deduped. This means that the backups will consume more space on the MABS server than the original NTFS deduped volume. <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine) |
| Servers (64-bit) | Windows Server 2008 R2 SP1, Windows Server 2008 SP2 (You need to install [Windows Management Framework](https://www.microsoft.com/download/details.aspx?id=54616)) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 | Volume, share, folder, file, system state/bare metal |
-| SQL Server | SQL Server 2019, 2017, 2016 and [supported SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202016), 2014 and supported [SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202014) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 | All deployment scenarios: database <br><br> MABS v3 UR1 supports the backup of SQL databases over ReFS volumes |
+| SQL Server | SQL Server 2019, 2017, 2016 and [supported SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202016), 2014 and supported [SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202014) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 | All deployment scenarios: database <br><br> MABS v3 UR1 supports the backup of SQL databases over ReFS volumes <br><br> MABS doesn't support SQL Server databases hosted on Windows Server 2012 Scale-Out File Servers (SOFS). <br><br> MABS can't protect a SQL Server Distributed Availability Group (DAG) or Availability Group (AG) where the role name on the failover cluster is different from the named AG on SQL. |
| Exchange | Exchange 2019, 2016 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack <br><br> Azure virtual machine (when workload is running as Azure virtual machine) | V3 UR1 | Protect (all deployment scenarios): Standalone Exchange server, database under a database availability group (DAG) <br><br> Recover (all deployment scenarios): Mailbox, mailbox databases under a DAG <br><br> Backup of Exchange over ReFS is supported with MABS v3 UR1 |
| SharePoint | SharePoint 2019, 2016 with latest SPs | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 | Protect (all deployment scenarios): Farm, frontend web server content <br><br> Recover (all deployment scenarios): Farm, database, web application, file, or list item, SharePoint search, frontend web server <br><br> Protecting a SharePoint farm that's using the SQL Server 2012 AlwaysOn feature for the content databases isn't supported. |
The following sections detail the protection support matrix for MABS:
| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** |
| --- | --- | --- | --- | --- |
-| Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM | Windows Server 2019, 2016, 2012 R2, 2012 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Protect: Hyper-V computers, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folder available only for Windows, volumes, virtual hard drives |
-| VMware VMs | VMware server 5.5, 6.0, or 6.5, 6.7 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folder available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. |
+| Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM | Windows Server 2019, 2016, 2012 R2, 2012 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Protect: Hyper-V computers, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
+| VMware VMs | VMware server 5.5, 6.0, or 6.5, 6.7 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. |
+
+>[!NOTE]
+> MABS doesn't support backup of virtual machines with pass-through disks or those that use a remote VHD. We recommend that in these scenarios you use guest-level backup using MABS, and install an agent on the virtual machine to back up the data.
## Linux

| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** |
| --- | --- | --- | --- | --- |
-| Linux | Linux running as [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) or [VMware](backup-azure-backup-server-vmware.md) guest | Physical server, On-premises Hyper-V VM, Windows VM in VMWare | V3 UR1 | Hyper-V must be running on Windows Server 2012 R2, Windows Server 2016 or Windows Server 2019. Protect: Entire virtual machine <br><br> Recover: Entire virtual machine <br><br> Only file-consistent snapshots are supported. <br><br> For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md). |
+| Linux | Linux running as [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) or [VMware](backup-azure-backup-server-vmware.md) guest | Physical server, On-premises Hyper-V VM, Windows VM in VMware | V3 UR1 | Hyper-V must be running on Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019. Protect: Entire virtual machine <br><br> Recover: Entire virtual machine <br><br> Only file-consistent snapshots are supported. <br><br> For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md). |
## Azure ExpressRoute support
With public peering: Ensure access to the following domains/addresses:
* 20.190.128.0/18
* 40.126.0.0/18

- With Microsoft peering, select the following services/regions and relevant community values:

* Azure Active Directory (12076:5060)
Azure Backup Server can protect data in the following clustered applications:
* SQL Server - Azure Backup Server doesn't support backing up SQL Server databases hosted on cluster-shared volumes (CSVs).
+>[!NOTE]
+>MABS only supports the protection of Hyper-V virtual machines on Cluster Shared Volumes (CSVs). Protecting other workloads hosted on CSVs isn't supported.
+ Azure Backup Server can protect cluster workloads that are located in the same domain as the MABS server, and in a child or trusted domain. If you want to protect data sources in untrusted domains or workgroups, use NTLM or certificate authentication for a single server, or certificate authentication only for a cluster.
+## Data protection issues
+
+* MABS can't back up VMs that use shared drives (which are potentially attached to other VMs), because the Hyper-V VSS writer can't back up volumes that are backed by shared VHDs.
+
+* When you protect a shared folder, the path to the shared folder includes the logical path on the volume. If you move the shared folder, protection will fail. If you must move a protected shared folder, remove it from its protection group and then add it to protection after the move. Also, if you change the path of a protected data source on a volume that uses the Encrypting File System (EFS) and the new file path exceeds 5120 characters, data protection will fail.
+
+* You can't change the domain of a protected computer and continue protection without disruption. Also, you can't change the domain of a protected computer and associate the existing replicas and recovery points with the computer when it's reprotected. If you must change the domain of a protected computer, then first remove the data sources on the computer from protection. Then protect the data source on the computer after it has a new domain.
+
+* You can't change the name of a protected computer and continue protection without disruption. Also, you can't change the name of a protected computer and associate the existing replicas and recovery points with the computer when it's reprotected. If you must change the name of a protected computer, then first remove the data sources on the computer from protection. Then protect the data source on the computer after it has a new name.
+
+* MABS automatically identifies the time zone of a protected computer during installation of the protection agent. If a protected computer is moved to a different time zone after protection is configured, ensure that you change the computer time in Control Panel. Then update the time zone in the MABS database.
+
+* MABS can protect workloads in the same domain as the MABS server, or in child and trusted domains. You can also protect the following workloads in workgroups and untrusted domains using NTLM or certificate authentication:
+
+ * SQL Server
+ * File Server
+ * Hyper-V
+
+ These workloads can be running on a single server or in a cluster configuration. To protect a workload that isn't in a trusted domain, see [Prepare computers in workgroups and untrusted domains](https://docs.microsoft.com/system-center/dpm/prepare-environment-for-dpm) for exact details of what's supported and what authentication is required.
+
+## Unsupported data types
+
+MABS doesn't support protecting the following data types:
+
+* Hard links
+
+* Reparse points, including DFS links and junction points
+
+* Mount point metadata - A protection group can contain data with mount points. In this case, DPM protects the mounted volume that is the target of the mount point, but it doesn't protect the mount point metadata. When you recover data containing mount points, you'll need to manually recreate your mount point hierarchy.
+
+* Data in mounted volumes within mounted volumes
+
+* Recycle Bin
+
+* Paging files
+
+* System Volume Information folder. To protect system information for a computer, you'll need to select the computer's system state as the protection group member.
+
+* Non-NTFS volumes
+
+* Files containing hard links or symbolic links from Windows Vista.
+
+* Data on file shares hosting UPDs (User Profile Disks)
+
+* Files with any of the following combinations of attributes:
+
+ * Encryption and reparse
+
+ * Encryption and Single Instance Storage (SIS)
+
+ * Encryption and case-sensitivity
+
+ * Encryption and sparse
+
+ * Case-sensitivity and SIS
+
+ * Compression and SIS
+
+## Next steps
+
+* [Support matrix for backup with Microsoft Azure Backup Server or System Center DPM](backup-support-matrix-mabs-dpm.md)
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-rbac-rs-vault.md
The following table captures the Backup management actions and corresponding min
| | Virtual Machine Contributor | VM resource | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
| On-demand backup of VM | Backup Operator | Recovery Services vault | |
| Restore VM | Backup Operator | Recovery Services vault | |
-| | Contributor | Resource group in which VM will be deployed | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.DomainRegistration/domains/write, Microsoft.Compute/virtualMachines/write Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action |
+| | Contributor | Resource group in which VM will be deployed | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.DomainRegistration/domains/write, Microsoft.Compute/virtualMachines/write Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action |
| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
| Restore unmanaged disks VM backup | Backup Operator | Recovery Services vault | |
| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
The following table captures the Backup management actions and corresponding min
| | Contributor | Resource group to which managed disk(s) will be restored | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Resources/subscriptions/resourceGroups/write |
| Restore individual files from VM backup | Backup Operator | Recovery Services vault | |
| | Virtual Machine Contributor | Source VM that got backed up | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write |
+| Cross region restore | Backup Operator | Subscription of the Recovery Services vault | This is in addition to the restore permissions mentioned above. Specifically for CRR, instead of a built-in role, you can consider a custom role which has the following permissions (a sketch of creating such a role follows this table): "Microsoft.RecoveryServices/locations/backupAadProperties/read" "Microsoft.RecoveryServices/locations/backupCrrJobs/action" "Microsoft.RecoveryServices/locations/backupCrrJob/action" "Microsoft.RecoveryServices/locations/backupCrossRegionRestore/action" "Microsoft.RecoveryServices/locations/backupCrrOperationResults/read" "Microsoft.RecoveryServices/locations/backupCrrOperationsStatus/read" |
| Create backup policy for Azure VM backup | Backup Contributor | Recovery Services vault | |
| Modify backup policy of Azure VM backup | Backup Contributor | Recovery Services vault | |
| Delete backup policy of Azure VM backup | Backup Contributor | Recovery Services vault | |
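
A minimal sketch of building such a custom role with Azure PowerShell, assuming the role name, description, and scope are your own choices (the permissions are the ones listed in the table above):

```azurepowershell
# Clone a built-in role as a template, then swap in the CRR permissions.
$role = Get-AzRoleDefinition -Name "Backup Operator"
$role.Id = $null
$role.Name = "Cross Region Restore Operator"   # hypothetical role name
$role.Description = "Permissions required for cross region restore"
$role.Actions.Clear()
$crrActions = @(
    "Microsoft.RecoveryServices/locations/backupAadProperties/read",
    "Microsoft.RecoveryServices/locations/backupCrrJobs/action",
    "Microsoft.RecoveryServices/locations/backupCrrJob/action",
    "Microsoft.RecoveryServices/locations/backupCrossRegionRestore/action",
    "Microsoft.RecoveryServices/locations/backupCrrOperationResults/read",
    "Microsoft.RecoveryServices/locations/backupCrrOperationsStatus/read"
)
$crrActions | ForEach-Object { $role.Actions.Add($_) }
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/00000000-0000-0000-0000-000000000000")
New-AzRoleDefinition -Role $role
```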
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Here's what's supported if you want to back up Linux machines.
Back up Linux Azure VMs with the Linux Azure VM agent | File consistent backup.<br/><br/> App-consistent backup using [custom scripts](backup-azure-linux-app-consistent.md).<br/><br/> During restore, you can create a new VM, restore a disk and use it to create a VM, or restore a disk and use it to replace a disk on an existing VM. You can also restore individual files and folders.
Back up Linux Azure VMs with MARS agent | Not supported.<br/><br/> The MARS agent can only be installed on Windows machines.
Back up Linux Azure VMs with DPM/MABS | Not supported.
+Back up Linux Azure VMs with Docker mount points | Currently, Azure Backup doesn't support exclusion of Docker mount points, as these are mounted at different paths every time.
## Operating system support (Linux)
Shared storage| Backing up VMs using Cluster Shared Volume (CSV) or Scale-Out Fi
[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.
Ultra SSD disks | Not supported. For more information, see these [limitations](selective-disk-backup-restore.md#limitations).
[Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Temporary disks aren't backed up by Azure Backup.
+NVMe/ephemeral disks | Not supported.
## VM network support
backup Backup Support Matrix Mabs Dpm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-mabs-dpm.md
DPM/MABS can be deployed as summarized in the following table.
**Deployment** | **Support** | **Details**
--- | --- | ---
-**Deployed on-premises** | Physical server<br/><br/>Hyper-V VM<br/><br/> VMware VM | Refer to the [protection matrix](backup-mabs-protection-matrix.md) for more details.
+**Deployed on-premises** | Physical server, but not in a physical cluster.<br/><br/>Hyper-V VM. You can deploy MABS as a guest machine on a standalone hypervisor or cluster. It can't be deployed on a node of a cluster or standalone hypervisor. The Azure Backup Server is designed to run on a dedicated, single-purpose server.<br/><br/> As a Windows virtual machine in a VMware environment. | On-premises MABS servers can't protect Azure-based workloads. <br><br> For more information, see [protection matrix](backup-mabs-protection-matrix.md).
**Deployed as an Azure Stack VM** | MABS only | DPM can't be used to back up Azure Stack VMs.
-**Deployed as an Azure VM** | Protects Azure VMs and workloads that are running on those VMs | DPM/MABS running in Azure can't back up on-premises machines.
+**Deployed as an Azure VM** | Protects Azure VMs and workloads that are running on those VMs | DPM/MABS running in Azure can't back up on-premises machines. It can only protect workloads that are running in Azure IaaS VMs.
## Supported MABS and DPM operating systems
Azure Backup can back up DPM/MABS instances that are running any of the followin
**MABS upgrade** | You can directly install MABS v3, or upgrade to MABS v3 from MABS v2. [Learn more](backup-azure-microsoft-azure-backup.md#upgrade-mabs).
**Moving MABS** | Moving MABS to a new server while retaining the storage is supported if you're using MBS.<br/><br/> The server must have the same name as the original. You can't change the name if you want to keep the same storage pool, and use the same MABS database to store data recovery points.<br/><br/> You'll need a backup of the MABS database because you'll need to restore it.
+>[!NOTE]
+>Renaming the DPM/MABS server isn't supported.
+ ## MABS support on Azure Stack

You can deploy MABS on an Azure Stack VM so that you can manage backup of Azure Stack VMs and workloads from a single location.
No connectivity for more than 15 days | Expired/deprovisioned | No backup to dis
|Requirement |Details |
|---|---|
|Domain | The DPM/MABS server should be in a Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012 domain. |
-|Domain trust | DPM/MABS supports data protection across forests, as long as you establish a forest-level, two-way trust between the separate forests. <BR><BR> DPM/MABS can protect servers and workstations across domains, within a forest that has a two-way trust relationship with the DPM/MABS server domain. To protect computers in workgroups or untrusted domains, see [Back up and restore workloads in workgroups and untrusted domains.](/system-center/dpm/back-up-machines-in-workgroups-and-untrusted-domains) |
+|Domain trust | DPM/MABS supports data protection across forests, as long as you establish a forest-level, two-way trust between the separate forests. <BR><BR> DPM/MABS can protect servers and workstations across domains, within a forest that has a two-way trust relationship with the DPM/MABS server domain. To protect computers in workgroups or untrusted domains, see [Back up and restore workloads in workgroups and untrusted domains.](/system-center/dpm/back-up-machines-in-workgroups-and-untrusted-domains) <br><br> To back up Hyper-V server clusters, they must be located in the same domain as the MABS server or in a trusted or child domain. You can back up servers and clusters in an untrusted domain or workgroup using NTLM or certificate authentication for a single server, or certificate authentication only for a cluster. |
## DPM/MABS storage support

Data that's backed up to DPM/MABS is stored on local disk storage.
+USB or removable drives aren't supported.
+
+NTFS compression isn't supported on DPM/MABS volumes.
+
+BitLocker can only be enabled after you add the disk to the storage pool. Don't enable BitLocker before adding it.
+
+Network-attached storage (NAS) isn't supported for use in the DPM storage pool.
**Storage** | **Details**
--- | ---
**MBS** | Modern backup storage (MBS) is supported from DPM 2016/MABS v2 and later. It isn't available for MABS v1.
For information on the various servers and workloads that you can protect with D
- Clustered workloads backed up by DPM/MABS should be in the same domain as DPM/MABS or in a child/trusted domain.
- You can use NTLM/certificate authentication to back up data in untrusted domains or workgroups.
+## Deduplicated volumes support
+
+>[!NOTE]
+> Deduplication support for MABS depends on operating system support.
+
+### For NTFS volumes
+
+| Operating system of protected server | Operating system of MABS server | MABS version | Dedup support |
+| | - | | -- |
+| Windows Server 2019 | Windows Server 2019 | MABS v3 | Y |
+| Windows Server 2016 | Windows Server 2019 | MABS v3 | Y* |
+| Windows Server 2012 R2 | Windows Server 2019 | MABS v3 | N |
+| Windows Server 2012 | Windows Server 2019 | MABS v3 | N |
+| Windows Server 2019 | Windows Server 2016 | MABS v3 | Y** |
+| Windows Server 2016 | Windows Server 2016 | MABS v3 | Y |
+| Windows Server 2012 R2 | Windows Server 2016 | MABS v3 | Y |
+| Windows Server 2012 | Windows Server 2016 | MABS v3 | Y |
+
+- \* When protecting a WS 2016 NTFS deduped volume with MABS v3 running on WS 2019, the recoveries may be affected. We have a fix for doing recoveries in a non-deduped way. Reach out to MABS support if you need this fix on MABS v3 UR1.
+- \** When protecting a WS 2019 NTFS deduped volume with MABS v3 on WS 2016, the backups and restores will be non-deduped. This means that the backups will consume more space on the MABS server than the original NTFS deduped volume.
+
+**Issue**: If you upgrade the protected server operating system from Windows Server 2016 to Windows Server 2019, then the backup of the NTFS deduped volume will be affected due to changes in the deduplication logic.
+
+**Workaround**: Reach out to MABS support in case you need this fix for MABS v3 UR1.
+
+### For ReFS Volumes
+
+>[!NOTE]
+> We have identified a few issues with backups of deduplicated ReFS volumes. We are working on fixing these, and will update this section as soon as we have a fix available. Until then, we are removing the support for backup of deduplicated ReFS volumes from MABS v3.
+>
+> MABS v3 UR1 and later continues to support protection and recovery of normal ReFS volumes.
+ ## Next steps

- [Learn more](backup-architecture.md#architecture-back-up-to-dpmmabs) about MABS architecture.
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-custom-ssl.md
The following table shows the operation progress that occurs when you disable HT
To ensure a newer certificate is deployed to PoP infrastructure, upload your new certificate to Azure Key Vault. In your TLS settings on Azure CDN, choose the newest certificate version and select **Save**. Azure CDN will then propagate your updated certificate.
+8. *Do I need to re-enable HTTPS after the endpoint restarts?*
+
+ Yes. If you're using **Azure CDN from Akamai** and the endpoint stops and restarts, you must re-enable the HTTPS setting if it was active before.
## Next steps

In this tutorial, you learned how to:
In this tutorial, you learned how to:
Advance to the next tutorial to learn how to configure caching on your CDN endpoint. > [!div class="nextstepaction"]
-> [Tutorial: Set Azure CDN caching rules](cdn-caching-rules-tutorial.md)
+> [Tutorial: Set Azure CDN caching rules](cdn-caching-rules-tutorial.md)
cloud-services-extended-support Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/overview.md
The major differences between Cloud Services (classic) and Cloud Services (exten
## Migration to Azure Resource Manager
-Cloud Services (extended support) provides two paths for you to migrate from [Azure Service Manager](/powershell/azure/servicemanagement/overview?preserve-view=true&view=azuresmps-4.0.0) to [Azure Resource Manager](../azure-resource-manager/management/overview.md).
+Cloud Services (extended support) provides two paths for you to migrate from [Azure Service Manager](/powershell/azure/servicemanagement/overview) to [Azure Resource Manager](../azure-resource-manager/management/overview.md).
1) Customers deploy cloud services directly in Azure Resource Manager and then delete the old cloud service in Azure Service Manager. 2) In-place migration supports the ability to migrate Cloud Services (classic) with minimal to no downtime to Cloud Services (extended support).
cloud-services Cloud Services Configuration And Management Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-configuration-and-management-faq.md
$cert = New-SelfSignedCertificate -DnsName yourdomain.cloudapp.net -CertStoreLoc
$password = ConvertTo-SecureString -String "your-password" -Force -AsPlainText Export-PfxCertificate -Cert $cert -FilePath ".\my-cert-file.pfx" -Password $password ```
-Ability to choose blob or local for your csdef and cscfg upload location is coming soon. Using [New-AzureDeployment](/powershell/module/servicemanagement/azure.service/new-azuredeployment?view=azuresmps-4.0.0&preserve-view=true), you can set each location value.
+Ability to choose blob or local for your csdef and cscfg upload location is coming soon. Using [New-AzureDeployment](/powershell/module/servicemanagement/azure.service/new-azuredeployment), you can set each location value.
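
A sketch of such a call (the service name, package URL, and configuration path are placeholders):

```powershell
# Deploy a packaged cloud service: the package from blob storage, the .cscfg locally.
New-AzureDeployment -ServiceName "mycloudservice" `
    -Package "https://mystorage.blob.core.windows.net/packages/app.cspkg" `
    -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
    -Slot "Production"
```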
Ability to monitor metrics at the instance level. Additional monitoring capabilities are available in [How to Monitor Cloud Services](cloud-services-how-to-monitor.md).
The journal settings are non-configurable, so you can't turn it off.
You can enable Antimalware extension using PowerShell script in the Startup Task. Follow the steps in these articles to implement it: - [Create a PowerShell startup task](cloud-services-startup-tasks-common.md#create-a-powershell-startup-task)-- [Set-AzureServiceAntimalwareExtension](/powershell/module/servicemanagement/azure.service/Set-AzureServiceAntimalwareExtension?view=azuresmps-4.0.0&preserve-view=true)
+- [Set-AzureServiceAntimalwareExtension](/powershell/module/servicemanagement/azure.service/Set-AzureServiceAntimalwareExtension)
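
As a rough sketch, the startup task's script can load an Antimalware configuration and apply it like this (the config file name is hypothetical):

```powershell
# Load the Antimalware settings (XML) and enable the extension on the service.
[xml]$antimalwareConfig = Get-Content ".\AntimalwareConfig.xml"   # hypothetical file
Set-AzureServiceAntimalwareExtension -ServiceName "MyCloudService" -AntimalwareConfiguration $antimalwareConfig
```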
For more information about Antimalware deployment scenarios and how to enable it from the portal, see [Antimalware Deployment Scenarios](../security/fundamentals/antimalware.md#antimalware-deployment-scenarios).
cloud-services Cloud Services Diagnostics Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-diagnostics-powershell.md
You can collect diagnostic data such as application logs and performance counters from a Cloud Service using the Azure Diagnostics extension. This article describes how to enable the Azure Diagnostics extension for a Cloud Service using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article.

## Enable diagnostics extension as part of deploying a Cloud Service
-This approach is applicable to continuous integration type of scenarios, where the diagnostics extension can be enabled as part of deploying the cloud service. When creating a new Cloud Service deployment, you can enable the diagnostics extension by passing in the *ExtensionConfiguration* parameter to the [New-AzureDeployment](/powershell/module/servicemanagement/azure.service/new-azuredeployment?view=azuresmps-3.7.0&preserve-view=true&preserve-view=true) cmdlet. The *ExtensionConfiguration* parameter takes an array of diagnostics configurations that can be created using the [New-AzureServiceDiagnosticsExtensionConfig](/powershell/module/servicemanagement/azure.service/new-azureservicediagnosticsextensionconfig?view=azuresmps-3.7.0&preserve-view=true) cmdlet.
+This approach is applicable to continuous integration type of scenarios, where the diagnostics extension can be enabled as part of deploying the cloud service. When creating a new Cloud Service deployment, you can enable the diagnostics extension by passing in the *ExtensionConfiguration* parameter to the [New-AzureDeployment](/powershell/module/servicemanagement/azure.service/new-azuredeployment) cmdlet. The *ExtensionConfiguration* parameter takes an array of diagnostics configurations that can be created using the [New-AzureServiceDiagnosticsExtensionConfig](/powershell/module/servicemanagement/azure.service/new-azureservicediagnosticsextensionconfig) cmdlet.
The following example shows how you can enable diagnostics for a cloud service with a WebRole and WorkerRole, each having a different diagnostics configuration.
$workerrole_diagconfig = New-AzureServiceDiagnosticsExtensionConfig -Role "Worke
``` ## Enable diagnostics extension on an existing Cloud Service
-You can use the [Set-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/set-azureservicediagnosticsextension?view=azuresmps-3.7.0&preserve-view=true) cmdlet to enable or update diagnostics configuration on a Cloud Service that is already running.
+You can use the [Set-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/set-azureservicediagnosticsextension) cmdlet to enable or update diagnostics configuration on a Cloud Service that is already running.
[!INCLUDE [cloud-services-wad-warning](../../includes/cloud-services-wad-warning.md)]
Set-AzureServiceDiagnosticsExtension -DiagnosticsConfiguration @($webrole_diagco
``` ## Get current diagnostics extension configuration
-Use the [Get-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/get-azureservicediagnosticsextension?view=azuresmps-3.7.0&preserve-view=true) cmdlet to get the current diagnostics configuration for a cloud service.
+Use the [Get-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/get-azureservicediagnosticsextension) cmdlet to get the current diagnostics configuration for a cloud service.
```powershell Get-AzureServiceDiagnosticsExtension -ServiceName "MyService" ``` ## Remove diagnostics extension
-To turn off diagnostics on a cloud service, you can use the [Remove-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/remove-azureservicediagnosticsextension?view=azuresmps-3.7.0&preserve-view=true) cmdlet.
+To turn off diagnostics on a cloud service, you can use the [Remove-AzureServiceDiagnosticsExtension](/powershell/module/servicemanagement/azure.service/remove-azureservicediagnosticsextension) cmdlet.
```powershell Remove-AzureServiceDiagnosticsExtension -ServiceName "MyService"
cloud-services Cloud Services How To Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-manage-portal.md
There are two key prerequisites for a successful deployment swap:
- If you want to use a static IP address for your production slot, you must reserve one for your staging slot as well. Otherwise, the swap fails. -- All instances of your roles must be running before you can perform the swap. You can check the status of your instances on the **Overview** blade of the Azure portal. Alternatively, you can use the [Get-AzureRole](/powershell/module/servicemanagement/azure.service/get-azurerole?view=azuresmps-3.7.0&preserve-view=true) command in Windows PowerShell.
+- All instances of your roles must be running before you can perform the swap. You can check the status of your instances on the **Overview** blade of the Azure portal. Alternatively, you can use the [Get-AzureRole](/powershell/module/servicemanagement/azure.service/get-azurerole) command in Windows PowerShell.
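
  For example, a quick pre-swap check might look like this (the service name is a placeholder):

  ```powershell
  # List every role instance and its current status for the staging deployment.
  Get-AzureRole -ServiceName "MyCloudService" -Slot "Staging" -InstanceDetails
  ```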
Note that guest OS updates and service healing operations also can cause deployment swaps to fail. For more information, see [Troubleshoot cloud service deployment problems](cloud-services-troubleshoot-deployment-problems.md).
cloud-services Cloud Services Powershell Create Cloud Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-powershell-create-cloud-container.md
This article explains how to quickly create a Cloud Services container using Azu
1. Install the Microsoft Azure PowerShell cmdlet from the [Azure PowerShell downloads](https://aka.ms/webpi-azps) page. 2. Open the PowerShell command prompt.
-3. Use the [Add-AzureAccount](/powershell/module/servicemanagement/azure.service/add-azureaccount?view=azuresmps-4.0.0&preserve-view=true) to sign in.
+3. Use the [Add-AzureAccount](/powershell/module/servicemanagement/azure.service/add-azureaccount) cmdlet to sign in.
> [!NOTE] > For further instruction on installing the Azure PowerShell cmdlet and connecting to your Azure subscription, refer to [How to install and configure Azure PowerShell](/powershell/azure/).
Get-help New-AzureService
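
Putting the sign-in and container creation together, a minimal sketch (the subscription, service name, and location are placeholders):

```powershell
# Sign in with the classic (Azure Service Manager) cmdlets, pick a subscription,
# and create the cloud service container that deployments are uploaded into.
Add-AzureAccount
Select-AzureSubscription -SubscriptionName "My Subscription"
New-AzureService -ServiceName "mycloudsvc" -Location "East US"
```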
### Next steps
-* To manage the cloud service deployment, refer to the [Get-AzureService](/powershell/module/servicemanagement/azure.service/Get-AzureService?view=azuresmps-4.0.0&preserve-view=true), [Remove-AzureService](/powershell/module/servicemanagement/azure.service/Remove-AzureService?view=azuresmps-4.0.0&preserve-view=true), and [Set-AzureService](/powershell/module/servicemanagement/azure.service/set-azureservice?view=azuresmps-4.0.0&preserve-view=true) commands. You may also refer to [How to configure cloud services](cloud-services-how-to-configure-portal.md) for further information.
+* To manage the cloud service deployment, refer to the [Get-AzureService](/powershell/module/servicemanagement/azure.service/Get-AzureService), [Remove-AzureService](/powershell/module/servicemanagement/azure.service/Remove-AzureService), and [Set-AzureService](/powershell/module/servicemanagement/azure.service/set-azureservice) commands. You may also refer to [How to configure cloud services](cloud-services-how-to-configure-portal.md) for further information.
* To publish your cloud service project to Azure, refer to the **PublishCloudService.ps1** code sample from [archived cloud services repository](https://github.com/MicrosoftDocs/azure-cloud-services-files/tree/master/Scripts/cloud-services-continuous-delivery).
cloud-services Cloud Services Role Enable Remote Desktop Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-enable-remote-desktop-powershell.md
Remote Desktop enables you to access the desktop of a role running in Azure. You
This article describes how to enable remote desktop on your Cloud Service Roles using PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for the prerequisites needed for this article. PowerShell utilizes the Remote Desktop Extension so you can enable Remote Desktop after the application is deployed. ## Configure Remote Desktop from PowerShell
-The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/set-azureserviceremotedesktopextension?view=azuresmps-3.7.0&preserve-view=true) cmdlet allows you to enable Remote Desktop on specified roles or all roles of your cloud service deployment. The cmdlet lets you specify the Username and Password for the remote desktop user through the *Credential* parameter that accepts a PSCredential object.
+The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/set-azureserviceremotedesktopextension) cmdlet allows you to enable Remote Desktop on specified roles or all roles of your cloud service deployment. The cmdlet lets you specify the Username and Password for the remote desktop user through the *Credential* parameter that accepts a PSCredential object.
If you are using PowerShell interactively, you can easily set the PSCredential object by calling the [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential) cmdlet.
ConvertTo-SecureString -String "Password123" -AsPlainText -Force | ConvertFrom-S
To create the credential object from the secure password file, you must read the file contents and convert them back to a secure string using [ConvertTo-SecureString](/powershell/module/microsoft.powershell.security/convertto-securestring).
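
A sketch of that conversion, assuming the password was exported to password.txt with the command above (the username is a placeholder):

```powershell
# Read the encrypted standard string back and rebuild the PSCredential object.
$encrypted = Get-Content -Path ".\password.txt"
$securePassword = ConvertTo-SecureString -String $encrypted
$credential = New-Object System.Management.Automation.PSCredential ("RdpUser", $securePassword)
```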
-The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/set-azureserviceremotedesktopextension?view=azuresmps-3.7.0&preserve-view=true) cmdlet also accepts an *Expiration* parameter, which specifies a **DateTime** at which the user account expires. For example, you could set the account to expire a few days from the current date and time.
+The [Set-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/set-azureserviceremotedesktopextension) cmdlet also accepts an *Expiration* parameter, which specifies a **DateTime** at which the user account expires. For example, you could set the account to expire a few days from the current date and time.
This PowerShell example shows you how to set the Remote Desktop Extension on a cloud service:
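
A minimal sketch (the service name is a placeholder, and the 10-day expiration is arbitrary):

```powershell
# Prompt for the remote desktop user's credentials, then enable the extension
# with the account set to expire in 10 days.
$credential = Get-Credential
Set-AzureServiceRemoteDesktopExtension -ServiceName "MyCloudService" -Credential $credential -Expiration (Get-Date).AddDays(10)
```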
The Remote Desktop extension is associated with a deployment. If you create a ne
## Remote Desktop into a role instance
-The [Get-AzureRemoteDesktopFile](/powershell/module/servicemanagement/azure.service/get-azureremotedesktopfile?view=azuresmps-3.7.0&preserve-view=true) cmdlet is used to remote desktop into a specific role instance of your cloud service. You can use the *LocalPath* parameter to download the RDP file locally. Or you can use the *Launch* parameter to directly launch the Remote Desktop Connection dialog to access the cloud service role instance.
+The [Get-AzureRemoteDesktopFile](/powershell/module/servicemanagement/azure.service/get-azureremotedesktopfile) cmdlet is used to remote desktop into a specific role instance of your cloud service. You can use the *LocalPath* parameter to download the RDP file locally. Or you can use the *Launch* parameter to directly launch the Remote Desktop Connection dialog to access the cloud service role instance.
```powershell Get-AzureRemoteDesktopFile -ServiceName $servicename -Name "WorkerRole1_IN_0" -Launch
Get-AzureRemoteDesktopFile -ServiceName $servicename -Name "WorkerRole1_IN_0" -L
## Check if Remote Desktop extension is enabled on a service
-The [Get-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/get-azureremotedesktopfile?view=azuresmps-3.7.0&preserve-view=true) cmdlet displays that remote desktop is enabled or disabled on a service deployment. The cmdlet returns the username for the remote desktop user and the roles that the remote desktop extension is enabled for. By default, this happens on the deployment slot and you can choose to use the staging slot instead.
+The [Get-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/get-azureserviceremotedesktopextension) cmdlet shows whether remote desktop is enabled or disabled on a service deployment. The cmdlet returns the username for the remote desktop user and the roles that the remote desktop extension is enabled for. By default, this happens on the production deployment slot, and you can choose to use the staging slot instead.
```powershell Get-AzureServiceRemoteDesktopExtension -ServiceName $servicename
Get-AzureServiceRemoteDesktopExtension -ServiceName $servicename
If you have already enabled the remote desktop extension on a deployment and need to update the remote desktop settings, first remove the extension, and then enable it again with the new settings. For example, you might want to set a new password for the remote user account, or the account might have expired. Doing this is required on existing deployments that have the remote desktop extension enabled. For new deployments, you can simply apply the extension directly.
-To remove the remote desktop extension from the deployment, you can use the [Remove-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/remove-azureserviceremotedesktopextension?view=azuresmps-3.7.0&preserve-view=true) cmdlet. You can also optionally specify the deployment slot and role from which you want to remove the remote desktop extension.
+To remove the remote desktop extension from the deployment, you can use the [Remove-AzureServiceRemoteDesktopExtension](/powershell/module/servicemanagement/azure.service/remove-azureserviceremotedesktopextension) cmdlet. You can also optionally specify the deployment slot and role from which you want to remove the remote desktop extension.
```powershell Remove-AzureServiceRemoteDesktopExtension -ServiceName $servicename -UninstallConfiguration
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
The container also has the following container-specific configuration settings:
|No|Queue:Azure:QueueVisibilityTimeoutInMilliseconds | v3.x containers only. The time for a message to be invisible when another worker is processing it. |
|No|Storage::DocumentStore::MongoDB|v2.0 containers only. Enables MongoDB for permanent result storage. |
|No|Storage:ObjectStore:AzureBlob:ConnectionString| v3.x containers only. Azure blob storage connection string. |
+|No|Storage:TimeToLiveInDays| v3.x containers only. Result expiration period in days. The setting specifies when the system should clear recognition results. The default is 2 days (48 hours), which means any result that lives longer than that period is not guaranteed to be retrieved successfully. |
+|No|Task:MaxRunningTimeSpanInMinutes| v3.x containers only. Maximum running time for a single request. The default is 60 minutes. A sketch of how these settings can be supplied follows this table. |
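
As a hedged sketch, these settings can be passed on the `docker run` command line alongside the required billing settings; the image tag and setting values below are illustrative:

```bash
docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
    mcr.microsoft.com/azure-cognitive-services/vision/read:3.2 \
    Eula=accept \
    Billing={ENDPOINT_URI} \
    ApiKey={API_KEY} \
    Storage:TimeToLiveInDays=1 \
    Task:MaxRunningTimeSpanInMinutes=30
```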
## ApiKey configuration setting
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
Install the 1.0.9 release:
sudo apt-get install iotedge=1.0.9* libiothsm-std=1.0.9* ```
-Next, register the host computer as an IoT Edge device in your IoT Hub instance, using a [connection string](../../iot-edge/how-to-register-device.md?view=iotedge-2018-06).
+Next, register the host computer as an IoT Edge device in your IoT Hub instance, using a [connection string](../../iot-edge/how-to-register-device.md).
You need to connect the IoT Edge device to your Azure IoT Hub. Copy the connection string from the IoT Edge device you created earlier. Alternatively, you can run the following command in the Azure CLI.
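
A sketch of that CLI call (the device and hub names are placeholders):

```azurecli
az iot hub device-identity show-connection-string --device-id myEdgeDevice --hub-name myIoTHub
```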
Install the 1.0.9 release:
sudo apt-get install iotedge=1.0.9* libiothsm-std=1.0.9* ```
-Next, register the VM as an IoT Edge device in your IoT Hub instance, using a [connection string](../../iot-edge/how-to-register-device.md?view=iotedge-2018-06).
+Next, register the VM as an IoT Edge device in your IoT Hub instance, using a [connection string](../../iot-edge/how-to-register-device.md).
You need to connect the IoT Edge device to your Azure IoT Hub. Copy the connection string from the IoT Edge device you created earlier. Alternatively, you can run the following command in the Azure CLI.
cognitive-services Luis Concept Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-enterprise.md
If your app is meant to predict a wide variety of user utterances, consider impl
Schedule a periodic [review of endpoint utterances](luis-how-to-review-endpoint-utterances.md) for active learning, such as every two weeks, then retrain and republish. ## When you need to have more than 500 intents
-Assume you're developing an office assistant that has over 500 intents. If 200 intents relate to scheduling meetings, 200 are about reminders, 200 are about getting information about colleagues, and 200 are for sending email, group intents so that each group is in a single app, then create a top-level app containing each intent. Use the [dispatch model](#dispatch-tool-and-model) to build the top-level app. Then change your bot to use the cascading call as shown in the [dispatch model's tutorial](/azure/bot-service/bot-builder-tutorial-dispatch?tabs=cs&view=azure-bot-service-4.0).
+Assume you're developing an office assistant that has over 500 intents. If 200 intents relate to scheduling meetings, 200 are about reminders, 200 are about getting information about colleagues, and 200 are for sending email, group intents so that each group is in a single app, then create a top-level app containing each intent. Use the [dispatch model](#dispatch-tool-and-model) to build the top-level app. Then change your bot to use the cascading call as shown in the [dispatch model's tutorial](/azure/bot-service/bot-builder-tutorial-dispatch?tabs=cs).
## When you need to combine several LUIS and QnA Maker apps
-If you have several LUIS and QnA maker apps that need to respond to a bot, use the [dispatch model](#dispatch-tool-and-model) to build the top-level app. Then change your bot to use the cascading call as shown in the [dispatch model's tutorial](/azure/bot-service/bot-builder-tutorial-dispatch?tabs=cs&view=azure-bot-service-4.0).
+If you have several LUIS and QnA Maker apps that need to respond to a bot, use the [dispatch model](#dispatch-tool-and-model) to build the top-level app. Then change your bot to use the cascading call as shown in the [dispatch model's tutorial](/azure/bot-service/bot-builder-tutorial-dispatch?tabs=cs).
## Dispatch tool and model Use the [Dispatch][dispatch-tool] command-line tool, found in [BotBuilder-tools](https://github.com/Microsoft/botbuilder-tools) to combine multiple LUIS and/or QnA Maker apps into a parent LUIS app. This approach allows you to have a parent domain including all subjects and different child subject domains in separate apps.
The parent domain is noted in LUIS with a version named `Dispatch` in the apps l
The chat bot receives the utterance, then sends it to the parent LUIS app for prediction. The top predicted intent from the parent app determines which LUIS child app is called next. The chat bot then sends the utterance to the child app for a more specific prediction.
-Understand how this hierarchy of calls is made from the Bot Builder v4 [dispatcher-application-tutorial](/azure/bot-service/bot-builder-tutorial-dispatch?tabs=cs&view=azure-bot-service-4.0).
+Understand how this hierarchy of calls is made from the Bot Builder v4 [dispatcher-application-tutorial](/azure/bot-service/bot-builder-tutorial-dispatch?tabs=cs).
### Intent limits in dispatch model A dispatch application has 500 dispatch sources, equivalent to 500 intents, as the maximum.
A dispatch application has 500 dispatch sources, equivalent to 500 intents, as t
## More information * [Bot framework SDK](https://github.com/Microsoft/botframework)
-* [Dispatch model tutorial](/azure/bot-service/bot-builder-tutorial-dispatch?tabs=cs&view=azure-bot-service-4.0)
+* [Dispatch model tutorial](/azure/bot-service/bot-builder-tutorial-dispatch?tabs=cs)
* [Dispatch CLI](https://github.com/Microsoft/botbuilder-tools) * Dispatch model bot sample - [.NET](https://github.com/microsoft/BotBuilder-Samples/tree/master/samples/csharp_dotnetcore/14.nlp-with-dispatch), [Node.js](https://github.com/microsoft/BotBuilder-Samples/tree/master/samples/javascript_nodejs/14.nlp-with-dispatch)
cognitive-services Luis Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-traffic-manager.md
Remove the two LUIS endpoint keys, the three Traffic Manager profiles, and the r
## Next steps
-Review [middleware](/azure/bot-service/bot-builder-create-middleware?tabs=csaddmiddleware%252ccsetagoverwrite%252ccsmiddlewareshortcircuit%252ccsfallback%252ccsactivityhandler&view=azure-bot-service-4.0) options in BotFramework v4 to understand how this traffic management code can be added to a BotFramework bot.
+Review [middleware](/azure/bot-service/bot-builder-create-middleware?tabs=csaddmiddleware%252ccsetagoverwrite%252ccsmiddlewareshortcircuit%252ccsfallback%252ccsactivityhandler) options in BotFramework v4 to understand how this traffic management code can be added to a BotFramework bot.
[traffic-manager-marketing]: https://azure.microsoft.com/services/traffic-manager/ [traffic-manager-docs]: ../../traffic-manager/index.yml
cognitive-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/troubleshooting.md
If you are using the Azure Bot Service and the issue is that the **Test in Web C
#### Resolve issue while debugging on local machine with Bot Framework.
-To learn more about local debugging of a bot, see [Debug a bot](/azure/bot-service/bot-service-debug-bot?view=azure-bot-service-4.0).
+To learn more about local debugging of a bot, see [Debug a bot](/azure/bot-service/bot-service-debug-bot).
## Integrating LUIS
cognitive-services Tutorial Intents Only https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/tutorial-intents-only.md
After LUIS returns the JSON response, LUIS is done with this request. LUIS doesn
* [How to train](luis-how-to-train.md) * [How to publish](luis-how-to-publish-app.md) * [How to test in LUIS portal](luis-interactive-test.md)
-* [Azure Bot](/azure/bot-service/?view=azure-bot-service-4.0)
+* [Azure Bot](/azure/bot-service/)
## Next steps
cognitive-services Multiturn Conversation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/multiturn-conversation.md
QnA Maker supports version control by including multi-turn conversation steps in
## Next steps
-Learn more about contextual conversations from this [dialog sample](https://github.com/microsoft/BotBuilder-Samples/blob/master/samples/csharp_dotnetcore/adaptive-dialog/07.qnamaker/QnAMaker.csproj) or learn more about [conceptual bot design for multi-turn conversations](/azure/bot-service/bot-builder-conversations?view=azure-bot-service-4.0).
+Learn more about contextual conversations from this [dialog sample](https://github.com/microsoft/BotBuilder-Samples/blob/master/samples/csharp_dotnetcore/adaptive-dialog/07.qnamaker/QnAMaker.csproj) or learn more about [conceptual bot design for multi-turn conversations](/azure/bot-service/bot-builder-conversations).
> [!div class="nextstepaction"] > [Migrate a knowledge base](../Tutorials/migrate-knowledge-base.md)
cognitive-services Create Faq Bot With Azure Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Tutorials/create-faq-bot-with-azure-bot-service.md
When you make changes to the knowledge base and republish, you don't need to tak
The chat bot responds with an answer from your knowledge base. :::image type="content" source="../media/qnamaker-create-publish-knowledge-base/test-web-chat.png" alt-text="Enter a user query into the test web chat.":::
-1. Light up the Bot in additional [supported channels](/azure/bot-service/bot-service-manage-channels?preserve-view=true&view=azure-bot-service-4.0).
+1. Light up the Bot in additional [supported channels](/azure/bot-service/bot-service-manage-channels).
cognitive-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/troubleshooting.md
The name of the Azure Cognitive Search resource is the QnA Maker resource name w
<summary><b>Do I need to use Bot Framework in order to use QnA Maker?</b></summary> **Answer**:
-No, you do not need to use the [Bot Framework](https://github.com/Microsoft/botbuilder-dotnet) with QnA Maker. However, QnA Maker is offered as one of several templates in [Azure Bot Service](/azure/bot-service/?preserve-view=true&view=azure-bot-service-4.0). Bot Service enables rapid intelligent bot development through Microsoft Bot Framework, and it runs in a server-less environment.
+No, you do not need to use the [Bot Framework](https://github.com/Microsoft/botbuilder-dotnet) with QnA Maker. However, QnA Maker is offered as one of several templates in [Azure Bot Service](/azure/bot-service/). Bot Service enables rapid intelligent bot development through Microsoft Bot Framework, and it runs in a serverless environment.
</details>
Follow these steps to embed the QnA Maker service as a web-chat control in your
<summary><b>Do I need to use Bot Framework in order to use QnA Maker?</b></summary> **Answer**:
-No, you do not need to use the [Bot Framework](https://github.com/Microsoft/botbuilder-dotnet) with QnA Maker. However, QnA Maker is offered as one of several templates in [Azure Bot Service](/azure/bot-service/?preserve-view=true&view=azure-bot-service-4.0). Bot Service enables rapid intelligent bot development through Microsoft Bot Framework, and it runs in a server-less environment.
+No, you do not need to use the [Bot Framework](https://github.com/Microsoft/botbuilder-dotnet) with QnA Maker. However, QnA Maker is offered as one of several templates in [Azure Bot Service](/azure/bot-service/). Bot Service enables rapid intelligent bot development through Microsoft Bot Framework, and it runs in a serverless environment.
</details>
cognitive-services Direct Line Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/direct-line-speech.md
Direct Line Speech and its associated functionality for voice assistants are an
## Reference docs

* [Speech SDK](./speech-sdk.md)
-* [Azure Bot Service](/azure/bot-service/?view=azure-bot-service-4.0)
+* [Azure Bot Service](/azure/bot-service/)
## Next steps

* [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
* [Get the Speech SDK](speech-sdk.md)
-* [Create and deploy a basic bot](/azure/bot-service/bot-builder-tutorial-basic-deploy?view=azure-bot-service-4.0)
+* [Create and deploy a basic bot](/azure/bot-service/bot-builder-tutorial-basic-deploy)
* [Get the Virtual Assistant Solution and Enterprise Template](https://github.com/Microsoft/AI)
cognitive-services Faq Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-voice-assistants.md
If you can't find answers to your questions in this document, check out [other s
**A:** The best way to begin is to create a Custom Commands (Preview) application or a basic Bot Framework bot.
- [Create a Custom Commands (Preview) application](./quickstart-custom-commands-application.md)
-- [Create a basic Bot Framework bot](/azure/bot-service/bot-builder-tutorial-basic-deploy?view=azure-bot-service-4.0)
+- [Create a basic Bot Framework bot](/azure/bot-service/bot-builder-tutorial-basic-deploy)
- [Connect a bot to the Direct Line Speech channel](/azure/bot-service/bot-service-channel-connect-directlinespeech)

## Debugging
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/long-audio-api.md
When preparing your text file, make sure it:
* For plain text, each paragraph is separated by hitting **Enter/Return** - View [plain text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/en-US.txt)
* For SSML text, each SSML piece is considered a paragraph. Separate SSML pieces into different paragraphs - View [SSML text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/SSMLTextInputSample.txt)
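To make the plain-text convention concrete, here is a minimal Python sketch that writes an input file in this shape. The file name and sentences are placeholders, not from the article:

```Python
# Minimal sketch: build a plain-text input file for the Long Audio API.
# Each paragraph goes on its own line (separated by Enter/Return).
paragraphs = [
    "This is the first paragraph of the text to synthesize.",
    "This is the second paragraph; it is treated as a separate unit.",
]

with open("en-US.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(paragraphs))
```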
+## Sample code
+The remainder of this page will focus on Python, but sample code for the Long Audio API is available on GitHub for the following programming languages:
+
+* [Sample code: Python](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/Python)
+* [Sample code: C#](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/CSharp)
+* [Sample code: Java](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/)
+## Python example
+
+This section contains Python examples that show the basic usage of the Long Audio API. Create a new Python project using your favorite IDE or editor. Then copy this code snippet into a file named `long_audio_synthesis_client.py`.
We support flexible audio output formats. You can generate audio outputs per par
* audio-24khz-48kbitrate-mono-mp3
* audio-24khz-96kbitrate-mono-mp3
* audio-24khz-160kbitrate-mono-mp3
-## Sample code
-Sample code for Long Audio API is available on GitHub.
-
-* [Sample code: Python](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/Python)
-* [Sample code: C#](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/CSharp)
-* [Sample code: Java](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
- **All**: Custom voice is also easier to use. Added support for setting custom voice via `EndpointId` ([C++](/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#endpointId), [Objective-C](/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like public voices, and then provide the deployment ID by setting `EndpointId`. This simplifies setting up custom voices.
- **C++/C#/Jav#add-a-languageunderstandingmodel-and-intents).
- **C++/C#/Java**: Make your voice assistant or bot stop listening immediately. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector)) now has a `StopListeningAsync()` method to accompany `ListenOnceAsync()`. This will immediately stop audio capture and gracefully wait for a result, making it perfect for use with "stop now" button-press scenarios.
-- **C++/C#/Java/JavaScript**: Make your voice assistant or bot react better to underlying system errors. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector)) now has a new `TurnStatusReceived` event handler. These optional events correspond to every [`ITurnContext`](/dotnet/api/microsoft.bot.builder.iturncontext?view=botbuilder-dotnet-stable) resolution on the Bot and will report turn execution failures when they happen, e.g. as a result of an unhandled exception, timeout, or network drop between Direct Line Speech and the bot. `TurnStatusReceived` makes it easier to respond to failure conditions. For example, if a bot takes too long on a backend database query (e.g. looking up a product), `TurnStatusReceived` allows the client to know to reprompt with "sorry, I didn't quite get that, could you please try again" or something similar.
+- **C++/C#/Java/JavaScript**: Make your voice assistant or bot react better to underlying system errors. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector)) now has a new `TurnStatusReceived` event handler. These optional events correspond to every [`ITurnContext`](/dotnet/api/microsoft.bot.builder.iturncontext) resolution on the Bot and will report turn execution failures when they happen, e.g. as a result of an unhandled exception, timeout, or network drop between Direct Line Speech and the bot. `TurnStatusReceived` makes it easier to respond to failure conditions. For example, if a bot takes too long on a backend database query (e.g. looking up a product), `TurnStatusReceived` allows the client to know to reprompt with "sorry, I didn't quite get that, could you please try again" or something similar.
- **C++/C#**: Use the Speech SDK on more platforms. The [Speech SDK nuget package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) now supports Windows ARM/ARM64 desktop native binaries (UWP was already supported) to make the Speech SDK more useful on more machine types.
- **Java**: [`DialogServiceConnector`](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector) now has a `setSpeechActivityTemplate()` method that was unintentionally excluded from the language previously. This is equivalent to setting the `Conversation_Speech_Activity_Template` property and will request that all future Bot Framework activities originated by the Direct Line Speech service merge the provided content into their JSON payloads.
- **Java**: Improved low-level debugging. The [`Connection`](/java/api/com.microsoft.cognitiveservices.speech.connection) class now has a `MessageReceived` event, similar to other programming languages (C++, C#). This event provides low-level access to incoming data from the service and can be useful for diagnostics and debugging.
cognitive-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/tutorial-azure-function.md
+
+ Title: "Tutorial: Use an Azure Function to process stored documents"
+
+description: This guide shows you how to use an Azure function to trigger the processing of documents that are uploaded to an Azure blob storage container.
+Last updated : 10/28/2020
+# Tutorial: Use an Azure Function to process stored documents
+
+You can use Form Recognizer as part of an automated data processing pipeline built with Azure Functions. This guide shows you how to use an Azure function to process documents that are uploaded to an Azure blob storage container. This workflow extracts table data from stored documents using the Form Recognizer Layout service and saves the table data in a .csv file in Azure. You can then display the data using Microsoft Power BI (not covered here).
+
+> [!div class="mx-imgBorder"]
+> ![azure service workflow diagram](./media/tutorial-azure-function/workflow-diagram.png)
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create an Azure Storage account
+> * Create an Azure Functions project
+> * Extract layout data from uploaded forms
+> * Upload layout data to Azure Storage
+
+## Prerequisites
+
+* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
+* <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource <span class="docon docon-navigate-external x-hidden-focus"></span></a> in the Azure portal to get your Form Recognizer key and endpoint. After it deploys, click **Go to resource**.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the tutorial.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+* A local PDF document to analyze. You can download this [sample document](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample-layout.pdf) to use.
+* [Python 3.8.x](https://www.python.org/downloads/) installed.
+* [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) installed.
+* [Azure Functions Core Tools](https://docs.microsoft.com/azure/azure-functions/functions-run-local?tabs=windows%2Ccsharp%2Cbash#install-the-azure-functions-core-tools) installed.
+* Visual Studio Code with the following extensions installed:
+ * [Azure Functions extension](https://docs.microsoft.com/azure/developer/python/tutorial-vs-code-serverless-python-01#visual-studio-code-python-and-the-azure-functions-extension)
+ * [Python extension](https://code.visualstudio.com/docs/python/python-tutorial#_install-visual-studio-code-and-the-python-extension)
+
+## Create an Azure Storage account
+
+[Create an Azure Storage account](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM) on the Azure portal. Select **StorageV2** as the Account kind.
+
+On the left pane, select the **CORS** tab, and remove the existing CORS policy if any exists.
+
+Once that has deployed, create two empty blob storage containers, named **test** and **output**.
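+If you'd rather script this step than use the portal, here's a minimal sketch using the `azure-storage-blob` package. The connection string placeholder is an assumption; take the real value from the storage account's **Access keys** blade:
+
+```Python
+from azure.storage.blob import BlobServiceClient
+
+# Placeholder: paste the connection string from the Access keys blade.
+conn_str = "<your-storage-connection-string>"
+service = BlobServiceClient.from_connection_string(conn_str)
+
+# Create the two containers this tutorial uses.
+for name in ("test", "output"):
+    service.create_container(name)  # raises ResourceExistsError if it already exists
+```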
+
+## Create an Azure Functions project
+
+Open Visual Studio Code. If you've installed the Azure Functions extension, you should see an Azure logo on the left navigation pane. Select it. Create a new project, and when prompted create a local folder **coa_new** to contain the project.
+
+![VSCode create function button](./media/tutorial-azure-function/vs-code-create-function.png)
++
+You'll be prompted to configure a number of settings:
+* In the **Select a language** prompt, select Python.
+* In the **Select a template** prompt, select Azure Blob Storage trigger. Then give the default trigger a name.
+* In the **Select setting** prompt, opt to create new local app settings.
+* Select your Azure subscription with the storage account you created. Then enter the name of the storage container to monitor (in this case, `test/{name}`).
+* Opt to open the project in the current window.
+
+![VSCode create prompt example](./media/tutorial-azure-function/vs-code-prompt.png)
+
+When you've completed these steps, VSCode will add a new Azure Function project with a *\_\_init\_\_.py* Python script. This script will be triggered when a file is uploaded to the **test** storage container, but it won't do anything with the file yet.
+
+## Test the function
+
+Press F5 to run the basic function. VSCode will prompt you to select a storage account to interface with. Select the storage account you created and continue.
+
+Open Azure Storage Explorer and upload a sample PDF document to the **test** container. Then check the VSCode terminal. The script should log that it was triggered by the PDF upload.
+
+![VSCode terminal test](./media/tutorial-azure-function/vs-code-terminal-test.png)
++
+Stop the script before continuing.
+
+## Add form processing code
+
+Next, you'll add your own code to the Python script to call the Form Recognizer service and parse the uploaded documents using the Form Recognizer [Layout API](concept-layout.md).
+
+In VSCode, navigate to the function's *requirements.txt* file. This defines the dependencies for your script. Add the following Python packages to the file:
+
+```
+cryptography
+azure-functions
+azure-storage-blob
+azure-identity
+requests
+pandas
+numpy
+```
+
+Then, open the *\_\_init\_\_.py* script. Add the following `import` statements:
+
+```Python
+import logging
+from azure.storage.blob import BlobServiceClient
+import azure.functions as func
+import json
+import time
+import requests
+import os
+from collections import OrderedDict
+import numpy as np
+import pandas as pd
+```
+
+You can leave the generated `main` function as-is. You'll add your custom code inside this function.
+
+```python
+# This part is automatically generated
+def main(myblob: func.InputStream):
+ logging.info(f"Python blob trigger function processed blob \n"
+ f"Name: {myblob.name}\n"
+ f"Blob Size: {myblob.length} bytes")
+```
+
+The following code block calls the Form Recognizer [Analyze Layout](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeLayoutAsync) API on the uploaded document. Fill in your endpoint and key values.
++
+# [version 2.0](#tab/2-0)
+
+```Python
+    # This is the call to the Form Recognizer endpoint
+    endpoint = r"Your Form Recognizer Endpoint"
+    apim_key = "Your Form Recognizer Key"
+    post_url = endpoint + "/formrecognizer/v2.0/Layout/analyze"
+    source = myblob.read()
+
+    headers = {
+        # Request headers
+        'Content-Type': 'application/pdf',
+        'Ocp-Apim-Subscription-Key': apim_key,
+    }
+
+    text1 = os.path.basename(myblob.name)
+```
+
+# [version 2.1 preview](#tab/2-1)
+
+```Python
+    # This is the call to the Form Recognizer endpoint
+    endpoint = r"Your Form Recognizer Endpoint"
+    apim_key = "Your Form Recognizer Key"
+    post_url = endpoint + "/formrecognizer/v2.1-preview.2/Layout/analyze"
+    source = myblob.read()
+
+    headers = {
+        # Request headers
+        'Content-Type': 'application/pdf',
+        'Ocp-Apim-Subscription-Key': apim_key,
+    }
+
+    text1 = os.path.basename(myblob.name)
+```
++
+> [!IMPORTANT]
+> Go to the Azure portal. If the Form Recognizer resource you created in the **Prerequisites** section deployed successfully, click the **Go to Resource** button under **Next Steps**. You can find your key and endpoint on the resource's **Keys and Endpoint** page, under **Resource Management**.
+>
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the [Cognitive Services security](../cognitive-services-security.md) article.
+
+Next, add code to query the service and get the returned data.
++
+```Python
+    resp = requests.post(url=post_url, data=source, headers=headers)
+    if resp.status_code != 202:
+        print("POST analyze failed:\n%s" % resp.text)
+        quit()
+    print("POST analyze succeeded:\n%s" % resp.headers)
+    get_url = resp.headers["operation-location"]
+
+    # The Layout API is asynchronous, so wait before fetching the results.
+    wait_sec = 25
+    time.sleep(wait_sec)
+
+    resp = requests.get(url=get_url, headers={"Ocp-Apim-Subscription-Key": apim_key})
+    resp_json = json.loads(resp.text)
+    status = resp_json["status"]
+
+    if status == "succeeded":
+        print("Layout analysis succeeded.")
+        results = resp_json
+    else:
+        print("GET layout results failed:\n%s" % resp.text)
+        quit()
+```
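+The fixed 25-second wait is a simplification. For larger documents, you may prefer to poll the operation until it reaches a terminal state; if you use the following loop, it replaces the sleep-and-check logic above. A minimal sketch (the `succeeded`/`failed` status values follow the v2.x Analyze Layout response; the retry count and interval are arbitrary):
+
+```Python
+    # Alternative to the fixed wait: poll until the analysis finishes.
+    for _ in range(30):  # up to ~150 seconds with 5-second intervals
+        resp = requests.get(url=get_url, headers={"Ocp-Apim-Subscription-Key": apim_key})
+        resp_json = json.loads(resp.text)
+        if resp_json["status"] in ("succeeded", "failed"):
+            break
+        time.sleep(5)
+
+    if resp_json["status"] != "succeeded":
+        print("Layout analysis did not succeed:\n%s" % resp.text)
+        quit()
+
+    results = resp_json
+```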
+
+Then add the following code to connect to the Azure Storage **output** container. Fill in your own values for the storage account name and key. You can get the key on the **Access keys** tab of your storage resource in the Azure portal.
+
+```Python
+    # This is the connection to the blob storage, with the Azure Python SDK
+    blob_service_client = BlobServiceClient.from_connection_string("DefaultEndpointsProtocol=https;AccountName=<Storage Account Name>;AccountKey=<Storage Account Key>;EndpointSuffix=core.windows.net")
+    container_client = blob_service_client.get_container_client("output")
+```
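+To avoid hard-coding the account key (see the security note earlier), you can read the connection string from an application setting instead. A sketch, assuming you define a setting named `STORAGE_CONNECTION_STRING` in *local.settings.json* (locally) or in the Function App configuration (in Azure):
+
+```Python
+    # Read the connection string from an app setting instead of hard-coding it.
+    conn_str = os.environ["STORAGE_CONNECTION_STRING"]
+    blob_service_client = BlobServiceClient.from_connection_string(conn_str)
+    container_client = blob_service_client.get_container_client("output")
+```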
+
+The following code parses the returned Form Recognizer response, constructs a .csv file, and uploads it to the **output** container.
++
+> [!IMPORTANT]
+> You will likely need to edit this code to match the structure of your own form documents.
+
+```Python
+    # Extract the table data from the JSON response into tabular form.
+    # Note: adjust this code to your own form structure;
+    # it probably won't work out-of-box for your specific form.
+    pages = results["analyzeResult"]["pageResults"]
+
+    def make_page(p):
+        res = []
+        res_table = []
+        y = 0
+        page = pages[p]
+        for tab in page["tables"]:
+            for cell in tab["cells"]:
+                res.append(cell)
+                res_table.append(y)
+            y = y + 1
+
+        res_table = pd.DataFrame(res_table)
+        res = pd.DataFrame(res)
+        res["table_num"] = res_table[0]
+        h = res.drop(columns=["boundingBox", "elements"])
+        h.loc[:, "rownum"] = range(0, len(h))
+        num_table = max(h["table_num"])
+        return h, num_table, p
+
+    h, num_table, p = make_page(0)
+
+    for k in range(num_table + 1):
+        new_table = h[h.table_num == k].reset_index(drop=True)
+        new_table.loc[:, "rownum"] = range(0, len(new_table))
+        row_table = pages[p]["tables"][k]["rows"]
+        col_table = pages[p]["tables"][k]["columns"]
+        b = pd.DataFrame(np.zeros((row_table, col_table)))
+        s = 0
+        for i, j in zip(new_table["rowIndex"], new_table["columnIndex"]):
+            # Place each cell's text at its (row, column) position.
+            b.loc[i, j] = new_table.loc[s, "text"]
+            s = s + 1
+```
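+Note that `make_page(0)` processes only the first page of the document. If your forms span multiple pages, the same helper can be applied per page; a sketch, still subject to the form-structure caveat above:
+
+```Python
+    # Process every page returned by the Layout API.
+    for page_index in range(len(pages)):
+        h, num_table, p = make_page(page_index)
+        # ...run the table-filling loop above for this page's tables...
+```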
+
+Finally, the last block of code uploads the extracted table data to your blob storage container as a .csv file.
+
+```Python
+    # Upload the extracted table to the output container as a .csv file.
+    tab1_csv = b.to_csv(header=False, index=False, mode='w')
+    name1 = (os.path.splitext(text1)[0]) + '.csv'
+    container_client.upload_blob(name=name1, data=tab1_csv)
+```
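+If the function reprocesses a document it has already seen, `upload_blob` raises a `ResourceExistsError` because the blob name is reused. Passing `overwrite=True` replaces the earlier output instead:
+
+```Python
+    # Overwrite any .csv produced by an earlier run for the same document.
+    container_client.upload_blob(name=name1, data=tab1_csv, overwrite=True)
+```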
+
+## Run the function
+
+Press F5 to run the function again. Use Azure Storage Explorer to upload a sample PDF form to the **test** storage container. This action should trigger the script to run, and you should then see the resulting .csv file (displayed as a table) in the **output** container.
+
+You can connect this container to Power BI to create rich visualizations of the data it contains.
+
+## Next steps
+
+In this tutorial, you learned how to use an Azure Function written in Python to automatically process uploaded PDF documents and output their contents in a more data-friendly format. Next, learn how to use Power BI to display the data.
+
+> [!div class="nextstepaction"]
+> [Microsoft Power BI](https://powerbi.microsoft.com/integrations/azure-table-storage/)
+
+* [What is Form Recognizer?](overview.md)
+* Learn more about the [Layout API](concept-layout.md)
cognitive-services How To Cache Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-cache-token.md
-+ Last updated 01/14/2020
cognitive-services How To Configure Read Aloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-configure-read-aloud.md
-+ Last updated 06/29/2020
cognitive-services How To Configure Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-configure-translation.md
-+ Last updated 06/29/2020
cognitive-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-create-immersive-reader.md
-+ Last updated 07/22/2019
cognitive-services How To Customize Launch Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-customize-launch-button.md
Title: "Customize the Immersive Reader button"
+ Title: "Edit the Immersive Reader launch button"
description: This article will show you how to customize the button that launches the Immersive Reader.
- Previously updated : 01/14/2020+ Last updated : 03/08/2021
cognitive-services How To Launch Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-launch-immersive-reader.md
+
+ Title: "How to launch the Immersive Reader"
+
+description: Learn how to launch the Immersive Reader using JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
+Last updated : 03/04/2021
+zone_pivot_groups: immersive-reader-how-to-guides
++
+# How to launch the Immersive Reader
+
+In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. This article demonstrates how to launch the Immersive Reader using JavaScript, Python, Android, or iOS.
cognitive-services How To Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-multiple-resources.md
+
+ Title: "Integrate multiple Immersive Reader resources"
+
+description: In this tutorial, you'll create a Node.js application that launches the Immersive Reader using multiple Immersive Reader resources.
+Last updated : 01/14/2020
+#Customer intent: As a developer, I want to learn more about the Immersive Reader SDK so that I can fully utilize all that the SDK has to offer.
++
+# Integrate multiple Immersive Reader resources
+
+In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. In the [quickstart](./quickstarts/client-libraries.md), you learned how to use Immersive Reader with a single resource. This tutorial covers how to integrate multiple Immersive Reader resources in the same application. In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create multiple Immersive Reader resources under an existing resource group
+> * Launch the Immersive Reader using multiple resources
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+* Follow the [quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to create a web app that launches the Immersive Reader with Node.js. In that quickstart, you configure a single Immersive Reader resource. We will build on top of that in this tutorial.
+
+## Create the Immersive Reader resources
+
+Follow [these instructions](./how-to-create-immersive-reader.md) to create each Immersive Reader resource. The **Create-ImmersiveReaderResource** script has `ResourceName`, `ResourceSubdomain`, and `ResourceLocation` as parameters. These should be unique for each resource being created. The remaining parameters should be the same as what you used when setting up your first Immersive Reader resource. This way, each resource can be linked to the same Azure resource group and Azure AD application.
+
+The example below shows how to create two resources, one in WestUS, and another in EastUS. Notice the unique values for `ResourceName`, `ResourceSubdomain`, and `ResourceLocation`.
+
+```azurepowershell-interactive
+Create-ImmersiveReaderResource `
+ -SubscriptionName <SUBSCRIPTION_NAME> `
+ -ResourceName Resource_name_wus `
+ -ResourceSubdomain resource-subdomain-wus `
+ -ResourceSKU <RESOURCE_SKU> `
+ -ResourceLocation westus `
+ -ResourceGroupName <RESOURCE_GROUP_NAME> `
+ -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
+ -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
+ -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
+ -AADAppClientSecret <AAD_APP_CLIENT_SECRET>
+
+Create-ImmersiveReaderResource `
+ -SubscriptionName <SUBSCRIPTION_NAME> `
+ -ResourceName Resource_name_eus `
+ -ResourceSubdomain resource-subdomain-eus `
+ -ResourceSKU <RESOURCE_SKU> `
+ -ResourceLocation eastus `
+ -ResourceGroupName <RESOURCE_GROUP_NAME> `
+ -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
+ -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
+ -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
+ -AADAppClientSecret <AAD_APP_CLIENT_SECRET>
+```
+
+## Add resources to environment configuration
+
+In the quickstart, you created an environment configuration file that contains the `TenantId`, `ClientId`, `ClientSecret`, and `Subdomain` parameters. Since all of your resources use the same Azure AD application, we can use the same values for the `TenantId`, `ClientId`, and `ClientSecret`. The only change that needs to be made is to list each subdomain for each resource.
+
+Your new __.env__ file should now look something like the following:
+
+```text
+TENANT_ID={YOUR_TENANT_ID}
+CLIENT_ID={YOUR_CLIENT_ID}
+CLIENT_SECRET={YOUR_CLIENT_SECRET}
+SUBDOMAIN_WUS={YOUR_WESTUS_SUBDOMAIN}
+SUBDOMAIN_EUS={YOUR_EASTUS_SUBDOMAIN}
+```
+
+Be sure not to commit this file into source control, as it contains secrets that should not be made public.
+
+Next, we're going to modify the _routes\index.js_ file that we created to support our multiple resources. Replace its content with the following code.
+
+As before, this code creates an API endpoint that acquires an Azure AD authentication token using your service principal password. This time, it allows the user to specify a resource location and pass it in as a query parameter. It then returns an object containing the token and the corresponding subdomain.
+
+```javascript
+var express = require('express');
+var router = express.Router();
+var request = require('request');
+
+/* GET home page. */
+router.get('/', function(req, res, next) {
+ res.render('index', { Title: 'Express' });
+});
+
+router.get('/GetTokenAndSubdomain', function(req, res) {
+ try {
+ request.post({
+ headers: {
+ 'content-type': 'application/x-www-form-urlencoded'
+ },
+ url: `https://login.windows.net/${process.env.TENANT_ID}/oauth2/token`,
+ form: {
+ grant_type: 'client_credentials',
+ client_id: process.env.CLIENT_ID,
+ client_secret: process.env.CLIENT_SECRET,
+ resource: 'https://cognitiveservices.azure.com/'
+ }
+ },
+ function(err, resp, tokenResult) {
+ if (err) {
+ console.log(err);
+ return res.status(500).send('CogSvcs IssueToken error');
+ }
+
+ var tokenResultParsed = JSON.parse(tokenResult);
+
+ if (tokenResultParsed.error) {
+ console.log(tokenResult);
+ return res.send({error : "Unable to acquire Azure AD token. Check the debugger for more information."})
+ }
+
+ var token = tokenResultParsed.access_token;
+
+ var subdomain = "";
+ var region = req.query && req.query.region;
+ switch (region) {
+ case "eus":
+ subdomain = process.env.SUBDOMAIN_EUS
+ break;
+ case "wus":
+ default:
+ subdomain = process.env.SUBDOMAIN_WUS
+ }
+
+ return res.send({token, subdomain});
+ });
+ } catch (err) {
+ console.log(err);
+ return res.status(500).send('CogSvcs IssueToken error');
+ }
+});
+
+module.exports = router;
+```
+
+The **GetTokenAndSubdomain** API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
+
+## Launch the Immersive Reader with sample content
+
+1. Open _views\index.pug_, and replace its content with the following code. This code populates the page with some sample content and adds two buttons that launch the Immersive Reader: one for the EastUS resource, and another for the WestUS resource.
+
+ ```pug
+ doctype html
+ html
+ head
+ title Immersive Reader Quickstart Node.js
+
+ link(rel='stylesheet', href='https://stackpath.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css')
+
+ // A polyfill for Promise is needed for IE11 support.
+ script(src='https://cdn.jsdelivr.net/npm/promise-polyfill@8/dist/polyfill.min.js')
+
+ script(src='https://contentstorage.onenote.office.net/onenoteltir/immersivereadersdk/immersive-reader-sdk.1.0.0.js')
+ script(src='https://code.jquery.com/jquery-3.3.1.min.js')
+
+ style(type="text/css").
+ .immersive-reader-button {
+ background-color: white;
+ margin-top: 5px;
+ border: 1px solid black;
+ float: right;
+ }
+ body
+ div(class="container")
+ button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("wus")') WestUS Immersive Reader
+ button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("eus")') EastUS Immersive Reader
+
+ h1(id="ir-title") About Immersive Reader
+ div(id="ir-content" lang="en-us")
+ p Immersive Reader is a tool that implements proven techniques to improve reading comprehension for emerging readers, language learners, and people with learning differences. The Immersive Reader is designed to make reading more accessible for everyone. The Immersive Reader
+
+ ul
+ li Shows content in a minimal reading view
+ li Displays pictures of commonly used words
+ li Highlights nouns, verbs, adjectives, and adverbs
+ li Reads your content out loud to you
+ li Translates your content into another language
+ li Breaks down words into syllables
+
+ h3 The Immersive Reader is available in many languages.
+
+ p(lang="es-es") El Lector inmersivo está disponible en varios idiomas.
+ p(lang="zh-cn") 沉浸式阅读器支持许多语言
+ p(lang="de-de") Der plastische Reader ist in vielen Sprachen verfügbar.
+ p(lang="ar-eg" dir="rtl" style="text-align:right") يتوفر \"القارئ الشامل\" في العديد من اللغات.
+
+ script(type="text/javascript").
+ function getTokenAndSubdomainAsync(region) {
+ return new Promise(function (resolve, reject) {
+ $.ajax({
+ url: "/GetTokenAndSubdomain",
+ type: "GET",
+ data: {
+ region: region
+ },
+ success: function (data) {
+ if (data.error) {
+ reject(data.error);
+ } else {
+ resolve(data);
+ }
+ },
+ error: function (err) {
+ reject(err);
+ }
+ });
+ });
+ }
+
+ function handleLaunchImmersiveReader(region) {
+ getTokenAndSubdomainAsync(region)
+ .then(function (response) {
+ const token = response["token"];
+ const subdomain = response["subdomain"];
+ // Learn more about chunk usage and supported MIME types https://docs.microsoft.com/azure/cognitive-services/immersive-reader/reference#chunk
+ const data = {
+ Title: $("#ir-title").text(),
+ chunks: [{
+ content: $("#ir-content").html(),
+ mimeType: "text/html"
+ }]
+ };
+ // Learn more about options https://docs.microsoft.com/azure/cognitive-services/immersive-reader/reference#options
+ const options = {
+ "onExit": exitCallback,
+ "uiZIndex": 2000
+ };
+ ImmersiveReader.launchAsync(token, subdomain, data, options)
+ .catch(function (error) {
+ alert("Error in launching the Immersive Reader. Check the console.");
+ console.log(error);
+ });
+ })
+ .catch(function (error) {
+ alert("Error in getting the Immersive Reader token and subdomain. Check the console.");
+ console.log(error);
+ });
+ }
+
+ function exitCallback() {
+ console.log("This is the callback function. It is executed when the Immersive Reader closes.");
+ }
+ ```
+
+2. Our web app is now ready. Start the app by running:
+
+ ```bash
+ npm start
+ ```
+
+3. Open your browser and navigate to `http://localhost:3000`. You should see the above content on the page. Click either the **EastUS Immersive Reader** button or the **WestUS Immersive Reader** button to launch the Immersive Reader using the respective resource.
+
+## Next steps
+
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
+* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/advanced-csharp)
cognitive-services How To Prepare Html https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-prepare-html.md
+
+ Title: "How to prepare HTML content for Immersive Reader"
+
+description: Learn how to launch the Immersive Reader using HTML, JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
+Last updated : 03/04/2021
+# How to prepare HTML content for Immersive Reader
+
+This article shows you how to structure your HTML and retrieve the content so that it can be used by the Immersive Reader.
+
+## Prepare the HTML content
+
+Place the content that you want to render in the Immersive Reader inside of a container element. Be sure that the container element has a unique `id`. The Immersive Reader provides support for basic HTML elements; see the [reference](reference.md#html-support) for more information.
+
+```html
+<div id='immersive-reader-content'>
+ <b>Bold</b>
+ <i>Italic</i>
+ <u>Underline</u>
+ <strike>Strikethrough</strike>
+ <code>Code</code>
+ <sup>Superscript</sup>
+ <sub>Subscript</sub>
+ <ul><li>Unordered lists</li></ul>
+ <ol><li>Ordered lists</li></ol>
+</div>
+```
+
+## Get the HTML content in JavaScript
+
+Use the `id` of the container element to get the HTML content in your JavaScript code.
+
+```javascript
+const htmlContent = document.getElementById('immersive-reader-content').innerHTML;
+```
+
+## Launch the Immersive Reader with your HTML content
+
+When calling `ImmersiveReader.launchAsync`, set the chunk's `mimeType` property to `text/html` to render the content as HTML.
+
+```javascript
+const data = {
+ chunks: [{
+ content: htmlContent,
+ mimeType: 'text/html'
+ }]
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](reference.md)
cognitive-services How To Store User Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/how-to-store-user-preferences.md
-+ Last updated 06/29/2020
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/quickstarts/client-libraries.md
zone_pivot_groups: programming-languages-set-twenty
Previously updated : 09/14/2020 Last updated : 03/08/2021 keywords: display pictures, parts of speech, read selected text, translate words, reading comprehension
cognitive-services Tutorial Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/tutorial-android.md
- Title: "Tutorial: Launch the Immersive Reader using the Android code samples"-
-description: In this tutorial, you'll configure and run a sample Android application that launches the Immersive Reader.
------- Previously updated : 06/10/2020---
-#Customer intent: As a developer, I want to learn more about the Immersive Reader SDK so that I can fully utilize all that the SDK has to offer.
--
-# Tutorial: Start the Immersive Reader using the Android Java code sample
-
-In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. This tutorial covers how to create an Android application that starts the Immersive Reader. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Configure and run an app for Android by using a sample project.
-> * Acquire an access token.
-> * Start the Immersive Reader with sample content.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-* An Immersive Reader resource configured for Azure Active Directory authentication. Follow [these instructions](./how-to-create-immersive-reader.md) to get set up. You'll need some of the values created here when you configure the environment properties. Save the output of your session into a text file for future reference.
-* [Git](https://git-scm.com/).
-* [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk).
-* [Android Studio](https://developer.android.com/studio).
-
-## Configure authentication credentials
-
-1. Start Android Studio, and open the project from the **immersive-reader-sdk/js/samples/quickstart-java-android** directory (Java) or the **immersive-reader-sdk/js/samples/quickstart-kotlin** directory (Kotlin).
-
-1. Create a file named **env** inside the **/assets** folder. Add the following names and values, and supply values as appropriate. Don't commit this file into source control because it contains secrets that shouldn't be made public.
-
- ```text
- TENANT_ID=<YOUR_TENANT_ID>
- CLIENT_ID=<YOUR_CLIENT_ID>
- CLIENT_SECRET=<YOUR_CLIENT_SECRET>
- SUBDOMAIN=<YOUR_SUBDOMAIN>
- ```
-
-## Start the Immersive Reader with sample content
-
-Choose a device emulator from the AVD Manager, and run the project.
-
-## Next steps
-
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK reference](./reference.md).
-* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/).
cognitive-services Tutorial Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/tutorial-ios.md
- Title: "Tutorial: Start the Immersive Reader using the Swift iOS code sample"-
-description: In this tutorial, you'll configure and run a sample Swift application that starts the Immersive Reader.
------- Previously updated : 06/10/2020-
-#Customer intent: As a developer, I want to learn more about the Immersive Reader SDK so that I can fully utilize all that the SDK has to offer.
--
-# Tutorial: Start the Immersive Reader using the Swift iOS code sample
-
-In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. This tutorial covers how to create an iOS application that starts the Immersive Reader. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Configure and run a Swift app for iOS by using a sample project.
-> * Acquire an access token.
-> * Start the Immersive Reader with sample content.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-
-* An Immersive Reader resource configured for Azure Active Directory authentication. Follow [these instructions](./how-to-create-immersive-reader.md) to get set up. You'll need some of the values created here when you configure the environment properties. Save the output of your session into a text file for future reference.
-* [macOS](https://www.apple.com/macos).
-* [Git](https://git-scm.com/).
-* [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk).
-* [Xcode](https://apps.apple.com/us/app/xcode/id497799835?mt=12).
-
-## Configure authentication credentials
-
-1. Open Xcode, and open **immersive-reader-sdk/js/samples/ios/quickstart-swift/quickstart-swift.xcodeproj**.
-1. On the top menu, select **Product** > **Scheme** > **Edit Scheme**.
-1. In the **Run** view, select the **Arguments** tab.
-1. In the **Environment Variables** section, add the following names and values. Supply the values given when you created your Immersive Reader resource.
-
- ```text
- TENANT_ID=<YOUR_TENANT_ID>
- CLIENT_ID=<YOUR_CLIENT_ID>
- CLIENT_SECRET<YOUR_CLIENT_SECRET>
- SUBDOMAIN=<YOUR_SUBDOMAIN>
- ```
-
-Don't commit this change into source control because it contains secrets that shouldn't be made public.
-
-## Start the Immersive Reader with sample content
-
-In Xcode, select **Ctrl+R** to run the project.
-
-## Next steps
-
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK reference](./reference.md).
-* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/).
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
cognitive-services Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/whats-new-docs.md
Title: "Cognitive
-description: "What's new in the Cognitive Services docs for January 1, 2021 - January 31, 2021."
+description: "What's new in the Cognitive Services docs for February 1, 2020 - February 28, 2020."
Previously updated : 02/08/2021 Last updated : 03/08/2021
-# Cognitive Services docs: What's new for January 1, 2021 - January 31, 2021
+# Cognitive Services docs: What's new for February 1, 2021 - February 28, 2021
-Welcome to what's new in the Cognitive Services docs from January 1, 2021 through January 31, 2021. This article lists some of the major changes to docs during this period.
+Welcome to what's new in the Cognitive Services docs from February 1, 2021 through February 28, 2021. This article lists some of the major changes to docs during this period.
## Cognitive Services
-**Updated articles**
+### New articles
-- [Plan and manage costs for Azure Cognitive Services](plan-manage-costs.md)
-- [Azure Cognitive Services containers](cognitive-services-container-support.md)
+- [Azure Policy Regulatory Compliance controls for Azure Cognitive Services](security-controls-policy.md)
-## Form Recognizer
-
-**New articles**
-- [Tutorial: Extract form data in bulk using Azure Data Factory](./form-recognizer/tutorial-bulk-processing.md)
-**Updated articles**
+## Containers
-- [What is Form Recognizer?](./form-recognizer/overview.md)
+### New articles
-## Immersive Reader
+- [Azure Cognitive Services containers frequently asked questions (FAQ)](/azure/cognitive-services/containers/container-faq)
-**Updated articles**
+### Updated articles
-- [Create an Immersive Reader resource and configure Azure Active Directory authentication](./immersive-reader/how-to-create-immersive-reader.md)
+- [Azure Cognitive Services container image tags and release notes](/azure/cognitive-services/containers/container-image-tags)
-## Personalizer
+## Form Recognizer
-**Updated articles**
+### Updated articles
-- [Features are information about actions and context](./personalizer/concepts-features.md)
+- [Deploy the sample labeling tool](/azure/cognitive-services/form-recognizer/deploy-label-tool)
+- [What is Form Recognizer?](/azure/cognitive-services/form-recognizer/overview)
+- [Train a Form Recognizer model with labels using the sample labeling tool](/azure/cognitive-services/form-recognizer/quickstarts/label-tool)
## Text Analytics
-**Updated articles**
-- [Text Analytics API v3 language support](./text-analytics/language-support.md)
-- [Migrate to version 3.x of the Text Analytics API](./text-analytics/migration-guide.md)
-- [What's new in the Text Analytics API?](./text-analytics/whats-new.md)
-## Community contributors
-
-The following people contributed to the Cognitive Services docs during this period. Thank you! Learn how to contribute by following the links under "Get involved" in the [what's new landing page](index.yml).
+### Updated articles
-- [AnweshGangula](https://github.com/AnweshGangula) - Anwesh Gangula (1)
-- [cdglasz](https://github.com/cdglasz) - Christopher Glasz (1)
-- [huybuidac](https://github.com/huybuidac) - Bui Dac Huy (1)
+- [Text Analytics API v3 language support](/azure/cognitive-services/text-analytics/language-support)
+- [How to call the Text Analytics REST API](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api)
[!INCLUDE [Service specific updates](./includes/service-specific-updates.md)]
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/reference.md
The following table details the available Communication Services packages along
| Azure Resource Manager | - | [NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Communication) | [PyPi](https://pypi.org/project/azure-mgmt-communication/) | - | - | - | [Go via GitHub](https://github.com/Azure/azure-sdk-for-go/releases/tag/v46.3.0) |
| Common | [npm](https://www.npmjs.com/package/@azure/communication-common) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Common/) | N/A | [Maven](https://search.maven.org/search?q=a:azure-communication-common) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-common) | - |
| Identity | [npm](https://www.npmjs.com/package/@azure/communication-identity) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Identity) | [PyPi](https://pypi.org/project/azure-communication-identity/) | [Maven](https://search.maven.org/search?q=a:azure-communication-identity) | - | - | - |
+| Phone numbers | [npm](https://www.npmjs.com/package/@azure/communication-phone-numbers) | [NuGet](https://www.nuget.org/packages/Azure.Communication.PhoneNumbers) | [PyPi](https://pypi.org/project/azure-communication-phonenumbers/) | [Maven](https://search.maven.org/search?q=a:azure-communication-phonenumbers) | - | - | - |
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - |
| SMS | [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Sms) | [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | - | - | - |
| Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases) ([docs](/objectivec/communication-services/calling/)) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
Azure Communication Services capabilities are conceptually organized into six ar
| Azure Resource Manager | REST | Open | Azure.ResourceManager.Communication | Provision and manage Communication Services resources |
| Common | REST | Open | Azure.Communication.Common | Provides base types for other client libraries |
| Identity | REST | Open | Azure.Communication.Identity | Manage users, access tokens |
+| Phone numbers | REST | Open | Azure.Communication.PhoneNumbers | Acquire and manage phone numbers |
| Chat | REST with proprietary signaling | Open with closed source signaling package | Azure.Communication.Chat | Add real-time text based chat to your applications |
| SMS | REST | Open | Azure.Communication.SMS | Send and receive SMS messages |
| Calling | Proprietary transport | Closed | Azure.Communication.Calling | Leverage voice, video, screen-sharing, and other real-time data communication capabilities |
Publishing locations for individual client library packages are detailed below.
| Azure Resource Manager | - | [NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Communication) | [PyPi](https://pypi.org/project/azure-mgmt-communication/) | - | - | - | [Go via GitHub](https://github.com/Azure/azure-sdk-for-go/releases/tag/v46.3.0) |
| Common | [npm](https://www.npmjs.com/package/@azure/communication-common) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Common/) | N/A | [Maven](https://search.maven.org/search?q=a:azure-communication-common) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-common) | - |
| Identity | [npm](https://www.npmjs.com/package/@azure/communication-identity) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Identity) | [PyPi](https://pypi.org/project/azure-communication-identity/) | [Maven](https://search.maven.org/search?q=a:azure-communication-identity) | - | - | - |
+| Phone Numbers | [npm](https://www.npmjs.com/package/@azure/communication-phone-numbers) | [NuGet](https://www.nuget.org/packages/Azure.Communication.PhoneNumbers) | [PyPi](https://pypi.org/project/azure-communication-phonenumbers/) | [Maven](https://search.maven.org/search?q=a:azure-communication-phonenumbers) | - | - | - |
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - |
| SMS | [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Sms) | [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | - | - | - |
| Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
-| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/) | [docs](/java/api/com.azure.communication.calling?view=communication-services-java-android) | - |
+| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/) | [docs](/java/api/com.azure.communication.calling) | - |
## REST APIs
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following table represents the set of supported browsers which are currently
| Windows*** | ✔️ | ❌ | ✔️ |
| Ubuntu/Linux | ✔️ | ❌ | ❌ |
-*Safari versions 13.1+ are supported.
+*Safari versions 13.1+ are supported; 1:1 calls are not supported on Safari.
**Safari 14+/macOS 11+ needed for outgoing video support.
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
The following resources are a great place to start if you're new to Azure Commun
| Resource | Description |
| --- | --- |
|**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.|
-|**[Create a Communication Services resource](./quickstarts/create-communication-resource.md)**|You can begin using Azure Communication Services by using the Azure portal or Communication Services Administration client library to provision your first Communication Services resource. Once you have your Communication Services resource connection string, you can provision your first user access tokens.|
-|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens are used to authenticate your services against your Azure Communication Services resource. These tokens are provisioned and reissued using the Communication Services Administration client library.|
+|**[Create a Communication Services resource](./quickstarts/create-communication-resource.md)**|You can begin using Azure Communication Services by using the Azure portal or Communication Services client library to provision your first Communication Services resource. Once you have your Communication Services resource connection string, you can provision your first user access tokens.|
+|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens are used to authenticate your services against your Azure Communication Services resource. These tokens are provisioned and reissued using the Communication Services client library.|
|**[Get a phone number](./quickstarts/telephony-sms/get-phone-number.md)**|You can use Azure Communication Services to provision and release telephone numbers. These telephone numbers can be used to initiate outbound calls and build SMS communications solutions.|
|**[Send an SMS from your app](./quickstarts/telephony-sms/send.md)**|The Azure Communication Services SMS client library allows you to send and receive SMS messages from your .NET and JavaScript applications.|
|**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your apps using the Calling client library. This library is powered by WebRTC and allows you to establish peer-to-peer, multimedia, real-time communications within your applications.|
communication-services Web Calling Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/web-calling-sample.md
For more information, see the following articles:
- Familiarize yourself with [using the calling client library](../quickstarts/voice-video-calling/calling-client-samples.md) - Learn more about [how calling works](../concepts/voice-video-calling/about-call-types.md)-- Review the [API Reference docs](/javascript/api/azure-communication-services/@azure/communication-calling/?view=azure-communication-services-js)
+- Review the [API Reference docs](/javascript/api/azure-communication-services/@azure/communication-calling/)
- Review the [Contoso Med App](https://github.com/Azure-Samples/communication-services-contoso-med-app) sample ## Additional reading
communication-services Hmac Header Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/hmac-header-tutorial.md
Title: How to sign an HTTP request C#
+ Title: Learn how to sign an HTTP request with C#
-description: Learn how to Sign an HTTP request Communication services via C#
+description: Learn how to sign an HTTP request for Azure Communication Services via C#.
In this tutorial, you'll learn how to sign an HTTP request with an HMAC signatur
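As a rough sketch of the idea (illustration only, not the tutorial's C# implementation; the key and string-to-sign below are placeholders), an HMAC-SHA256 signature can be computed like this:

```powershell
# Compute a base64-encoded HMAC-SHA256 signature over a placeholder string-to-sign.
$secret = [Convert]::FromBase64String("bXlTZWNyZXRLZXk=")   # placeholder key
$stringToSign = "GET`n/endpoint`nhost.example.com"          # placeholder payload
$hmac = [System.Security.Cryptography.HMACSHA256]::new($secret)
[Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))
```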
## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can find out more about [cleaning up Azure Communication Service resources](../quickstarts/create-communication-resource.md#clean-up-resources) and [cleaning Azure Function Resources](../../azure-functions/create-first-function-vs-code-csharp.md#clean-up-resources).
+To clean up and remove a Communication Services subscription, delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can find out more about [cleaning up Azure Communication Services resources](../quickstarts/create-communication-resource.md#clean-up-resources) and [cleaning Azure Functions resources](../../azure-functions/create-first-function-vs-code-csharp.md#clean-up-resources).
## Next steps > [!div class="nextstepaction"] > [Add voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)
-You may also want to:
+You might also want to:
- [Add chat to your app](../quickstarts/chat/get-started.md)-- [Creating user access tokens](../quickstarts/access-tokens.md)
+- [Create user access tokens](../quickstarts/access-tokens.md)
- [Learn about client and server architecture](../concepts/client-and-server-architecture.md) - [Learn about authentication](../concepts/authentication.md)
container-registry Container Registry Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-geo-replication.md
A geo-replicated registry provides the following benefits:
* Improve performance and reliability of regional deployments with network-close registry access * Reduce data transfer costs by pulling image layers from a local, replicated registry in the same or nearby region as your container host * Single management of a registry across multiple regions
+* Registry resilience if a regional outage occurs
> [!NOTE] > If you need to maintain copies of container images in more than one Azure container registry, Azure Container Registry also supports [image import](container-registry-import-images.md). For example, in a DevOps workflow, you can import an image from a development registry to a production registry, without needing to use Docker commands.
Using the geo-replication feature of Azure Container Registry, these benefits ar
* Manage a single configuration of image deployments as all regions use the same image URL: `contoso.azurecr.io/public/products/web:1.2` * Push to a single registry, while ACR manages the geo-replication. ACR only replicates unique layers, reducing data transfer across regions. * Configure regional [webhooks](container-registry-webhook.md) to notify you of events in specific replicas.
+* Provide a highly available registry that is resilient to regional outages.
Azure Container Registry also supports [availability zones](zone-redundancy.md) to create a resilient and high availability Azure container registry within an Azure region. The combination of availability zones for redundancy within a region, and geo-replication across multiple regions, enhances both the reliability and performance of a registry.
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/introduction.md
Today's applications are required to be highly responsive and always online. To
Azure Cosmos DB is a fully managed NoSQL database for modern app development. Single-digit millisecond response times and automatic, instant scalability guarantee speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security. App development is faster and more productive thanks to turnkey multi-region data distribution anywhere in the world, open-source APIs, and SDKs for popular languages. As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates, and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
+> [!NOTE]
+> Would you like to help improve Azure Cosmos DB docs by participating in a user study? Please take five minutes to fill out this [screening survey](https://aka.ms/cosmosdb-documentation-screener-survey). If you qualify, you'll be redirected to a scheduler where you can book a slot for an interactive research session. No personal data is collected during this process, as described in our [privacy statement](https://go.microsoft.com/fwlink/?LinkId=521839).
+ You can [Try Azure Cosmos DB for Free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments or use the [Azure Cosmos DB free tier](optimize-dev-test.md#azure-cosmos-db-free-tier) to get an account with the first 400 RU/s and 5 GB of storage free. > [!div class="nextstepaction"]
cosmos-db Optimize Cost Reads Writes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/optimize-cost-reads-writes.md
The RU cost of writing an item depends on:
- The item size. - The number of properties covered by the [indexing policy](index-policy.md) and needed to be indexed.
-Inserting a 1 KB item with less than 5 properties to index costs around 5 RUs. Replacing an item costs two times the charge required to insert the same item.
+Inserting a 1 KB item without indexing costs around 5.5 RUs. Replacing an item costs twice the charge required to insert it. For example, replacing that same 1 KB item costs around 11 RUs.
### Optimizing writes
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
cost-management-billing Migrate From Enterprise Reporting To Azure Resource Manager Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/migrate-from-enterprise-reporting-to-azure-resource-manager-apis.md
Previously updated : 11/19/2020 Last updated : 03/10/2021
After you create a Service Principal to programmatically call the Azure Resource
### Azure Billing Hierarchy Access
-To assign Service Principal permissions to your Enterprise Billing Account, Departments, or Enrollment Account scopes, use [Billing Permissions](/rest/api/billing/2019-10-01-preview/billingpermissions), [Billing Role Definitions](/rest/api/billing/2019-10-01-preview/billingroledefinitions), and [Billing Role Assignments](/rest/api/billing/2019-10-01-preview/billingroleassignments) APIs.
--- Use the Billing Permissions APIs to identify the permissions that a Service Principal already has on a given scope, like a Billing Account or Department.-- Use the Billing Role Definitions APIs to enumerate the available roles that can be assigned to your Service Principal.
- - Only Read-Only EA Admin and Read-Only Department Admin roles can be assigned to Service Principals at this time.
-- Use the Billing Role Assignments APIs to assign a role to your Service Principal.-
-The following example shows how to call the Role Assignments API to grant a Service Principal access to your billing account. We recommend using [PostMan](https://postman.com) to do these one-time permission configurations.
-
-```json
-POST https://management.azure.com/providers/Microsoft.Billing/billingAccounts/{billingAccountName}/createBillingRoleAssignment?api-version=2019-10-01-preview
-```
-
-#### Request Body
-
-```json
-{
- "principalId": "00000000-0000-0000-0000-000000000000",
- "billingRoleDefinitionId": "/providers/Microsoft.Billing/billingAccounts/{billingAccountName}/providers/Microsoft.Billing/billingRoleDefinition/10000000-aaaa-bbbb-cccc-100000000000"
-}
-
-```
+To assign Service Principal permissions to your Enterprise Billing Account, Departments, or Enrollment Account scopes, see [Assign roles to Azure Enterprise Agreement service principal names](../manage/assign-roles-azure-service-principals.md).
### Azure role-based access control
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-performance.md
Title: Copy activity performance and scalability guide description: Learn about key factors that affect the performance of data movement in Azure Data Factory when you use the copy activity.+
+documentationcenter: ''
++ + Last updated 09/15/2020
ADF offers a serverless architecture that allows parallelism at different levels
This architecture allows you to develop pipelines that maximize data movement throughput for your environment. These pipelines fully utilize the following resources:
-* Network bandwidth
-* Storage input/output operations per second (IOPS) and bandwidth
+* Network bandwidth between the source and destination data stores
+* Source or destination data store input/output operations per second (IOPS) and bandwidth
This full utilization means you can estimate the overall throughput by measuring the minimum throughput available with the following resources:
This full utilization means you can estimate the overall throughput by measuring
* Destination data store * Network bandwidth in between the source and destination data stores
-The table below calculates the copy duration. The duration is based on data size and the bandwidth limit for your environment.
+The table below calculates the copy duration. The duration is based on data size and the network/data store bandwidth limit for your environment.
&nbsp;
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
Previously updated : 03/04/2021 Last updated : 03/10/2021 # Data transformation expressions in mapping data flow
Maps each element of the array to a new element using the provided expression. M
* ``map([1, 2, 3, 4], #item + 2) -> [3, 4, 5, 6]`` * ``map(['a', 'b', 'c', 'd'], #item + '_processed') -> ['a_processed', 'b_processed', 'c_processed', 'd_processed']`` ___
+### <code>mapIf</code>
+<code><b>mapIf(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction, <i>&lt;value3&gt;</i> : binaryfunction) => any</b></code><br/><br/>
+Conditionally maps an array to another array of the same or smaller length. The values can be of any datatype including structTypes. It takes a condition function and a mapping function, where you can address the item in the array as #item and the current index as #index. For deeply nested maps, you can refer to the parent maps using the ``#item_[n](#item_1, #index_1...)`` notation.
+* ``mapIf([10, 20, 30], #item > 10, #item + 5) -> [25, 35]``
+* ``mapIf(['icecream', 'cake', 'soda'], length(#item) > 4, upper(#item)) -> ['ICECREAM', 'CAKE']``
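+
+As an analogy outside data flow syntax, the following PowerShell sketch mimics the same filter-then-map behavior (illustration only; mapping data flows don't run PowerShell):
+
+```powershell
+# Keep elements matching the condition, then apply the mapping to each survivor,
+# mirroring mapIf([10, 20, 30], #item > 10, #item + 5) -> [25, 35].
+@(10, 20, 30) | Where-Object { $_ -gt 10 } | ForEach-Object { $_ + 5 }
+```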
+___
### <code>mapIndex</code> <code><b>mapIndex(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction) => any</b></code><br/><br/> Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item and a reference to the element index as #index. * ``mapIndex([1, 2, 3, 4], #item + 2 + #index) -> [4, 6, 8, 10]`` ___
+### <code>mapLoop</code>
+<code><b>mapLoop(<i>&lt;value1&gt;</i> : integer, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/>
+Loops from 1 to the specified length, creating an array of that length. It takes a mapping function where you can address the index in the array as #index. For deeply nested maps, you can refer to the parent maps using the #index_n(#index_1, #index_2...) notation.
+* ``mapLoop(3, #index * 10) -> [10, 20, 30]``
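+
+The same kind of PowerShell analogy applies here (illustration only):
+
+```powershell
+# Generate indexes 1..3 and map each one, mirroring mapLoop(3, #index * 10) -> [10, 20, 30].
+1..3 | ForEach-Object { $_ * 10 }
+```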
+___
### <code>reduce</code> <code><b>reduce(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : any, <i>&lt;value3&gt;</i> : binaryfunction, <i>&lt;value4&gt;</i> : unaryfunction) => any</b></code><br/><br/> Accumulates elements in an array. Reduce expects a reference to an accumulator and one element in the first expression function as #acc and #item and it expects the resulting value as #result to be used in the second expression function.
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/policy-reference.md
Previously updated : 03/05/2021 Last updated : 03/10/2021 # Azure Policy built-in definitions for Data Factory (Preview)
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 03/05/2021 Last updated : 03/10/2021
databox-online Azure Stack Edge Gpu 2101 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2101-release-notes.md
Previously updated : 02/22/2021 Last updated : 03/08/2021
This article applies to the **Azure Stack Edge 2101** release, which maps to sof
The following new features are available in the Azure Stack Edge 2101 release. -- **General availability of Azure Stack Edge Pro R and Azure Stack Edge Mini R devices** - Starting with this release, Azure Stack Edge Pro R and Azure Stack Edge Mini R devices will be available. For more information, see [What is Azure Stack Edge Pro R](azure-stack-edge-j-series-overview.md) and [What is Azure Stack Edge Mini R](azure-stack-edge-k-series-overview.md).
+- **General availability of Azure Stack Edge Pro R and Azure Stack Edge Mini R devices** - Starting with this release, Azure Stack Edge Pro R and Azure Stack Edge Mini R devices will be available. For more information, see [What is Azure Stack Edge Pro R](azure-stack-edge-pro-r-overview.md) and [What is Azure Stack Edge Mini R](azure-stack-edge-mini-r-overview.md).
- **Cloud management of Virtual Machines** - Beginning this release, you can create and manage the virtual machines on your device via the Azure portal. For more information, see [Deploy VMs via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md). - **Integration with Azure Monitor** - You can now use Azure Monitor to monitor containers from the compute applications that run on your device. The Azure Monitor metrics store is not supported in this release. For more information, see how to [Enable Azure Monitor on your device](azure-stack-edge-gpu-enable-azure-monitor.md). - **Edge container registry** - In this release, an Edge container registry is available that provides a repository at the edge on your device. You can use this registry to store and manage container images. For more information, see [Enable Edge container registry](azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md).
databox-online Azure Stack Edge Gpu Configure Gpu Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-configure-gpu-modules.md
+
+ Title: Run a GPU module on Microsoft Azure Stack Edge Pro GPU device| Microsoft Docs
+description: Describes how to configure and run a module on GPU on an Azure Stack Edge Pro device via the Azure portal.
++++++ Last updated : 03/08/2021++
+# Configure and run a module on GPU on Azure Stack Edge Pro device
++
+Your Azure Stack Edge Pro device contains one or more Graphics Processing Units (GPUs). GPUs are a popular choice for AI computations because they offer parallel processing capabilities and are faster at image rendering than Central Processing Units (CPUs). For more information on the GPU contained in your Azure Stack Edge Pro device, go to [Azure Stack Edge Pro device technical specifications](azure-stack-edge-gpu-technical-specifications-compliance.md).
+
+This article describes how to configure and run a module on the GPU on your Azure Stack Edge Pro device. In this article, you will use a publicly available container module, **Digits**, written for Nvidia T4 GPUs. The same procedure can be used to configure any other module published by Nvidia for these GPUs.
++
+## Prerequisites
+
+Before you begin, make sure that:
+
+1. You have access to a GPU-enabled, 1-node Azure Stack Edge Pro device. This device is activated with a resource in Azure.
+
+## Configure module to use GPU
+
+To configure a module to run on the GPU on your Azure Stack Edge Pro device, follow these steps.
+
+1. In the Azure portal, go to the resource associated with your device.
+
+2. In **Overview**, select **IoT Edge**.
+
+ ![Configure module to use GPU 1](media/azure-stack-edge-gpu-configure-gpu-modules/configure-compute-1.png)
+
+3. In **Enable IoT Edge service**, select **Add**.
+
+ ![Configure module to use GPU 2](media/azure-stack-edge-gpu-configure-gpu-modules/configure-compute-2.png)
+
+4. In **Create IoT Edge service**, enter settings for your IoT Hub resource:
+
+ |Field |Value |
+    |---------|---------|
+ |Subscription | Subscription used by the Azure Stack Edge resource. |
+ |Resource group | Resource group used by the Azure Stack Edge resource. |
+ |IoT Hub | Choose from **Create new** or **Use existing**. <br> By default, a Standard tier (S1) is used to create an IoT resource. To use a free tier IoT resource, create one and then select the existing resource. <br> In each case, the IoT Hub resource uses the same subscription and resource group that is used by the Azure Stack Edge resource. |
+ |Name | If you don't want to use the default name provided for a new IoT Hub resource, enter a different name. |
+
+ When you finish the settings, select **Review + Create**. Review the settings for your IoT Hub resource, and select **Create**.
+
+ ![Get started with compute 2](./media/azure-stack-edge-gpu-configure-gpu-modules/configure-compute-3.png)
+
+ Resource creation for an IoT Hub resource takes several minutes. After the resource is created, the **Overview** indicates the IoT Edge service is now running.
+
+ ![Get started with compute 3](./media/azure-stack-edge-gpu-configure-gpu-modules/configure-compute-4.png)
+
+5. To confirm the Edge compute role has been configured, select **Properties**.
+
+ ![Get started with compute 4](./media/azure-stack-edge-gpu-configure-gpu-modules/configure-compute-5.png)
+
+6. In **Properties**, select the link for **IoT Edge device**.
+
+ ![Configure module to use GPU 6](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-2.png)
+
+ In the right pane, you see the IoT Edge device associated with your Azure Stack Edge Pro device. This device corresponds to the IoT Edge device you created when creating the IoT Hub resource.
+
+7. Select this IoT Edge device.
+
+ ![Configure module to use GPU 7](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-3.png)
+
+8. Select **Set modules**.
+
+ ![Configure module to use GPU 8](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-4.png)
+
+9. Select **+ Add** and then select **+ IoT Edge module**.
+
+ ![Configure module to use GPU 9](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-5.png)
+
+10. In the **Add IoT Edge Module** tab:
+
+ 1. Provide the **Image URI**. You will use the publicly available Nvidia module **Digits** here.
+
+ 2. Set **Restart policy** to **always**.
+
+ 3. Set **Desired state** to **running**.
+
+ ![Configure module to use GPU 10](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-6.png)
+
+11. In the **Environment variables** tab, provide the name of the variable and the corresponding value.
+
+    1. To have the current module use one GPU on this device, use the NVIDIA_VISIBLE_DEVICES environment variable.
+
+    2. Set the value to 0 or 1 to assign a single GPU to this module. Set the value to 0, 1 to assign both the GPUs on your device to this module.
+
+ ![Configure module to use GPU 11](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-7.png)
+
+    For more information on environment variables that you can use with the Nvidia GPU, go to [Nvidia container runtime](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec). An illustration of this variable's effect appears after these steps.
+
+ > [!NOTE]
+    > A GPU can only be mapped to one module. A module can, however, use one, both, or no GPUs.
+
+12. Enter a name for your module. At this point, you can choose to provide container create options and modify module twin settings, or select **Add** if you're done.
+
+ ![Configure module to use GPU 12](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-8.png)
+
+13. Make sure that the module is running and select **Review + Create**.
+
+ ![Configure module to use GPU 13](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-9.png)
+
+14. In the **Review + Create** tab, the deployment options that you selected are displayed. Review the options and select **Create**.
+
+ ![Configure module to use GPU 14](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-10.png)
+
+15. Make a note of the **runtime status** of the module.
+
+ ![Configure module to use GPU 15](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-11.png)
+
+    It takes a couple of minutes for the module to be deployed. Select **Refresh** and you should see the **runtime status** update to **running**.
+
+ ![Configure module to use GPU 16](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-12.png)
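+
+As an illustration of how NVIDIA_VISIBLE_DEVICES behaves, the following hypothetical commands (assuming a host with Docker and the Nvidia container runtime; the image name is a placeholder, and this is not part of the portal procedure) restrict which GPUs a container can see:
+
+```powershell
+# Expose only the first GPU (index 0) to the container.
+docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 nvidia/cuda nvidia-smi
+
+# Expose both GPUs (indexes 0 and 1) to the container.
+docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 nvidia/cuda nvidia-smi
+```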
++
+## Next steps
+
+- Learn more about [Environment variables that you can use with the Nvidia GPU](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
databox-online Azure Stack Edge Gpu Configure Tls Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-configure-tls-settings.md
+
+ Title: Configure TLS 1.2 on Windows clients accessing Azure Stack Edge Pro GPU device
+description: Describes how to configure TLS 1.2 on Windows clients accessing Azure Stack Edge Pro GPU device.
++++++ Last updated : 02/22/2021+++
+# Configure TLS 1.2 on Windows clients accessing Azure Stack Edge Pro device
++
+If you use a Windows client to access your Azure Stack Edge Pro device, you must configure TLS 1.2 on the client. This article provides resources and guidelines to configure TLS 1.2 on your Windows client.
+
+The guidelines provided here are based on testing performed on a client running Windows Server 2016.
+
+## Configure TLS 1.2 for current PowerShell session
+
+Follow these steps to configure TLS 1.2 for the current PowerShell session.
+
+1. Run PowerShell as administrator.
+2. To set TLS 1.2 for the current PowerShell session, type:
+
+ ```azurepowershell
+    # Restrict this session to TLS 1.2.
+    $TLS12Protocol = [System.Net.SecurityProtocolType] 'Tls12'
+    [System.Net.ServicePointManager]::SecurityProtocol = $TLS12Protocol
+ ```
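+
+    To confirm the setting, you can read the property back:
+
+    ```azurepowershell
+    # Inspect the protocols now enabled for this PowerShell session.
+    [System.Net.ServicePointManager]::SecurityProtocol
+    ```
+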
+## Configure TLS 1.2 on client
+
+If you want to set system-wide TLS 1.2 for your environment, follow the guidelines in these documents:
+
+- [General- how to enable TLS 1.2](/windows-server/security/tls/tls-registry-settings#tls-12)
+- [How to enable TLS 1.2 on clients](/configmgr/core/plan-design/security/enable-tls-1-2-client)
+- [How to enable TLS 1.2 on the site servers and remote site systems](/configmgr/core/plan-design/security/enable-tls-1-2-server)
+- [Protocols in TLS/SSL (Schannel SSP)](/windows-server/security/tls/manage-tls#configuring-tls-ecc-curve-order)
+- [Cipher Suites](/windows-server/security/tls/tls-registry-settings#tls-12): Specifically [Configuring TLS Cipher Suite Order](/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order)
+    Make sure that you list your current cipher suites and prepend any that are missing from the following list:
+
+ - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+ - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
+ - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+
+ You can also add these cipher suites by directly editing the registry settings.
+
+    ```azurepowershell
+    # $HklmSoftwarePath is assumed to point at the machine policies root, for example:
+    $HklmSoftwarePath = "HKLM:\SOFTWARE"
+    New-ItemProperty -Path "$HklmSoftwarePath\Policies\Microsoft\Cryptography\Configuration\SSL\00010002" -Name "Functions" -PropertyType String -Value ("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384")
+ ```
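+
+    To list the cipher suites currently enabled on the client, you can use the `Get-TlsCipherSuite` cmdlet (available on Windows Server 2016 and later):
+
+    ```azurepowershell
+    # List enabled cipher suites by name.
+    Get-TlsCipherSuite | Format-Table Name
+    ```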
+
+- How to set elliptic curves
+
+    Make sure that you list your current elliptic curves and prepend any that are missing from the following list:
+
+    - P-256
+    - P-384
+
+    You can also add these elliptic curves by directly editing the registry settings.
+
+    ```azurepowershell
+    # $HklmSoftwarePath is assumed to point at the machine policies root, for example:
+    $HklmSoftwarePath = "HKLM:\SOFTWARE"
+    New-ItemProperty -Path "$HklmSoftwarePath\Policies\Microsoft\Cryptography\Configuration\SSL\00010002" -Name "EccCurves" -PropertyType MultiString -Value @("NistP256", "NistP384")
+    ```
+
+ - [Set min RSA key exchange size to 2048](/windows-server/security/tls/tls-registry-settings#keyexchangealgorithmclient-rsa-key-sizes).
+++
+## Next steps
+
+[Connect to Azure Resource Manager](azure-stack-edge-gpu-connect-resource-manager.md)
databox-online Azure Stack Edge Gpu Connect Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-connect-resource-manager.md
+
+ Title: Connect to Azure Resource Manager on your Azure Stack Edge Pro GPU device
+description: Describes how to connect to the Azure Resource Manager running on your Azure Stack Edge Pro GPU using Azure PowerShell.
++++++ Last updated : 03/01/2021+
+#Customer intent: As an IT admin, I need to understand how to connect to Azure Resource Manager on my Azure Stack Edge Pro device so that I can manage resources.
++
+# Connect to Azure Resource Manager on your Azure Stack Edge Pro device
++
+Azure Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. The Azure Stack Edge Pro device supports the same Azure Resource Manager APIs to create, update, and delete VMs in a local subscription. This support lets you manage the device in a manner consistent with the cloud.
+
+This tutorial describes how to connect to the local APIs on your Azure Stack Edge Pro device via Azure Resource Manager using Azure PowerShell.
+
+## About Azure Resource Manager
+
+Azure Resource Manager provides a consistent management layer to call the Azure Stack Edge Pro device API and perform operations such as create, update, and delete VMs. The architecture of the Azure Resource Manager is detailed in the following diagram.
+
+![Diagram for Azure Resource Manager](media/azure-stack-edge-gpu-connect-resource-manager/edge-device-flow.svg)
++
+## Endpoints on Azure Stack Edge Pro device
+
+The following table summarizes the various endpoints exposed on your device, the supported protocols, and the ports to access those endpoints. Throughout the article, you will find references to these endpoints.
+
+| # | Endpoint | Supported protocols | Port used | Used for |
+| --- | --- | --- | --- | --- |
+| 1. | Azure Resource Manager | https | 443 | To connect to Azure Resource Manager for automation |
+| 2. | Security token service | https | 443 | To authenticate via access and refresh tokens |
+| 3. | Blob | https | 443 | To connect to Blob storage via REST |
++
+## Connecting to Azure Resource Manager workflow
+
+The process of connecting to local APIs of the device using Azure Resource Manager requires the following steps:
+
+| Step # | You'll do this step ... | ... on this location. |
+| --- | --- | --- |
+| 1. | [Configure your Azure Stack Edge Pro device](#step-1-configure-azure-stack-edge-pro-device) | Local web UI |
+| 2. | [Create and install certificates](#step-2-create-and-install-certificates) | Windows client/local web UI |
+| 3. | [Review and configure the prerequisites](#step-3-install-powershell-on-the-client) | Windows client |
+| 4. | [Set up Azure PowerShell on the client](#step-4-set-up-azure-powershell-on-the-client) | Windows client |
+| 5. | [Modify host file for endpoint name resolution](#step-5-modify-host-file-for-endpoint-name-resolution) | Windows client or DNS server |
+| 6. | [Check that the endpoint name is resolved](#step-6-verify-endpoint-name-resolution-on-the-client) | Windows client |
+| 7. | [Use Azure PowerShell cmdlets to verify connection to Azure Resource Manager](#step-7-set-azure-resource-manager-environment) | Windows client |
+
+The following sections detail each of the above steps in connecting to Azure Resource Manager.
+
+## Prerequisites
+
+Before you begin, make sure that the client used for connecting to the device via Azure Resource Manager is using TLS 1.2. For more information, go to [Configure TLS 1.2 on Windows clients accessing Azure Stack Edge Pro device](azure-stack-edge-gpu-configure-tls-settings.md).
+
+## Step 1: Configure Azure Stack Edge Pro device
+
+Take the following steps in the local web UI of your Azure Stack Edge Pro device.
+
+1. Complete the network settings for your Azure Stack Edge Pro device.
+
+ ![Local web UI "Network settings" page](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/compute-network-2.png)
++
+ Make a note of the device IP address. You will use this IP later.
+
+2. Configure the device name and the DNS domain from the **Device** page. Make a note of the device name and the DNS domain as you will use these later.
+
+ ![Local web UI "Device" page](./media/azure-stack-edge-gpu-deploy-set-up-device-update-time/device-2.png)
+
+ > [!IMPORTANT]
+    > The device name and DNS domain will be used to form the endpoints that are exposed.
+ > Use the Azure Resource Manager and Blob endpoints from the **Device** page in the local web UI.
++
+## Step 2: Create and install certificates
+
+Certificates ensure that your communication is trusted. On your Azure Stack Edge Pro device, self-signed appliance, blob, and Azure Resource Manager certificates are automatically generated. Optionally, you can bring in your own signed blob and Azure Resource Manager certificates as well.
+
+When you bring in a signed certificate of your own, you also need the corresponding signing chain of the certificate. For the signing chain, Azure Resource Manager, and blob certificates on the device, you will also need the corresponding certificates on the client machine to authenticate and communicate with the device.
+
+To connect to Azure Resource Manager, you will need to create or get signing chain and endpoint certificates, import these certificates on your Windows client, and finally upload these certificates on the device.
+
+### Create certificates (Optional)
+
+For test and development use only, you can use Windows PowerShell to create certificates on your local system. While creating the certificates for the client, follow these guidelines:
+
+1. You first need to create a root certificate for the signing chain. For more information, see the steps to [Create signing chain certificates](azure-stack-edge-gpu-manage-certificates.md#create-signing-chain-certificate).
+
+2. You can next create the endpoint certificates for the blob and Azure Resource Manager. You can get these endpoints from the **Device** page in the local web UI. See the steps to [Create endpoint certificates](azure-stack-edge-gpu-manage-certificates.md#create-signed-endpoint-certificates).
+
+3. For all these certificates, make sure that the subject name and subject alternate name conform to the following guidelines:
+
+ |Type |Subject name (SN) |Subject alternative name (SAN) |Subject name example |
+    |---------|---------|---------|---------|
+ |Azure Resource Manager|`management.<Device name>.<Dns Domain>`|`login.<Device name>.<Dns Domain>`<br>`management.<Device name>.<Dns Domain>`|`management.mydevice1.microsoftdatabox.com` |
+    |Blob storage|`*.blob.<Device name>.<Dns Domain>`|`*.blob.<Device name>.<Dns Domain>`|`*.blob.mydevice1.microsoftdatabox.com` |
+ |Multi-SAN single certificate for both endpoints|`<Device name>.<dnsdomain>`|`login.<Device name>.<Dns Domain>`<br>`management.<Device name>.<Dns Domain>`<br>`*.blob.<Device name>.<Dns Domain>`|`mydevice1.microsoftdatabox.com` |
+
+For more information on certificates, go to how to [Manage certificates](azure-stack-edge-gpu-manage-certificates.md).
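+
+For a test environment, a self-signed endpoint certificate that follows the subject-name guidance above can be created and exported with Windows PowerShell. The following is a sketch with placeholder names and password, not a substitute for the signing-chain procedure in the linked article:
+
+```powershell
+# Create a self-signed certificate covering the Azure Resource Manager endpoint names (placeholders).
+$cert = New-SelfSignedCertificate -Subject "CN=management.mydevice1.microsoftdatabox.com" `
+    -DnsName "management.mydevice1.microsoftdatabox.com", "login.mydevice1.microsoftdatabox.com" `
+    -CertStoreLocation "Cert:\LocalMachine\My"
+
+# Export it as a .pfx file with its private key (placeholder password).
+$pfxPassword = ConvertTo-SecureString -String "Password1" -Force -AsPlainText
+Export-PfxCertificate -Cert $cert -FilePath ".\management.pfx" -Password $pfxPassword
+```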
+
+### Upload certificates on the device
+
+The certificates that you created in the previous step will be in the Personal store on your client. These certificates need to be exported from your client into appropriately formatted files that can then be uploaded to your device.
+
+1. The root certificate must be exported as a DER format file with *.cer* file extension. For detailed steps, see [Export certificates as a .cer format file](azure-stack-edge-gpu-manage-certificates.md#export-certificates-as-der-format).
+
+2. The endpoint certificates must be exported as *.pfx* files with private keys. For detailed steps, see [Export certificates as .pfx file with private keys](azure-stack-edge-gpu-manage-certificates.md#export-certificates-as-pfx-format-with-private-key).
+
+3. The root and endpoint certificates are then uploaded on the device using the **+Add certificate** option on the **Certificates** page in the local web UI. To upload the certificates, follow the steps in [Upload certificates](azure-stack-edge-gpu-manage-certificates.md#upload-certificates).
++
+### Import certificates on the client running Azure PowerShell
+
+The Windows client where you will invoke the Azure Resource Manager APIs needs to establish trust with the device. To this end, the certificates that you created in the previous step must be imported on your Windows client into the appropriate certificate store.
+
+1. The root certificate that you exported as the DER format with *.cer* extension should now be imported in the Trusted Root Certificate Authorities on your client system. For detailed steps, see [Import certificates into the Trusted Root Certificate Authorities store.](azure-stack-edge-gpu-manage-certificates.md#import-certificates-as-der-format)
+
+2. The endpoint certificates that you exported as *.pfx* files must be exported again in the *.cer* format. This *.cer* file is then imported into the **Personal** certificate store on your system. For detailed steps, see [Import certificates into personal store](azure-stack-edge-gpu-manage-certificates.md#import-certificates-as-der-format).
+
+## Step 3: Install PowerShell on the client
+
+Your Windows client must meet the following prerequisites:
+
+1. Run PowerShell version 5.0. PowerShell Core is not supported. To check the version of PowerShell on your system, run the following cmdlet:
+
+ ```powershell
+ $PSVersionTable.PSVersion
+ ```
+
+ Compare the **Major** version and ensure that it is 5.0 or later.
+
+ If you have an outdated version, see [Upgrading existing Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-6&preserve-view=true#upgrading-existing-windows-powershell).
+
+    If you don't have PowerShell 5.0, follow [Installing Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-6&preserve-view=true).
+
+ A sample output is shown below.
+
+ ```powershell
+ Windows PowerShell
+ Copyright (C) Microsoft Corporation. All rights reserved.
+ Try the new cross-platform PowerShell https://aka.ms/pscore6
+ PS C:\windows\system32> $PSVersionTable.PSVersion
+ Major Minor Build Revision
+    ----- ----- ----- --------
+ 5 1 18362 145
+ ```
+
+2. You can access the PowerShell Gallery.
+
+ Run PowerShell as administrator. Verify if the `PSGallery` is registered as a repository.
+
+ ```powershell
+ Import-Module -Name PowerShellGet -ErrorAction Stop
+ Import-Module -Name PackageManagement -ErrorAction Stop
+ Get-PSRepository -Name "PSGallery"
+ ```
+
+ A sample output is shown below.
+
+ ```powershell
+ PS C:\windows\system32> Import-Module -Name PowerShellGet -ErrorAction Stop
+ PS C:\windows\system32> Import-Module -Name PackageManagement -ErrorAction Stop
+ PS C:\windows\system32> Get-PSRepository -Name "PSGallery"
+ Name InstallationPolicy SourceLocation
+    ---- ------------------ --------------
+ PSGallery Trusted https://www.powershellgallery.com/api/v2
+ ```
+
+If your repository is not trusted or you need more information, see [Validate the PowerShell Gallery accessibility](/azure-stack/operator/azure-stack-powershell-install?view=azs-1908&preserve-view=true&preserve-view=true#2-validate-the-powershell-gallery-accessibility).
+
+## Step 4: Set up Azure PowerShell on the client
+
+<!--1. Verify the API profile of the client and identify which version of the Azure PowerShell modules and libraries to include on your client. In this example, the client system will be running Azure Stack 1904 or later. For more information, see [Azure Resource Manager API profiles](/azure-stack/user/azure-stack-version-profiles?view=azs-1908#azure-resource-manager-api-profiles).-->
+
+1. Install the Azure PowerShell modules that will work with your device on your client.
+
+    a. Run PowerShell as an administrator. You need access to the PowerShell Gallery.
++
+ b. To install the required Azure PowerShell modules from the PowerShell Gallery, run the following command:
+
+ ```powershell
+ # Install the AzureRM.BootStrapper module. Select Yes when prompted to install NuGet.
+
+ Install-Module -Name AzureRM.BootStrapper
+
+ # Install and import the API Version Profile into the current PowerShell session.
+
+ Use-AzureRmProfile -Profile 2019-03-01-hybrid -Force
+
+ # Confirm the installation of PowerShell
+ Get-Module -Name "Azure*" -ListAvailable
+ ```
+
+    Make sure that you have AzureRM module version 2.5.0 running at the end of the installation.
+    If you have an existing version of the AzureRM module that does not match the required version, uninstall it using the following command:
+
+ `Get-Module -Name Azure* -ListAvailable | Uninstall-Module -Force -Verbose`
+
+ You will now need to install the required version again.
+
+
+A sample output is shown below that indicates the AzureRM version 2.5.0 modules were installed successfully.
+
+```powershell
+PS C:\windows\system32> Install-Module -Name AzureRM.BootStrapper
+PS C:\windows\system32> Use-AzureRmProfile -Profile 2019-03-01-hybrid -Force
+Loading Profile 2019-03-01-hybrid
+PS C:\windows\system32> Get-Module -Name "Azure*" -ListAvailable
+
+ Directory: C:\Program Files\WindowsPowerShell\Modules
+
+ModuleType Version Name ExportedCommands
+---------- ------- ---- ----------------
+Script 4.5.0 Azure.Storage {Get-AzureStorageTable, New-AzureStorageTableSASToken, New...
+Script 2.5.0 AzureRM
+Script 0.5.0 AzureRM.BootStrapper {Update-AzureRmProfile, Uninstall-AzureRmProfile, Install-...
+Script 4.6.1 AzureRM.Compute {Remove-AzureRmAvailabilitySet, Get-AzureRmAvailabilitySet...
+Script 3.5.1 AzureRM.Dns {Get-AzureRmDnsRecordSet, New-AzureRmDnsRecordConfig, Remo...
+Script 5.1.5 AzureRM.Insights {Get-AzureRmMetricDefinition, Get-AzureRmMetric, Remove-Az...
+Script 4.2.0 AzureRM.KeyVault {Add-AzureKeyVaultCertificate, Set-AzureKeyVaultCertificat...
+Script 5.0.1 AzureRM.Network {Add-AzureRmApplicationGatewayAuthenticationCertificate, G...
+Script 5.8.3 AzureRM.profile {Disable-AzureRmDataCollection, Disable-AzureRmContextAuto...
+Script 6.4.3 AzureRM.Resources {Get-AzureRmProviderOperation, Remove-AzureRmRoleAssignmen...
+Script 5.0.4 AzureRM.Storage {Get-AzureRmStorageAccount, Get-AzureRmStorageAccountKey, ...
+Script 4.0.2 AzureRM.Tags {Remove-AzureRmTag, Get-AzureRmTag, New-AzureRmTag}
+Script 4.0.3 AzureRM.UsageAggregates Get-UsageAggregates
+Script 5.0.1 AzureRM.Websites {Get-AzureRmAppServicePlan, Set-AzureRmAppServicePlan, New...
+
+
+ Directory: C:\Program Files (x86)\Microsoft Azure Information Protection\Powershell
+
+ModuleType Version Name ExportedCommands
+---------- ------- ---- ----------------
+Binary 1.48.204.0 AzureInformationProtection {Clear-RMSAuthentication, Get-RMSFileStatus, Get-RMSServer...
+```
++
+## Step 5: Modify host file for endpoint name resolution
+
+You will now add the Azure consistent VIP that you defined in the local web UI of the device to:
+
+- The host file on the client, OR,
+- The DNS server configuration
+
+> [!IMPORTANT]
+> We recommend that you modify the DNS server configuration for endpoint name resolution.
+
+On your Windows client that you are using to connect to the device, take the following steps:
+
+1. Start **Notepad** as an administrator, and then open the **hosts** file located at C:\Windows\System32\Drivers\etc.
+
+ ![Windows Explorer hosts file](media/azure-stack-edge-gpu-connect-resource-manager/hosts-file.png)
+
+2. Add the following entries to your **hosts** file, replacing the placeholders with the appropriate values for your device (a scripted alternative appears after these steps):
+
+ ```
+ <Device IP> login.<appliance name>.<DNS domain>
+ <Device IP> management.<appliance name>.<DNS domain>
+ <Device IP> <storage name>.blob.<appliance name>.<DNS domain>
+ ```
+
+ > [!IMPORTANT]
+    > The entry in the hosts file should exactly match what you provide to connect to Azure Resource Manager in a later step. Make sure that the DNS domain entry here is all lowercase.
+
+ You saved the device IP from the local web UI in an earlier step.
+
+ The login.\<appliance name\>.\<DNS domain\> entry is the endpoint for Security Token Service (STS). STS is responsible for creation, validation, renewal, and cancellation of security tokens. The security token service is used to create the access token and refresh token that are used for continuous communication between the device and the client.
+
+3. For reference, use the following image. Save the **hosts** file.
+
+ ![hosts file in Notepad](media/azure-stack-edge-gpu-connect-resource-manager/hosts-file-notepad.png)
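+
+If you prefer to script the change instead of editing the file in Notepad, here is a sketch with placeholder values (run PowerShell as administrator):
+
+```powershell
+# Append the three endpoint entries to the hosts file (placeholder IP and names).
+$deviceIp = "10.126.68.83"
+Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value @(
+    "$deviceIp login.mydevice1.microsoftdatabox.com",
+    "$deviceIp management.mydevice1.microsoftdatabox.com",
+    "$deviceIp mystorageacct.blob.mydevice1.microsoftdatabox.com"
+)
+```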
+
+## Step 6: Verify endpoint name resolution on the client
+
+Check if the endpoint name is resolved on the client that you are using to connect to the Azure consistent VIP.
+
+1. You can use the ping.exe command-line utility to check that the endpoint name is resolved. Given an IP address, the ping command returns the TCP/IP host name of the computer you're tracing.
+
+    Add the `-a` switch to the command line as shown in the example below. If the host name can be resolved, ping returns it in the reply.
+
+ ![Ping in command prompt](media/azure-stack-edge-gpu-connect-resource-manager/ping-command-prompt.png)
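+
+    For example (10.126.68.83 is a placeholder device IP):
+
+    ```powershell
+    # Reverse-resolve the device IP; the reply includes the host name if resolution succeeds.
+    ping -a 10.126.68.83
+    ```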
+++
+## Step 7: Set Azure Resource Manager environment
+
+Set the Azure Resource Manager environment and verify that your device-to-client communication via Azure Resource Manager is working. Take the following steps for this verification:
++
+1. Use the `Add-AzureRmEnvironment` cmdlet to further ensure that the communication via Azure Resource Manager is working properly and that the API calls are going through port 443, which is dedicated to Azure Resource Manager.
+
+ The `Add-AzureRmEnvironment` cmdlet adds endpoints and metadata to enable Azure Resource Manager cmdlets to connect with a new instance of Azure Resource Manager.
++
+ > [!IMPORTANT]
+ > The Azure Resource Manager endpoint URL that you provide in the following cmdlet is case-sensitive. Make sure the endpoint URL is all in lowercase and matches what you provided in the hosts file. If the case doesn't match, then you will see an error.
+
+ ```powershell
+ Add-AzureRmEnvironment -Name <Environment Name> -ARMEndpoint "https://management.<appliance name>.<DNSDomain>/"
+ ```
+
+ A sample output is shown below:
+
+ ```powershell
+ PS C:\windows\system32> Add-AzureRmEnvironment -Name AzDBE -ARMEndpoint https://management.dbe-n6hugc2ra.microsoftdatabox.com/
+
+ Name Resource Manager Url ActiveDirectory Authority
+    ----  --------------------                                   -------------------------
+ AzDBE https://management.dbe-n6hugc2ra.microsoftdatabox.com https://login.dbe-n6hugc2ra.microsoftdatabox.com/adfs/
+ ```
+
+2. Set the environment as Azure Stack Edge Pro and the port to be used for Azure Resource Manager calls as 443. You define the environment in two ways:
+
+ - Set the environment. Type the following command:
+
+ ```powershell
+ Set-AzureRMEnvironment -Name <Environment Name>
+ ```
+
+ For more information, go to [Set-AzureRMEnvironment](/powershell/module/azurerm.profile/set-azurermenvironment?view=azurermps-6.13.0&preserve-view=true).
+
+    - Define the environment inline for every cmdlet that you execute. This ensures that all the API calls are going through the correct environment. By default, the calls would go through the Azure public cloud, but you want these to go through the environment that you set for the Azure Stack Edge Pro device.
+
+ - See more information on [how to switch AzureRM environments](#switch-environments).
+
+3. Call local device APIs to authenticate the connections to Azure Resource Manager.
+
+ 1. These credentials are for a local machine account and are solely used for API access.
+
+ 2. You can connect via `login-AzureRMAccount` or via `Connect-AzureRMAccount` command.
+
+    1. To sign in, type the following command. The tenant ID in this instance is hard-coded: c0257de7-538f-415c-993a-1b87a031879d. Use the following username and password.
+
+ - **Username** - *EdgeArmUser*
+
+ - **Password** - [Set the password for Azure Resource Manager](azure-stack-edge-gpu-set-azure-resource-manager-password.md) and use this password to sign in.
+
+ ```powershell
+ PS C:\windows\system32> $pass = ConvertTo-SecureString "<Your password>" -AsPlainText -Force;
+ PS C:\windows\system32> $cred = New-Object System.Management.Automation.PSCredential("EdgeArmUser", $pass)
+ PS C:\windows\system32> Connect-AzureRmAccount -EnvironmentName AzDBE -TenantId c0257de7-538f-415c-993a-1b87a031879d -credential $cred
+
+ Account SubscriptionName TenantId Environment
+    -------               ----------------              --------                             -----------
+ EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzDBE
+
+ PS C:\windows\system32>
+ ```
+
+
+ An alternative way to log in is to use the `login-AzureRmAccount` cmdlet.
+
+    `login-AzureRMAccount -EnvironmentName <Environment Name> -TenantId c0257de7-538f-415c-993a-1b87a031879d`
+
+ Here is a sample output of the command.
+
+ ```powershell
+ PS C:\Users\Administrator> login-AzureRMAccount -EnvironmentName AzDBE -TenantId c0257de7-538f-415c-993a-1b87a031879d
+
+ Account SubscriptionName TenantId Environment
+    -------               ----------------              --------                             -----------
+ EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzDBE
+ PS C:\Users\Administrator>
+ ```
+++
+> [!IMPORTANT]
+> The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge Pro device restarts. If this happens, any cmdlets that you execute will return error messages indicating that you are no longer connected to Azure. You will need to sign in again.
+
+## Switch environments
+
+Run `Disconnect-AzureRmAccount` command to switch to a different `AzureRmEnvironment`.
+
+If you use `Set-AzureRmEnvironment` and `Login-AzureRmAccount` without using `Disconnect-AzureRmAccount`, the environment is not actually switched.
+
+The following examples show how to switch between two environments, `AzDBE1` and `AzDBE2`.
+
+First, list all the existing environments on your client.
++
+```azurepowershell
+PS C:\WINDOWS\system32> Get-AzureRmEnvironment
+Name              Resource Manager Url                                  ActiveDirectory Authority
+----              --------------------                                  -------------------------
+AzureChinaCloud   https://management.chinacloudapi.cn/                  https://login.chinacloudapi.cn/
+AzureCloud        https://management.azure.com/                         https://login.microsoftonline.com/
+AzureGermanCloud  https://management.microsoftazure.de/                 https://login.microsoftonline.de/
+AzDBE1            https://management.HVTG1T2-Test.microsoftdatabox.com  https://login.hvtg1t2-test.microsoftdatabox.com/adfs/
+AzureUSGovernment https://management.usgovcloudapi.net/                 https://login.microsoftonline.us/
+AzDBE2            https://management.CVV4PX2-Test.microsoftdatabox.com  https://login.cvv4px2-test.microsoftdatabox.com/adfs/
+```
+
+Next, check which environment you are currently connected to via Azure Resource Manager.
+
+```azurepowershell
+PS C:\WINDOWS\system32> Get-AzureRmContext |fl *
+
+Name               : Default Provider Subscription (A4257FDE-B946-4E01-ADE7-674760B8D1A3) - EdgeArmUser@localhost
+Account            : EdgeArmUser@localhost
+Environment        : AzDBE2
+Subscription       : a4257fde-b946-4e01-ade7-674760b8d1a3
+Tenant             : c0257de7-538f-415c-993a-1b87a031879d
+TokenCache         : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCache
+VersionProfile     :
+ExtendedProperties : {}
+
+You should now disconnect from the current environment before you switch to the other environment.
+
+```azurepowershell
+PS C:\WINDOWS\system32> Disconnect-AzureRmAccount
+
+Id                    : EdgeArmUser@localhost
+Type                  : User
+Tenants               : {c0257de7-538f-415c-993a-1b87a031879d}
+AccessToken           :
+Credential            :
+TenantMap             : {}
+CertificateThumbprint :
+ExtendedProperties    : {[Subscriptions, A4257FDE-B946-4E01-ADE7-674760B8D1A3], [Tenants, c0257de7-538f-415c-993a-1b87a031879d]}
+```
+
+Log in to the other environment. A sample output is shown below.
+
+```azurepowershell
+PS C:\WINDOWS\system32> Login-AzureRmAccount -Environment "AzDBE1" -TenantId $ArmTenantId
+
+Account               SubscriptionName              TenantId                             Environment
+-------               ----------------              --------                             -----------
+EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzDBE1
+```
+
+Run this cmdlet to confirm which environment you are connected to.
+
+```azurepowershell
+PS C:\WINDOWS\system32> Get-AzureRmContext |fl *
+
+Name               : Default Provider Subscription (A4257FDE-B946-4E01-ADE7-674760B8D1A3) - EdgeArmUser@localhost
+Account            : EdgeArmUser@localhost
+Environment        : AzDBE1
+Subscription       : a4257fde-b946-4e01-ade7-674760b8d1a3
+Tenant             : c0257de7-538f-415c-993a-1b87a031879d
+TokenCache         : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCache
+VersionProfile     :
+ExtendedProperties : {}
+```
+You have now switched to the intended environment.
+
+## Next steps
+
+[Deploy VMs on your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
databox-online Azure Stack Edge Gpu Create Iot Edge Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-iot-edge-module.md
+
+ Title: C# IoT Edge module for Azure Stack Edge Pro with GPU | Microsoft Docs
+description: Learn how to develop a C# IoT Edge module that can be deployed on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 03/05/2021+++
+# Develop a C# IoT Edge module to move files on Azure Stack Edge Pro
++
+This article steps you through how to create an IoT Edge module for deployment with your Azure Stack Edge Pro device. Azure Stack Edge Pro is a storage solution that allows you to process data and send it over the network to Azure.
+
+You can use Azure IoT Edge modules with your Azure Stack Edge Pro to transform the data as it moves to Azure. The module used in this article implements the logic to copy a file from a local share to a cloud share on your Azure Stack Edge Pro device.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a container registry to store and manage your modules (Docker images).
+> * Create an IoT Edge module to deploy on your Azure Stack Edge Pro device.
++
+## About the IoT Edge module
+
+Your Azure Stack Edge Pro device can deploy and run IoT Edge modules. Edge modules are essentially Docker containers that perform a specific task, such as ingesting a message from a device, transforming a message, or sending a message to an IoT Hub. In this article, you will create a module that copies files from a local share to a cloud share on your Azure Stack Edge Pro device.
+
+1. Files are written to the local share on your Azure Stack Edge Pro device.
+2. The file event generator creates a file event for each file written to the local share. The file events are also generated when a file is modified. The file events are then sent to IoT Edge Hub (in IoT Edge runtime).
+3. The IoT Edge custom module processes the file event to create a file event object that also contains a relative path for the file. The module generates an absolute path using the relative file path and copies the file from the local share to the cloud share. The module then deletes the file from the local share.
+
+![How Azure IoT Edge module works on Azure Stack Edge Pro](./media/azure-stack-edge-gpu-create-iot-edge-module/how-module-works-1.png)
+
+Once the file is in the cloud share, it automatically gets uploaded to your Azure Storage account.
+
+## Prerequisites
+
+Before you begin, make sure you have:
+
+- An Azure Stack Edge Pro device that is running.
+
+ - The device also has an associated IoT Hub resource.
+ - The device has Edge compute role configured.
+ For more information, go to [Configure compute](azure-stack-edge-j-series-deploy-configure-compute.md#configure-compute) for your Azure Stack Edge Pro.<!--Update link?-->
+
+- The following development resources:
+
+ - [Visual Studio Code](https://code.visualstudio.com/).
+ - [C# for Visual Studio Code (powered by OmniSharp) extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp).
+ - [Azure IoT Edge extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
+ - [.NET Core 2.1 SDK](https://www.microsoft.com/net/download).
+ - [Docker CE](https://store.docker.com/editions/community/docker-ce-desktop-windows). You may have to create an account to download and install the software.
+
+## Create a container registry
+
+An Azure container registry is a private Docker registry in Azure where you can store and manage your private Docker container images. The two popular Docker registry services available in the cloud are Azure Container Registry and Docker Hub. This article uses Azure Container Registry.
+
+1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+2. Select **Create a resource > Containers > Container Registry**. Click **Create**.
+3. Provide:
+
+ 1. A unique **Registry name** within Azure that contains 5 to 50 alphanumeric characters.
+ 2. Choose a **Subscription**.
+ 3. Create new or choose an existing **Resource group**.
+ 4. Select a **Location**. We recommend that this location be the same as the one associated with the Azure Stack Edge resource.
+ 5. Toggle **Admin user** to **Enable**.
+ 6. Set the SKU to **Basic**.
+
+ ![Create container registry](./media/azure-stack-edge-gpu-create-iot-edge-module/create-container-registry-1.png)
+
+4. Select **Create**.
+5. After your container registry is created, browse to it, and select **Access keys**.
+
+ ![Get Access keys](./media/azure-stack-edge-gpu-create-iot-edge-module/get-access-keys-1.png)
+
+6. Copy the values for **Login server**, **Username**, and **Password**. You use these values later to publish the Docker image to your registry and to add the registry credentials to the Azure IoT Edge runtime.
++
+## Create an IoT Edge module project
+
+The following steps create an IoT Edge module project based on the .NET Core 2.1 SDK. The project uses Visual Studio Code and the Azure IoT Edge extension.
+
+### Create a new solution
+
+Create a C# solution template that you can customize with your own code.
+
+1. In Visual Studio Code, select **View > Command Palette** to open the VS Code command palette.
+2. In the command palette, enter and run the command **Azure: Sign in** and follow the instructions to sign in to your Azure account. If you're already signed in, you can skip this step.
+3. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**. In the command palette, provide the following information to create your solution:
+
+ 1. Select the folder where you want to create the solution.
+ 2. Provide a name for your solution or accept the default **EdgeSolution**.
+
+ ![Create new solution 1](./media/azure-stack-edge-gpu-create-iot-edge-module/create-new-solution-1.png)
+
+ 3. Choose **C# Module** as the module template.
+ 4. Replace the default module name with the name you want to assign; in this case, it's **FileCopyModule**.
+
+ ![Create new solution 2](./media/azure-stack-edge-gpu-create-iot-edge-module/create-new-solution-2.png)
+
+ 5. Specify the container registry that you created in the previous section as the image repository for your first module. Replace **localhost:5000** with the login server value that you copied.
+
+ The final string looks like `<Login server name>/<Module name>`. In this example, the string is: `mycontreg2.azurecr.io/filecopymodule`.
+
+ ![Create new solution 3](./media/azure-stack-edge-gpu-create-iot-edge-module/create-new-solution-3.png)
+
+4. Go to **File > Open Folder**.
+
+ ![Create new solution 4](./media/azure-stack-edge-gpu-create-iot-edge-module/create-new-solution-4.png)
+
+5. Browse and point to the **EdgeSolution** folder that you created earlier. The VS Code window loads your IoT Edge solution workspace with its five top-level components. You won't edit the **.vscode** folder, **.gitignore** file, **.env** file, or **deployment.template.json** file in this article.
+
+ The only component that you modify is the modules folder. This folder has the C# code for your module and Docker files to build your module as a container image.
+
+ ![Create new solution 5](./media/azure-stack-edge-gpu-create-iot-edge-module/create-new-solution-5.png)
+
+### Update the module with custom code
+
+1. In the VS Code explorer, open **modules > FileCopyModule > Program.cs**.
+2. At the top of the **FileCopyModule namespace**, add the following using statements for types that are used later. The **Microsoft.Azure.Devices.Client.Transport.Mqtt** namespace provides the MQTT transport that's used to send messages to the IoT Edge hub.
+
+ ```csharp
+ namespace FileCopyModule
+ {
+ using Microsoft.Azure.Devices.Client.Transport.Mqtt;
+ using Newtonsoft.Json;
+ ```
+3. Add the **InputFolderPath** and **OutputFolderPath** variables to the Program class.
+
+ ```csharp
+ class Program
+ {
+ static int counter;
+ private const string InputFolderPath = "/home/input";
+ private const string OutputFolderPath = "/home/output";
+ ```
+
+4. Immediately after the previous step, add the **FileEvent** class to define the message body.
+
+ ```csharp
+ /// <summary>
+ /// The FileEvent class defines the body of incoming messages.
+ /// </summary>
+ private class FileEvent
+ {
+ public string ChangeType { get; set; }
+
+ public string ShareRelativeFilePath { get; set; }
+
+ public string ShareName { get; set; }
+ }
+ ```
+
+5. In the **Init method**, the code creates and configures a **ModuleClient** object. This object allows the module to connect to the local Azure IoT Edge runtime using MQTT protocol to send and receive messages. The connection string that's used in the Init method is supplied to the module by the IoT Edge runtime. The code registers a FileCopy callback to receive messages from an IoT Edge hub via the **input1** endpoint. Replace the **Init method** with the following code.
+
+ ```csharp
+ /// <summary>
+ /// Initializes the ModuleClient and sets up the callback to receive
+ /// messages containing file event information
+ /// </summary>
+ static async Task Init()
+ {
+ MqttTransportSettings mqttSetting = new MqttTransportSettings(TransportType.Mqtt_Tcp_Only);
+ ITransportSettings[] settings = { mqttSetting };
+
+ // Open a connection to the IoT Edge runtime
+ ModuleClient ioTHubModuleClient = await ModuleClient.CreateFromEnvironmentAsync(settings);
+ await ioTHubModuleClient.OpenAsync();
+ Console.WriteLine("IoT Hub module client initialized.");
+
+ // Register callback to be called when a message is received by the module
+ await ioTHubModuleClient.SetInputMessageHandlerAsync("input1", FileCopy, ioTHubModuleClient);
+ }
+ ```
+
+6. Remove the code for **PipeMessage method** and in its place, insert the code for **FileCopy**.
+
+ ```csharp
+ /// <summary>
+ /// This method is called whenever the module is sent a message from the IoT Edge Hub.
+ /// This method deserializes the file event, extracts the corresponding relative file path, and creates the absolute input file path using the relative file path and the InputFolderPath.
+ /// This method also forms the absolute output file path using the relative file path and the OutputFolderPath. It then copies the input file to output file and deletes the input file after the copy is complete.
+ /// </summary>
+ static async Task<MessageResponse> FileCopy(Message message, object userContext)
+ {
+ int counterValue = Interlocked.Increment(ref counter);
+
+ try
+ {
+ byte[] messageBytes = message.GetBytes();
+ string messageString = Encoding.UTF8.GetString(messageBytes);
+ Console.WriteLine($"Received message: {counterValue}, Body: [{messageString}]");
+
+ if (!string.IsNullOrEmpty(messageString))
+ {
+ var fileEvent = JsonConvert.DeserializeObject<FileEvent>(messageString);
+
+ string relativeFileName = fileEvent.ShareRelativeFilePath.Replace("\\", "/");
+ string inputFilePath = InputFolderPath + relativeFileName;
+ string outputFilePath = OutputFolderPath + relativeFileName;
+
+ if (File.Exists(inputFilePath))
+ {
+ Console.WriteLine($"Moving input file: {inputFilePath} to output file: {outputFilePath}");
+ var outputDir = Path.GetDirectoryName(outputFilePath);
+ if (!Directory.Exists(outputDir))
+ {
+ Directory.CreateDirectory(outputDir);
+ }
+
+ File.Copy(inputFilePath, outputFilePath, true);
+ Console.WriteLine($"Copied input file: {inputFilePath} to output file: {outputFilePath}");
+ File.Delete(inputFilePath);
+ Console.WriteLine($"Deleted input file: {inputFilePath}");
+ }
+ else
+ {
+ Console.WriteLine($"Skipping this event as input file doesn't exist: {inputFilePath}");
+ }
+ }
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine("Caught exception: {0}", ex.Message);
+ Console.WriteLine(ex.StackTrace);
+ }
+
+ Console.WriteLine($"Processed event.");
+ return MessageResponse.Completed;
+ }
+ ```
+
+7. Save this file.
+8. You can also [download an existing code sample](https://azure.microsoft.com/resources/samples/data-box-edge-csharp-modules/?cdn=disable) for this project. You can then validate the file that you saved against the **program.cs** file in this sample.
+
+## Build your IoT Edge solution
+
+In the previous section, you created an IoT Edge solution and added code to the FileCopyModule to copy files from the local share to the cloud share. Now you need to build the solution as a container image and push it to your container registry.
+
+1. In VS Code, go to **Terminal > New Terminal** to open a new Visual Studio Code integrated terminal.
+2. Sign in to Docker by entering the following command in the integrated terminal.
+
+ `docker login <ACR login server> -u <ACR username>`
+
+ Use the login server and username that you copied from your container registry.
+
+ ![Build and push IoT Edge solution](./media/azure-stack-edge-gpu-create-iot-edge-module/build-iot-edge-solution-1.png)
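+
+ For example, with the sample registry name used later in this article, the command might look like the following sketch (your values will differ; for an ACR admin account, the username defaults to the registry name):
+
+ ```powershell
+ docker login mycontreg2.azurecr.io -u mycontreg2
+ ```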
+
+3. When prompted for the password, supply the password. You can also retrieve the values for login server, username, and password from **Access keys** in your container registry in the Azure portal.
+
+4. Once the credentials are supplied, you can push your module image to your Azure container registry. In the VS Code Explorer, right-click the **module.json** file and select **Build and Push IoT Edge solution**.
+
+ ![Build and push IoT Edge solution 2](./media/azure-stack-edge-gpu-create-iot-edge-module/build-iot-edge-solution-2.png)
+
+ When you tell Visual Studio Code to build your solution, it runs two commands in the integrated terminal: `docker build` and `docker push`. These two commands build your code, containerize the CSharpModule.dll, and then push the code to the container registry that you specified when you initialized the solution.
+
+ You will be prompted to choose the module platform. Select *amd64* corresponding to Linux.
+
+ ![Select platform](./media/azure-stack-edge-gpu-create-iot-edge-module/select-platform.png)
+
+ > [!IMPORTANT]
+ > Only Linux modules are supported.
+
+ You may see the following warning that you can ignore:
+
+ *Program.cs(77,44): warning CS1998: This async method lacks 'await' operators and will run synchronously. Consider using the 'await' operator to await non-blocking API calls, or 'await Task.Run(...)' to do CPU-bound work on a background thread.*
+
+5. You can see the full container image address with tag in the VS Code integrated terminal. The image address is built from information that's in the module.json file with the format `<repository>:<version>-<platform>`. For this article, it should look like `mycontreg2.azurecr.io/filecopymodule:0.0.1-amd64`.
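+
+Under the hood, the **Build and Push IoT Edge solution** step invokes the Docker CLI. A minimal sketch of roughly equivalent commands, assuming the example image address above and the `Dockerfile.amd64` file that the C# module template typically generates (the paths are illustrative):
+
+```powershell
+docker build -t mycontreg2.azurecr.io/filecopymodule:0.0.1-amd64 -f ./modules/FileCopyModule/Dockerfile.amd64 ./modules/FileCopyModule
+docker push mycontreg2.azurecr.io/filecopymodule:0.0.1-amd64
+```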
+
+## Next steps
+
+To deploy and run this module on Azure Stack Edge Pro, see the steps in [Add a module](azure-stack-edge-j-series-deploy-configure-compute.md#add-a-module).<!--Update link?-->
databox-online Azure Stack Edge Gpu Create Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-kubernetes-cluster.md
Previously updated : 02/22/2021 Last updated : 03/08/2021 # Connect to and manage a Kubernetes cluster via kubectl on your Azure Stack Edge Pro GPU device
In this approach, you create a namespace and a user. You then associate the user
4. The config file should live in the `.kube` folder of your user profile on the local machine. Copy the file to that folder in your user profile.
- ![Location of config file on client](media/azure-stack-edge-j-series-create-kubernetes-cluster/location-config-file.png)
+ ![Location of config file on client](media/azure-stack-edge-gpu-create-kubernetes-cluster/location-config-file.png)
5. Associate the namespace with the user you created. Type:
You can now deploy your applications in the namespace, then view those applicati
To remove the Kubernetes cluster, you will need to remove the IoT Edge configuration.
-For detailed instructions, go to [Remove IoT Edge configuration](azure-stack-edge-j-series-manage-compute.md#remove-iot-edge-service).
+For detailed instructions, go to [Manage IoT Edge configuration](azure-stack-edge-gpu-manage-compute.md#manage-iot-edge-configuration).
## Next steps -- [Deploy a stateless application on your Azure Stack Edge Pro](azure-stack-edge-j-series-deploy-stateless-application-kubernetes.md).
+- [Deploy a stateless application on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-stateless-application-kubernetes.md).
databox-online Azure Stack Edge Gpu Deploy Add Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-add-shares.md
+
+ Title: Tutorial to transfer data to shares with Azure Stack Edge Pro GPU | Microsoft Docs
+description: Learn how to add and connect to shares on Azure Stack Edge Pro GPU device.
++++++ Last updated : 02/22/2021+
+Customer intent: As an IT admin, I need to understand how to add and connect to shares on Azure Stack Edge Pro so I can use it to transfer data to Azure.
+
+# Tutorial: Transfer data via shares with Azure Stack Edge Pro GPU
++
+This tutorial describes how to add and connect to shares on your Azure Stack Edge Pro device. After you've added the shares, Azure Stack Edge Pro can transfer data to Azure.
+
+This procedure can take around 10 minutes to complete.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Add a share
+> * Connect to the share
+
+## Prerequisites
+
+Before you add shares to Azure Stack Edge Pro, make sure that:
+
+* You've installed your physical device as described in [Install Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-install.md).
+
+* You've activated the physical device as described in [Activate your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
+
+## Add a share
+
+To create a share, do the following procedure:
+
+1. In the [Azure portal](https://portal.azure.com/), select your Azure Stack Edge resource and then go to **Overview**. Your device should be online. Select **Cloud storage gateway**.
+
+ ![Device online](./media/azure-stack-edge-gpu-deploy-add-shares/device-online-1.png)
+
+2. Select **+ Add share** on the device command bar.
+
+ ![Add a share](./media/azure-stack-edge-gpu-deploy-add-shares/select-add-share-1.png)
+
+3. In the **Add share** pane, follow these steps:
+
+ a. In the **Name** box, provide a unique name for your share.
+ The share name can have only letters, numerals, and hyphens. It must contain between 3 and 63 characters and begin with a letter or a numeral. Hyphens must be preceded and followed by a letter or a numeral.
+
+ b. Select a **Type** for the share.
+ The type can be **SMB** or **NFS**, with SMB being the default. SMB is the standard for Windows clients, and NFS is used for Linux clients.
+ Depending upon whether you choose SMB or NFS shares, the rest of the options vary slightly.
+
+ c. Provide a storage account where the share will reside.
+
+ d. In the **Storage service** drop-down list, select **Block Blob**, **Page Blob**, or **Files**.
+ The type of service you select depends on which format you want the data to use in Azure. In this example, because we want to store the data as block blobs in Azure, we select **Block Blob**. If you select **Page Blob**, make sure that your data is 512 bytes aligned. For example, a VHDX is always 512 bytes aligned.
+
+ > [!IMPORTANT]
+ > Make sure that the Azure Storage account that you use does not have immutability policies set on it if you are using it with an Azure Stack Edge Pro or Data Box Gateway device. For more information, see [Set and manage immutability policies for blob storage](../storage/blobs/storage-blob-immutability-policies-manage.md).
+
+ e. Create a new blob container or use an existing one from the dropdown list. If creating a blob container, provide a container name. If a container doesn't already exist, it's created in the storage account with the newly created share name.
+
+ f. Depending on whether you've created an SMB share or an NFS share, do one of the following steps:
+
+ - **SMB share**: Under **All privilege local user**, select **Create new** or **Use existing**. If you create a new local user, enter a username and password, and then confirm the password. This action assigns permissions to the local user. Modification of share-level permissions is currently not supported. If you select the **Allow only read operations** check box for this share data, you can specify read-only users.
+
+ ![Add SMB share](./media/azure-stack-edge-gpu-deploy-add-shares/add-share-smb-1.png)
+
+ - **NFS share**: Enter the IP addresses of allowed clients that can access the share.
+
+ ![Add NFS share](./media/azure-stack-edge-gpu-deploy-add-shares/add-share-nfs-1.png)
+
+4. Select **Create** to create the share.
+
+ You're notified that the share creation is in progress. After the share is created with the specified settings, the **Shares** tile updates to reflect the new share.
+
+
+## Connect to the share
+
+You can now connect to one or more of the shares that you created in the last step. Depending upon whether you have an SMB or an NFS share, the steps can vary.
+
+The first step is to ensure that the device name can be resolved when you are using an SMB or NFS share.
+
+### Modify host file for name resolution
+
+You will now add the IP of the device and the device friendly name that you defined in the local web UI of the device to:
+
+- The host file on the client, OR,
+- The host file on the DNS server
+
+> [!IMPORTANT]
+ > We recommend that you modify the host file on the DNS server for device name resolution.
+
+On your Windows client that you are using to connect to the device, take the following steps:
+
+1. Start **Notepad** as an administrator, and then open the **hosts** file located at `C:\Windows\System32\Drivers\etc`.
+
+ ![Windows Explorer hosts file](media/azure-stack-edge-gpu-deploy-add-shares/client-hosts-file-1.png)
++
+2. Add the following entry to your **hosts** file, replacing the placeholders with the appropriate values for your device:
+
+ ```
+ <Device IP> <device friendly name>
+ ```
+ You can get the device IP from the **Network** page and the device friendly name from the **Device** page in the local web UI. The following screenshot of the hosts file shows the entry:
+
+ ![Windows Explorer hosts file 2](media/azure-stack-edge-gpu-deploy-add-shares/client-hosts-file-2.png)
+
+### Connect to an SMB share
+
+On your Windows Server client connected to your Azure Stack Edge Pro device, connect to an SMB share by following these steps:
++
+1. In a command window, type:
+
+ `net use \\<Device name>\<share name> /u:<user name for the share>`
+
+ > [!NOTE]
+ > You can connect to an SMB share only with the device name and not via the device IP address.
+
+2. When you're prompted to do so, enter the password for the share.
+ The sample output of this command is presented here.
+
+ ```powershell
+ Microsoft Windows [Version 10.0.18363.476]
+ (c) 2017 Microsoft Corporation. All rights reserved.
+
+ C: \Users\AzureStackEdgeUser>net use \\myasetest1\myasesmbshare1 /u:aseuser
+ Enter the password for 'aseuser' to connect to 'myasetest1':
+ The command completed successfully.
+
+ C: \Users\AzureStackEdgeUser>
+ ```
+
+3. On your keyboard, select Windows + R.
+
+4. In the **Run** window, specify the `\\<device name>`, and then select **OK**.
+
+ ![Windows Run dialog](media/azure-stack-edge-gpu-deploy-add-shares/run-window-1.png)
+
+ File Explorer opens. You should now be able to view the shares that you created as folders. In File Explorer, double-click a share (folder) to view the content.
+
+ ![Connect to SMB share](./media/azure-stack-edge-gpu-deploy-add-shares/file-explorer-smbshare-1.png)
+
+ The data is written to these shares as it is generated, and the device pushes the data to the cloud.
+
+### Connect to an NFS share
+
+On your Linux client connected to your Azure Stack Edge Pro device, do the following procedure:
+
+1. Make sure that the client has an NFSv4 client installed. To install the NFS client, use the following command:
+
+ `sudo apt-get install nfs-common`
+
+ For more information, go to [Install NFSv4 client](https://help.ubuntu.com/community/NFSv4Howto).
+
+2. After the NFS client is installed, mount the NFS share that you created on your Azure Stack Edge Pro device by using the following command:
+
+ `sudo mount -t nfs -o sec=sys,resvport <device IP>:/<NFS share on device> /home/username/<Folder on local Linux computer>`
+
+ You can get the device IP from the **Network** page of your local web UI.
+
+ > [!IMPORTANT]
+ > Using the `sync` option when mounting shares improves the transfer rates of large files.
+ > Before you mount the share, make sure that the directories that will act as mountpoints on your local computer are already created. These directories should not contain any files or subfolders.
+
+ The following example shows how to connect via NFS to a share on your Azure Stack Edge Pro device. The device IP is `10.10.10.60`. The share `mylinuxshare2` is mounted on the ubuntuVM. The share mount point is `/home/azurestackedgeubuntuhost/edge`.
+
+ `sudo mount -t nfs -o sec=sys,resvport 10.10.10.60:/mylinuxshare2 /home/azurestackedgeubuntuhost/Edge`
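+
+ If you want to include the `sync` option called out in the important note above, the command might look like this (a sketch using the same example values):
+
+ `sudo mount -t nfs -o sec=sys,resvport,sync 10.10.10.60:/mylinuxshare2 /home/azurestackedgeubuntuhost/Edge`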
+
+> [!NOTE]
+> The following caveats are applicable to this release:
+> - After a file is created in the share, renaming of the file isn't supported.
+> - Deleting a file from a share does not delete the entry in the Azure Storage account.
+> - When using `rsync` to copy over NFS, use the `--inplace` flag.
+
+## Next steps
+
+In this tutorial, you learned about the following Azure Stack Edge Pro topics:
+
+> [!div class="checklist"]
+> * Add a share
+> * Connect to share
+
+To learn how to transform your data by using Azure Stack Edge Pro, advance to the next tutorial:
+
+> [!div class="nextstepaction"]
+> [Transform data with Azure Stack Edge Pro](./azure-stack-edge-j-series-deploy-configure-compute.md)
databox-online Azure Stack Edge Gpu Deploy Add Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-add-storage-accounts.md
+
+ Title: Tutorial to transfer data to storage account with Azure Stack Edge Pro GPU| Microsoft Docs
+description: Learn how to add and connect to local and Edge storage accounts on Azure Stack Edge Pro GPU device.
++++++ Last updated : 02/22/2021+
+Customer intent: As an IT admin, I need to understand how to add and connect to storage accounts on Azure Stack Edge Pro so I can use it to transfer data to Azure.
+
+# Tutorial: Transfer data via storage accounts with Azure Stack Edge Pro GPU
++
+This tutorial describes how to add and connect to storage accounts on your Azure Stack Edge Pro device. After you've added the storage accounts, Azure Stack Edge Pro can transfer data to Azure.
+
+This procedure can take around 30 minutes to complete.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Add a storage account
+> * Connect to the storage account
+
+
+## Prerequisites
+
+Before you add storage accounts to Azure Stack Edge Pro, make sure that:
+
+- You've installed your physical device as described in [Install Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-install.md).
+
+- You've activated the physical device as described in [Activate your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
++
+## Add an Edge storage account
+
+To create an Edge storage account, do the following procedure:
+++
+## Connect to the Edge storage account
+
+You can now connect to Edge storage REST APIs over *http* or *https*.
+
+- *Https* is the secure and recommended way.
+- *Http* is used when connecting over trusted networks.
+
+## Connect via http
+
+Connection to Edge storage REST APIs over http requires the following steps:
+
+- Add the Azure consistent service VIP and blob service endpoint to the remote host
+- Verify the connection
+
+Each of these steps is described in the following sections.
+
+### Add device IP address and blob service endpoint to the remote client
+++
+### Verify connection
+
+To verify the connection, you typically need the following information (it may vary), which you gathered in the previous step:
+
+- Storage account name.
+- Storage account access key.
+- Blob service endpoint.
+
+You already have the storage account name and the blob service endpoint. You can get the storage account access key by connecting to the device via the Azure Resource Manager using an Azure PowerShell client.
+
+Follow the steps in [Connect to the device via Azure Resource Manager](azure-stack-edge-gpu-connect-resource-manager.md). Once you have signed into the local device APIs via the Azure Resource Manager, get the list of storage accounts on the device. Run the following cmdlet:
+
+`Get-AzureRmStorageAccount`
+
+From the list of the storage accounts on the device, identify the storage account for which you need the access key. Note the storage account name and resource group.
+
+A sample output is shown below:
+
+```azurepowershell
+PS C:\windows\system32> Get-AzureRmStorageAccount
+
+StorageAccountName ResourceGroupName Location SkuName     Kind    AccessTier CreationTime          ProvisioningState EnableHttpsTrafficOnly
+------------------ ----------------- -------- -------     ----    ---------- ------------          ----------------- ----------------------
+myasetiered1       myasetiered1      DBELocal StandardLRS Storage            11/27/2019 7:10:12 PM Succeeded         False
+```
+
+To get the access key, run the following cmdlet:
+
+`Get-AzureRmStorageAccountKey`
+
+A sample output is shown below:
+
+```azurepowershell
+PS C:\windows\system32> Get-AzureRmStorageAccountKey
+
+cmdlet Get-AzureRmStorageAccountKey at command pipeline position 1
+Supply values for the following parameters:
+(Type !? for Help.)
+ResourceGroupName: myasetiered1
+Name: myasetiered1
+
+KeyName Value Permissions
+------- ---------------------------------------------------------------------------------------- -----------
+key1 Jb2brrNjRNmArFcDWvL4ufspJjlo+Nie1uh8Mp4YUOVQNbirA1uxEdHeV8Z0dXbsG7emejFWI9hxyR1T93ZncA== Full
+key2 6VANuHzHcJV04EFeyPiWRsFWnHPkgmX1+a3bt5qOQ2qIzohyskIF/2gfNMqp9rlNC/w+mBqQ2mI42QgoJSmavg== Full
+```
+
+Copy and save this key. You will use this key to verify the connection using Azure Storage Explorer.
+
+To verify that the connection is successfully established, use Storage Explorer to attach to an external storage account. If you do not have Storage Explorer, [download Storage Explorer](https://go.microsoft.com/fwlink/?LinkId=708343&clcid=0x409).
+++
+## Connect via https
+
+Connection to Azure Blob storage REST APIs over https requires the following steps:
+
+- Get your blob endpoint certificate
+- Import the certificate on the client or remote host
+- Add the device IP and blob service endpoint to the client or remote host
+- Configure and verify the connection
+
+Each of these steps is described in the following sections.
+
+### Get certificate
+
+Accessing Blob storage over HTTPS requires an SSL certificate for the device. You will also upload this certificate to your Azure Stack Edge Pro device as a *.pfx* file with a private key attached to it. For more information on how to create (for test and dev purposes only) and upload these certificates to your Azure Stack Edge Pro device, go to:
+
+- [Create the blob endpoint certificate](azure-stack-edge-gpu-manage-certificates.md#create-certificates-optional).
+- [Upload the blob endpoint certificate](azure-stack-edge-gpu-manage-certificates.md#upload-certificates).
+- [Import certificates on the client accessing the device](azure-stack-edge-gpu-manage-certificates.md#import-certificates-on-the-client-accessing-the-device).
+
+### Import certificate
+
+If you use Azure Storage Explorer to connect to the storage accounts on the device, you also need to import the certificate into Storage Explorer in PEM format. In a Windows environment, a Base-64 encoded *.cer* file is the same as the PEM format.
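+
+For example, if your blob endpoint certificate is a DER-encoded *.cer* file, you can typically convert it to Base-64 (PEM) format with `certutil`. This is a sketch; the file names are placeholders:
+
+```powershell
+certutil.exe -encode myblobcert.cer myblobcert.pem
+```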
+
+Take the following steps to import the certificates on Azure Storage Explorer:
+
+1. Make sure that Azure Storage Explorer is targeting the Azure Stack APIs. Go to **Edit > Target Azure Stack APIs**. When prompted, restart Storage Explorer for the change to take effect.
+
+2. To import SSL certificates, go to **Edit > SSL certificates > Import certificates**.
+
+
+ ![Import cert into Storage Explorer](./media/azure-stack-edge-gpu-deploy-add-storage-accounts/import-cert-storage-explorer-1.png)
+
+3. Navigate to and provide the signing chain and blob certificates. Both the signing chain and the blob certificate should be in PEM format, which is the same as the Base64-encoded format on Windows systems. You will be notified that the certificates were successfully imported.
++
+### Add device IP address and blob service endpoint
+
+Follow the same steps to [add device IP address and blob service endpoint when connecting over *http*](#add-device-ip-address-and-blob-service-endpoint-to-the-remote-client).
+
+### Configure and verify connection
+
+Follow the steps to [Configure and verify connection that you used while connecting over *http*](#verify-connection). The only difference is that you should leave the *Use http option* unchecked.
+
+## Next steps
+
+In this tutorial, you learned about the following Azure Stack Edge Pro topics:
+
+> [!div class="checklist"]
+> * Add a storage account
+> * Connect to a storage account
+
+To learn how to transform your data by using Azure Stack Edge Pro, advance to the next tutorial:
+
+> [!div class="nextstepaction"]
+> [Transform data with Azure Stack Edge Pro](./azure-stack-edge-j-series-deploy-configure-compute.md)
databox-online Azure Stack Edge Gpu Deploy Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-arc-data-controller.md
Previously updated : 03/05/2021 Last updated : 03/08/2021 # Deploy Azure Data Services on your Azure Stack Edge Pro GPU device
Create a new, dedicated namespace where you will deploy the Data Controller. You
1. The config file should live in the `.kube` folder of your user profile on the local machine. Copy the file to that folder in your user profile.
- ![Location of config file on client](media/azure-stack-edge-j-series-create-kubernetes-cluster/location-config-file.png)
+ ![Location of config file on client](media/azure-stack-edge-gpu-create-kubernetes-cluster/location-config-file.png)
1. Grant the user access to the namespace that you created. Type: `Grant-HcsKubernetesNamespaceAccess -Namespace <Name of namespace> -UserName <User name>`
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
Previously updated : 01/07/2021 Last updated : 03/08/2021 Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
To configure a client to access Kubernetes cluster, you will need the Kubernetes
1. In the local web UI of your device, go to **Devices** page. 2. Under the **Device endpoints**, copy the **Kubernetes API service** endpoint. This endpoint is a string in the following format: `https://compute.<device-name>.<DNS-domain>[Kubernetes-cluster-IP-address]`.
- ![Device page in local UI](./media/azure-stack-edge-j-series-create-kubernetes-cluster/device-kubernetes-endpoint-1.png)
+ ![Device page in local UI](./media/azure-stack-edge-gpu-create-kubernetes-cluster/device-kubernetes-endpoint-1.png)
3. Save the endpoint string. You will use this endpoint string later when configuring a client to access the Kubernetes cluster via kubectl.
databox-online Azure Stack Edge Gpu Deploy Stateful Application Static Provision Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateful-application-static-provision-kubernetes.md
Previously updated : 02/22/2021 Last updated : 03/09/2021
This article shows you how to deploy a single-instance stateful application in Kubernetes using a PersistentVolume (PV) and a deployment. The deployment uses `kubectl` commands on an existing Kubernetes cluster and deploys the MySQL application.
-This procedure is intended for those who have reviewed the [Kubernetes storage on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-storage.md) and are familiar with the concepts of [Kubernetes storage](https://kubernetes.io/docs/concepts/storage/).
+This procedure is intended for those who have reviewed the [Kubernetes storage on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-storage.md) and are familiar with the concepts of [Kubernetes storage](https://kubernetes.io/docs/concepts/storage/).
Azure Stack Edge Pro also supports running Azure SQL Edge containers and these can be deployed in a similar way as detailed here for MySQL. For more information, see [Azure SQL Edge](../azure-sql-edge/overview.md).
You are ready to deploy a stateful application on your Azure Stack Edge Pro devi
To statically provision a PV, you need to create a share on your device. Follow these steps to provision a PV against your SMB share. > [!NOTE]
-> The specific example used in this how-to article does not work with NFS shares. In general, NFS shares can be provisioned on your Azure Stack Edge device with non-database applications.
+> - The specific example used in this how-to article does not work with NFS shares. In general, NFS shares can be provisioned on your Azure Stack Edge device with non-database applications.
+> - To deploy stateful applications that use storage volumes to provide persistent storage, we recommend that you use `StatefulSet`. This example uses `Deployment` with only one replica and is suitable for development and testing.
1. Choose whether you want to create an Edge share or an Edge local share. Follow the instructions in [Add a share](azure-stack-edge-manage-shares.md#add-a-share) to create a share. Make sure to select the check box for **Use the share with Edge compute**.
The PV is no longer bound to the PVC as the PVC was deleted. As the PV was provi
## Next steps To understand how to dynamically provision storage, see
-[Deploy a stateful application via dynamic provisioning on an Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-stateful-application-dynamic-provision-kubernetes.md)
+[Deploy a stateful application via dynamic provisioning on an Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-stateful-application-dynamic-provision-kubernetes.md)
databox-online Azure Stack Edge Gpu Deploy Stateless Application Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateless-application-kubernetes.md
+
+ Title: Deploy Kubernetes stateless application on Azure Stack Edge Pro GPU device using kubectl| Microsoft Docs
+description: Describes how to create and manage a Kubernetes stateless application deployment using kubectl on a Microsoft Azure Stack Edge Pro device.
++++++ Last updated : 03/05/2021+++
+# Deploy a Kubernetes stateless application via kubectl on your Azure Stack Edge Pro GPU device
++
+This article describes how to deploy a stateless application using kubectl commands on an existing Kubernetes cluster. This article also walks you through the process of creating and setting up pods in your stateless application.
+
+## Prerequisites
+
+Before you can create a Kubernetes cluster and use the `kubectl` command-line tool, you need to ensure that:
+
+- You have sign-in credentials to a 1-node Azure Stack Edge Pro device.
+
+- Windows PowerShell 5.0 or later is installed on a Windows client system to access the Azure Stack Edge Pro device. You can also use any other client with a supported operating system. This article describes the procedure for a Windows client. To download the latest version of Windows PowerShell, go to [Installing Windows PowerShell](/powershell/scripting/install/installing-windows-powershell).
+
+- Compute is enabled on the Azure Stack Edge Pro device. To enable compute, go to the **Compute** page in the local UI of the device. Then select a network interface that you want to enable for compute. Select **Enable**. Enabling compute results in the creation of a virtual switch on your device on that network interface. For more information, see [Enable compute network on your Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
+
+- Your Azure Stack Edge Pro device has a Kubernetes cluster server running version v1.9 or later. For more information, see [Create and manage a Kubernetes cluster on Microsoft Azure Stack Edge Pro device](azure-stack-edge-gpu-create-kubernetes-cluster.md).
+
+- You have installed `kubectl`.
+
+## Deploy a stateless application
+
+Before we begin, you should have:
+
+1. Created a Kubernetes cluster.
+2. Set up a namespace.
+3. Associated a user with the namespace.
+4. Saved the user configuration to `C:\Users\<username>\.kube`.
+5. Installed `kubectl`.
+
+Now you can begin running and managing stateless application deployments on an Azure Stack Edge Pro device. Before you start using `kubectl`, you need to verify that you have the correct version of `kubectl`.
+
+### Verify you have the correct version of kubectl and set up configuration
+
+To check the version of `kubectl`:
+
+1. Verify that the version of `kubectl` is greater than or equal to 1.9:
+
+ ```powershell
+ kubectl version
+ ```
+
+ An example of the output is shown below:
+
+ ```powershell
+ PS C:\WINDOWS\system32> C:\windows\system32\kubectl.exe version
+ Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
+ Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
+ ```
+
+ In this case, the client version of kubectl is v1.15.2, which is compatible, so you can continue.
+
+2. Get a list of the pods running on your Kubernetes cluster. A pod is an application container, or process, running on your Kubernetes cluster.
+
+ ```powershell
+ kubectl get pods -n <namespace-string>
+ ```
+
+ An example of command usage is shown below:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get pods -n "test1"
+ No resources found.
+ PS C:\WINDOWS\system32>
+ ```
+
+ The output should state that no resources (pods) are found because there are no applications running on your cluster.
+
+ The command will populate the directory structure of "C:\Users\\&lt;username&gt;\\.kube\" with configuration files. The kubectl command-line tool will use these files to create and manage stateless applications on your Kubernetes cluster.
+
+3. Manually check the directory structure of "C:\Users\\&lt;username&gt;\\.kube\" to verify *kubectl* has populated it with the following subfolders:
+
+ ```powershell
+ PS C:\Users\username> ls .kube
+
+
+ Directory: C:\Users\user\.kube
+
+ Mode                LastWriteTime         Length Name
+ ----                -------------         ------ ----
+ d-----        2/18/2020  11:05 AM                cache
+ d-----        2/18/2020  11:04 AM                http-cache
+ -a----        2/18/2020  10:41 AM           5377 config
+ ```
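+
+ Optionally, you can confirm that kubectl is reading this configuration by viewing the merged config (a standard kubectl command; sensitive fields are redacted in its output):
+
+ ```powershell
+ kubectl config view
+ ```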
+
+> [!NOTE]
+> To view a list of all kubectl commands, type `kubectl --help`.
+
+### Create a stateless application using a deployment
+
+Now that you've verified that the kubectl command-line version is correct and you have the required configuration files, you can create a stateless application deployment.
+
+A pod is the basic execution unit of a Kubernetes application, the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod also encapsulates storage resources, a unique network IP, and options that govern how the container(s) should run.
+
+The type of stateless application that you create is an nginx web server deployment.
+
+All kubectl commands you use to create and manage stateless application deployments need to specify the namespace associated with the configuration. You created the namespace while connected to the cluster on the Azure Stack Edge Pro device in the [Create and manage a Kubernetes cluster on Microsoft Azure Stack Edge Pro device](azure-stack-edge-gpu-create-kubernetes-cluster.md) tutorial with `New-HcsKubernetesNamespace`.
+
+To specify the namespace in a kubectl command, use `kubectl <command> -n <namespace-string>`.
+
+Follow these steps to create an nginx deployment:
+
+1. Apply a stateless application by creating a Kubernetes deployment object:
+
+ ```powershell
+ kubectl apply -f <yaml-file> -n <namespace-string>
+ ```
+
+ In this example, the path to the application YAML file points to an external source.
+
+ Here is a sample use of the command and its output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl apply -f https://k8s.io/examples/application/deployment.yaml -n "test1"
+
+ deployment.apps/nginx-deployment created
+ ```
+
+ Alternatively, you can save the following YAML to your local machine and substitute the path and file name in the *-f* parameter. For instance, "C:\Kubernetes\deployment.yaml". The configuration for the application deployment would be:
+
+ ```yaml
+ apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
+ kind: Deployment
+ metadata:
+ name: nginx-deployment
+ spec:
+ selector:
+ matchLabels:
+ app: nginx
+ replicas: 2 # tells deployment to run 2 pods matching the template
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:1.7.9
+ ports:
+ - containerPort: 80
+ ```
+
+ This command creates a default nginx-deployment that has two pods to run your application.
+
+2. Get the description of the Kubernetes nginx-deployment you created:
+
+ ```powershell
+ kubectl describe deployment nginx-deployment -n <namespace-string>
+ ```
+
+ A sample use of the command, with output, is shown below:
+
+ ```powershell
+ PS C:\Users\user> kubectl describe deployment nginx-deployment -n "test1"
+
+ Name: nginx-deployment
+ Namespace: test1
+ CreationTimestamp: Tue, 18 Feb 2020 13:35:29 -0800
+ Labels: <none>
+ Annotations: deployment.kubernetes.io/revision: 1
+ kubectl.kubernetes.io/last-applied-configuration:
+ {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"test1"},"spec":{"repl...
+ Selector: app=nginx
+ Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
+ StrategyType: RollingUpdate
+ MinReadySeconds: 0
+ RollingUpdateStrategy: 25% max unavailable, 25% max surge
+ Pod Template:
+ Labels: app=nginx
+ Containers:
+ nginx:
+ Image: nginx:1.7.9
+ Port: 80/TCP
+ Host Port: 0/TCP
+ Environment: <none>
+ Mounts: <none>
+ Volumes: <none>
+ Conditions:
+ Type Status Reason
+ ----           ------  ------
+ Available True MinimumReplicasAvailable
+ Progressing True NewReplicaSetAvailable
+ OldReplicaSets: <none>
+ NewReplicaSet: nginx-deployment-5754944d6c (2/2 replicas created)
+ Events:
+ Type Reason Age From Message
+ ----    ------             ----   ----                   -------
+ Normal ScalingReplicaSet 2m22s deployment-controller Scaled up replica set nginx-deployment-5754944d6c to 2
+ ```
+
+ For the *replicas* setting, you will see:
+
+ ```powershell
+ Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
+ ```
+
+ The *replicas* setting indicates that your deployment specification requires two pods, and that those pods were created and updated and are ready for you to use.
+
+ > [!NOTE]
+ > A replica set replaces pods that are deleted or terminated for any reason, such as in the case of device node failure or a disruptive device upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.
+
+3. To list the pods in your deployment:
+
+ ```powershell
+ kubectl get pods -l app=nginx -n <namespace-string>
+ ```
+
+ A sample use of the command, with output, is shown below:
+
+ ```powershell
+ PS C:\Users\user> kubectl get pods -l app=nginx -n "test1"
+
+ NAME READY STATUS RESTARTS AGE
+ nginx-deployment-5754944d6c-7wqjd 1/1 Running 0 3m13s
+ nginx-deployment-5754944d6c-nfj2h 1/1 Running 0 3m13s
+ ```
+
+ The output verifies that we have two pods with unique names that we can reference using kubectl.
+
+4. To view information on an individual pod in your deployment:
+
+ ```powershell
+ kubectl describe pod <podname-string> -n <namespace-string>
+ ```
+
+ A sample use of the command, with output, is shown below:
+
+ ```powershell
+ PS C:\Users\user> kubectl describe pod "nginx-deployment-5754944d6c-7wqjd" -n "test1"
+
+ Name: nginx-deployment-5754944d6c-7wqjd
+ Namespace: test1
+ Priority: 0
+ Node: k8s-1d9qhq2cl-n1/10.128.46.184
+ Start Time: Tue, 18 Feb 2020 13:35:29 -0800
+ Labels: app=nginx
+ pod-template-hash=5754944d6c
+ Annotations: <none>
+ Status: Running
+ IP: 172.17.246.200
+ Controlled By: ReplicaSet/nginx-deployment-5754944d6c
+ Containers:
+ nginx:
+ Container ID: docker://280b0f76bfdc14cde481dc4f2b8180cf5fbfc90a084042f679d499f863c66979
+ Image: nginx:1.7.9
+ Image ID: docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
+ Port: 80/TCP
+ Host Port: 0/TCP
+ State: Running
+ Started: Tue, 18 Feb 2020 13:35:35 -0800
+ Ready: True
+ Restart Count: 0
+ Environment: <none>
+ Mounts:
+ /var/run/secrets/kubernetes.io/serviceaccount from default-token-8gksw (ro)
+ Conditions:
+ Type Status
+ Initialized True
+ Ready True
+ ContainersReady True
+ PodScheduled True
+ Volumes:
+ default-token-8gksw:
+ Type: Secret (a volume populated by a Secret)
+ SecretName: default-token-8gksw
+ Optional: false
+ QoS Class: BestEffort
+ Node-Selectors: <none>
+ Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
+ node.kubernetes.io/unreachable:NoExecute for 300s
+ Events:
+ Type Reason Age From Message
+ ----     ------     ----   ----                       -------
+ Normal Scheduled 4m58s default-scheduler Successfully assigned test1/nginx-deployment-5754944d6c-7wqjd to k8s-1d9qhq2cl-n1
+ Normal Pulling 4m57s kubelet, k8s-1d9qhq2cl-n1 Pulling image "nginx:1.7.9"
+ Normal Pulled 4m52s kubelet, k8s-1d9qhq2cl-n1 Successfully pulled image "nginx:1.7.9"
+ Normal Created 4m52s kubelet, k8s-1d9qhq2cl-n1 Created container nginx
+ Normal Started 4m52s kubelet, k8s-1d9qhq2cl-n1 Started container nginx
+ ```
+
+### Rescale the application deployment by increasing the replica count
+
+Each pod is meant to run a single instance of a given application. If you want to scale your application horizontally to run multiple instances, you can increase the number of pods, one for each instance. In Kubernetes, this is referred to as replication.
+You can increase the number of pods in your application deployment by applying a new YAML file that changes the *replicas* setting from 2 to 4. To increase the number of pods from 2 to 4:
+
+```powershell
+PS C:\WINDOWS\system32> kubectl apply -f https://k8s.io/examples/application/deployment-scale.yaml -n "test1"
+```
+
+Alternatively, you can save the following YAML on your local machine and substitute the path and file name for the *-f* parameter of `kubectl apply`. For instance, "C:\Kubernetes\deployment-scale.yaml". The configuration for the application deployment scale would be:
+
+```yaml
+apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
+kind: Deployment
+metadata:
+ name: nginx-deployment
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ replicas: 4 # Update the replicas from 2 to 4
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:1.8
+ ports:
+ - containerPort: 80
+```
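+
+As an alternative to applying an updated YAML file, the same rescale can typically be done imperatively with `kubectl scale` (a sketch; the namespace matches the example used above):
+
+```powershell
+kubectl scale deployment nginx-deployment --replicas=4 -n "test1"
+```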
+
+To verify that the deployment has four pods:
+
+```powershell
+kubectl get pods -l app=nginx
+```
+
+Example output for a rescaling deployment from two to four pods is shown below:
+
+```powershell
+PS C:\WINDOWS\system32> kubectl get pods -l app=nginx
+
+NAME READY STATUS RESTARTS AGE
+nginx-deployment-148880595-4zdqq 1/1 Running 0 25s
+nginx-deployment-148880595-6zgi1 1/1 Running 0 25s
+nginx-deployment-148880595-fxcez 1/1 Running 0 2m
+nginx-deployment-148880595-rwovn 1/1 Running 0 2m
+```
+
+As you can see from the output, you now have four pods in your deployment that can run your application.
+
+### Delete a Deployment
+
+To delete the deployment, including all the pods, you need to run `kubectl delete deployment` specifying the name of the deployment *nginx-deployment* and the namespace name. To delete the deployment:
+
+ ```powershell
+ kubectl delete deployment nginx-deployment -n <namespace-string>
+ ```
+
+An example of command usage, with output, is shown below:
+
+```powershell
+PS C:\Users\user> kubectl delete deployment nginx-deployment -n "test1"
+deployment.extensions "nginx-deployment" deleted
+```
+
+## Next steps
+
+[Kubernetes Overview](azure-stack-edge-gpu-kubernetes-overview.md)
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-cli-python.md
+
+ Title: Deploy VMs on your Azure Stack Edge Pro GPU device via Azure CLI and Python
+description: Describes how to create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device using Azure CLI and Python.
++++++ Last updated : 03/04/2021+
+#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
++
+# Deploy VMs on your Azure Stack Edge Pro GPU device using Azure CLI and Python
+++
+This tutorial describes how to create and manage a VM on your Azure Stack Edge Pro device by using the Azure Command-Line Interface (CLI) and Python.
+
+## VM deployment workflow
+
+The deployment workflow is illustrated in the following diagram.
+
+![VM deployment workflow](media/azure-stack-edge-gpu-deploy-virtual-machine-powershell/vm-workflow-r.svg)
+
+The high-level summary of the deployment workflow is as follows:
+
+1. Connect to Azure Resource Manager
+2. Create a resource group
+3. Create a storage account
+4. Add blob URI to hosts file
+5. Install certificates
+6. Upload a VHD
+7. Create managed disks from the VHD
+8. Create a VM image from the image managed disk
+9. Create VM with previously created resources
+10. Create a VNet
+11. Create a VNIC using the VNet subnet ID
+
+For a detailed explanation of the workflow diagram, see [Deploy VMs on your Azure Stack Edge Pro device using Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md). For information on how to connect to Azure Resource Manager, see [Connect to Azure Resource Manager using Azure PowerShell](azure-stack-edge-gpu-connect-resource-manager.md).
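+
+When you use Azure CLI against the device, the CLI must target the device's local Azure Resource Manager endpoint rather than public Azure. A minimal sketch of how that registration typically looks, assuming the `management.<appliance name>.<DNS domain>` endpoint pattern used in the hosts-file step below (the environment name `appliance` is a placeholder):
+
+```powershell
+az cloud register -n appliance --endpoint-resource-manager "https://management.<appliance name>.<DNS domain>"
+az cloud set -n appliance
+```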
+
+## Prerequisites
+
+Before you begin creating and managing a VM on your Azure Stack Edge Pro device by using Azure CLI and Python, make sure that you've completed the prerequisites listed in the following steps:
+
+1. You completed the network settings on your Azure Stack Edge Pro device as described in [Step 1: Configure Azure Stack Edge Pro device](azure-stack-edge-gpu-connect-resource-manager.md#step-1-configure-azure-stack-edge-pro-device).
+
+2. Enabled a network interface for compute. This network interface IP is used to create a virtual switch for the VM deployment. The following steps walk you through the process:
+
+ 1. Go to **Compute**. Select the network interface that you will use to create a virtual switch.
+
+ > [!IMPORTANT]
+ > You can only configure one port for compute.
+
+ 2. Enable compute on the network interface. Azure Stack Edge Pro creates and manages a virtual switch corresponding to that network interface.
+
+ <!--If you decide to use another network interface for compute, make sure that you:
+
+ - Delete all the VMs that you have deployed using Azure Resource Manager.
+
+ - Delete all virtual network interfaces and the virtual network associated with this network interface.
+
+ - You can now enable another network interface for compute.-->
+
+3. You created and installed all the certificates on your Azure Stack Edge Pro device and in the trusted store of your client. Follow the procedure described in [Step 2: Create and install certificates](azure-stack-edge-gpu-connect-resource-manager.md#step-2-create-and-install-certificates).
+
+4. You created a Base-64 encoded *.cer* certificate (PEM format) for your Azure Stack Edge Pro device. That certificate is already uploaded as the signing chain on the device and installed in the trusted root store on your client. This certificate is also required in *.pem* format for Python to work on this client.
+
+ Convert this certificate to `pem` format by using the `certutil` command. You must run this command in the directory that contains your certificate.
+
+ ```powershell
+ certutil.exe <SourceCertificateName.cer> <DestinationCertificateName.pem>
+ ```
+ The following shows sample command usage:
+
+ ```powershell
+ PS C:\Certificates> certutil.exe -encode aze-root.cer aze-root.pem
+ Input Length = 2150
+ Output Length = 3014
+ CertUtil: -encode command completed successfully.
+ PS C:\Certificates>
+ ```
+ You will also add this `pem` to the Python store later.
+
+5. You assigned the device IP on the **Network** page in the local web UI of the device. Add this IP to:
+
+ - The host file on the client, OR,
+ - The DNS server configuration
+
+ > [!IMPORTANT]
+ > We recommend that you modify the DNS server configuration for endpoint name resolution.
+
+ 1. Start **Notepad** as an administrator (administrator privileges are required to save the file), and then open the **hosts** file located at `C:\Windows\System32\Drivers\etc`.
+
+ ![Windows Explorer hosts file](media/azure-stack-edge-gpu-connect-resource-manager/hosts-file.png)
+
+ 2. Add the following entries to your **hosts** file, replacing the placeholders with the appropriate values for your device:
+
+ ```
+ <Device IP> login.<appliance name>.<DNS domain>
+ <Device IP> management.<appliance name>.<DNS domain>
+ <Device IP> <storage name>.blob.<appliance name>.<DNS domain>
+ ```
+ 3. Use the following image for reference. Save the **hosts** file.
+
+ ![hosts file in Notepad](media/azure-stack-edge-gpu-deploy-virtual-machine-cli-python/hosts-screenshot-boxed.png)
+
+6. [Download the Python script](https://aka.ms/ase-vm-python) used in this procedure.
+
+## Step 1: Set up Azure CLI/Python on the client
+
+### Verify profile and install Azure CLI
+
+<!--1. Verify the API profile of the client and identify which version of the modules and libraries to include on your client. In this example, the client system will be running Azure Stack 1904 or later. For more information, see [Azure Resource Manager API profiles](/azure-stack/user/azure-stack-version-profiles?view=azs-1908&preserve-view=true#azure-resource-manager-api-profiles).-->
+
+1. Install Azure CLI on your client. In this example, Azure CLI 2.0.80 was installed. To verify the version of Azure CLI, run the `az --version` command.
+
+ The following is sample output from the above command:
+
+ ```output
+ PS C:\windows\system32> az --version
+ azure-cli 2.0.80
+
+ command-modules-nspkg 2.0.3
+ core 2.0.80
+ nspkg 3.0.4
+ telemetry 1.0.4
+ Extensions:
+ azure-cli-iot-ext 0.7.1
+
+ Python location 'C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\python.exe'
+ Extensions directory 'C:\.azure\cliextensions'
+
+ Python (Windows) 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 02:47:15) [MSC v.1900 32 bit (Intel)]
+
+ Legal docs and information: aka.ms/AzureCliLegal
+
+ Your CLI is up-to-date.
+
+ Please let us know how we are doing: https://aka.ms/clihats
+ PS C:\windows\system32>
+ ```
+
+ If you do not have Azure CLI, download and [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows). You can run Azure CLI from the Windows command prompt or from Windows PowerShell.
+
+2. Make a note of the CLI's Python location. You need the Python location to determine the location of the trusted root certificate store for Azure CLI.
+
+3. To run the sample script used in this article, you will need the following Python library versions:
+
+ ```
+ azure-common==1.1.23
+ azure-mgmt-resource==2.1.0
+ azure-mgmt-network==2.7.0
+ azure-mgmt-compute==5.0.0
+ azure-mgmt-storage==1.5.0
+ azure-storage-blob==1.2.0rc1
+ haikunator
+ msrestazure==0.6.2
+ ```
+ To install a specific library version, run `.\python.exe -m pip install <package>==<version>` from the Azure CLI Python directory; a loop that installs all of the pinned versions follows the sample outputs below. For example, to install Haikunator:
+
+ ```powershell
+ .\python.exe -m pip install haikunator
+ ```
+
+ The following sample output shows the installation of Haikunator:
+
+ ```output
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> .\python.exe -m pip install haikunator
+
+ Collecting haikunator
+ Downloading https://files.pythonhosted.org/packages/43/fa/130968f1a1bb1461c287b9ff35c630460801783243acda2cbf3a4c5964a5/haikunator-2.1.0-py2.py3-none-any.whl
+
+ Installing collected packages: haikunator
+ Successfully installed haikunator-2.1.0
+ You are using pip version 10.0.1, however version 20.0.1 is available.
+ You should consider upgrading using the 'python -m pip install --upgrade pip' command.
+
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2>
+ ```
+
+ The following sample output shows the pip installation of `msrestazure`:
+
+ ```output
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> .\python.exe -m pip install msrestazure==0.6.2
+ Requirement already satisfied: msrestazure==0.6.2 in c:\program files (x86)\microsoft sdks\azure\cli2\lib\site-packages (0.6.2)
+ Requirement already satisfied: msrest<2.0.0,>=0.6.0 in c:\program files (x86)\microsoft sdks\azure\cli2\lib\site-packages (from msrestazure==0.6.2) (0.6.10)
+ === CUT =========================== CUT ==================================
+ Requirement already satisfied: cffi!=1.11.3,>=1.8 in c:\program files (x86)\microsoft sdks\azure\cli2\lib\site-packages (from cryptography>=1.1.0->adal<2.0.0,>=0.6.0->msrestazure==0.6.2) (1.13.2)
+ Requirement already satisfied: pycparser in c:\program files (x86)\microsoft sdks\azure\cli2\lib\site-packages (from cffi!=1.11.3,>=1.8->cryptography>=1.1.0->adal<2.0.0,>=0.6.0->msrestazure==0.6.2) (2.18)
+ You are using pip version 10.0.1, however version 20.0.1 is available.
+ You should consider upgrading using the 'python -m pip install --upgrade pip' command.
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2>
+ ```
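+
+ If you'd rather install all of the pinned versions in one pass, a small loop such as the following sketch may help. It assumes you are still in the Azure CLI Python directory used above.
+
+ ```powershell
+ # Sketch: install every pinned library version in one pass.
+ $packages = @(
+     "azure-common==1.1.23",
+     "azure-mgmt-resource==2.1.0",
+     "azure-mgmt-network==2.7.0",
+     "azure-mgmt-compute==5.0.0",
+     "azure-mgmt-storage==1.5.0",
+     "azure-storage-blob==1.2.0rc1",
+     "haikunator",
+     "msrestazure==0.6.2"
+ )
+ foreach ($package in $packages) {
+     .\python.exe -m pip install $package
+ }
+ ```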
+
+### Trust the Azure Stack Edge Pro CA root certificate
+
+1. Find the certificate location on your machine. The location may vary depending on where you installed `az cli`. Run Windows PowerShell as administrator and switch to the directory where Azure CLI installed Python, for example `C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2`.
+
+ To get the certificate location, type the following command:
+
+ ```powershell
+ .\python -c "import certifi; print(certifi.where())"
+ ```
+
+ The command returns the certificate location, as shown below:
+
+ ```output
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> .\python -c "import certifi; print(certifi.where())"
+ C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\certifi\cacert.pem
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2>
+ ```
+
+ Make a note of this location; you will use it later: `C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\certifi\cacert.pem`.
+
+2. Trust the Azure Stack Edge Pro CA root certificate by appending it to the existing Python certificate. You will provide the path to where you saved the PEM certificate earlier.
+
+ ```powershell
+ $pemFile = "<Path to the pem format certificate>"
+ ```
+ An example path would be `C:\VM-scripts\rootteam3device.pem`.
+
+ Then type the following series of commands into Windows PowerShell:
+
+ ```powershell
+ $root = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
+ $root.Import($pemFile)
+
+ Write-Host "Extracting required information from the cert file"
+ $md5Hash = (Get-FileHash -Path $pemFile -Algorithm MD5).Hash.ToLower()
+ $sha1Hash = (Get-FileHash -Path $pemFile -Algorithm SHA1).Hash.ToLower()
+ $sha256Hash = (Get-FileHash -Path $pemFile -Algorithm SHA256).Hash.ToLower()
+
+ $issuerEntry = [string]::Format("# Issuer: {0}", $root.Issuer)
+ $subjectEntry = [string]::Format("# Subject: {0}", $root.Subject)
+ $labelEntry = [string]::Format("# Label: {0}", $root.Subject.Split('=')[-1])
+ $serialEntry = [string]::Format("# Serial: {0}", $root.GetSerialNumberString().ToLower())
+ $md5Entry = [string]::Format("# MD5 Fingerprint: {0}", $md5Hash)
+ $sha1Entry= [string]::Format("# SHA1 Fingerprint: {0}", $sha1Hash)
+ $sha256Entry = [string]::Format("# SHA256 Fingerprint: {0}", $sha256Hash)
+ $certText = (Get-Content -Path $pemFile -Raw).ToString().Replace("`r`n","`n")
+
+ $rootCertEntry = "`n" + $issuerEntry + "`n" + $subjectEntry + "`n" + $labelEntry + "`n" + `
+ $serialEntry + "`n" + $md5Entry + "`n" + $sha1Entry + "`n" + $sha256Entry + "`n" + $certText
+
+ Write-Host "Adding the certificate content to Python Cert store"
+ Add-Content "${env:ProgramFiles(x86)}\Microsoft SDKs\Azure\CLI2\Lib\site-packages\certifi\cacert.pem" $rootCertEntry
+
+ Write-Host "Python Cert store was updated to allow the Azure Stack Edge Pro CA root certificate"
+ ```
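+
+ To confirm that the append succeeded, you can check that the new entry now appears at the end of the Python certificate store. A quick sketch, assuming the same `cacert.pem` path noted earlier:
+
+ ```powershell
+ # Sketch: the tail of cacert.pem should now show the issuer/subject comments
+ # and the PEM block that were just appended.
+ Get-Content "${env:ProgramFiles(x86)}\Microsoft SDKs\Azure\CLI2\Lib\site-packages\certifi\cacert.pem" -Tail 40
+ ```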
+
+### Connect to Azure Stack Edge Pro
+
+1. Register your Azure Stack Edge Pro environment by running the `az cloud register` command.
+
+ In some scenarios, direct outbound internet connectivity is routed through a proxy or firewall that enforces SSL interception. In these cases, the `az cloud register` command can fail with an error such as "Unable to get endpoints from the cloud." To work around this error, set the following environment variables in Windows PowerShell:
+
+ ```powershell
+ $ENV:AZURE_CLI_DISABLE_CONNECTION_VERIFICATION = 1
+ $ENV:ADAL_PYTHON_SSL_NO_VERIFY = 1
+ ```
+
+2. Set the environment variables for the script: the Azure Resource Manager endpoint, the location where the resources are created, and the path to the source VHD. The location for the resources is fixed across all Azure Stack Edge Pro devices and is set to `dbelocal`. You also need to specify the address prefixes and a private IP address. Replace all the following values with your own, except for `AZURE_RESOURCE_LOCATION`, which must be hardcoded to `"dbelocal"`.
+
+ ```powershell
+ $ENV:ARM_ENDPOINT = "https://management.team3device.teatraining1.com"
+ $ENV:AZURE_RESOURCE_LOCATION = "dbelocal"
+ $ENV:VHD_FILE_PATH = "C:\Downloads\Ubuntu1604\Ubuntu13.vhd"
+ $ENV:ADDRESS_PREFIXES = "5.5.0.0/16"
+ $ENV:PRIVATE_IP_ADDRESS = "5.5.174.126"
+ ```
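+
+ As a quick sanity check, you can list the variables you just set. A sketch that simply filters the current environment:
+
+ ```powershell
+ # Sketch: confirm the script's environment variables are set as expected.
+ Get-ChildItem env: | Where-Object {
+     $_.Name -in 'ARM_ENDPOINT','AZURE_RESOURCE_LOCATION','VHD_FILE_PATH','ADDRESS_PREFIXES','PRIVATE_IP_ADDRESS'
+ }
+ ```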
+
+3. Register your environment. Use the following parameters when running `az cloud register`:
+
+ | Value | Description | Example |
+ |-------|-------------|---------|
+ | Environment name | The name of the environment you are trying to connect to. | Provide a name, for example, `aze-environ`. |
+ | Resource Manager endpoint | This URL is `https://management.<appliance name>.<DNS domain>`. <br> To get this URL, go to the **Devices** page in the local web UI of your device. | For example, `https://management.team3device.teatraining1.com`. |
+
+ ```powershell
+ az cloud register -n <environmentname> --endpoint-resource-manager "https://management.<appliance name>.<DNS domain>"
+ ```
+ The following shows sample usage of the above command:
+
+ ```powershell
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> az cloud register -n az-new-env --endpoint-resource-manager "https://management.team3device.teatraining1.com"
+ ```
+
+
+4. Set the active environment by using the following command:
+
+ ```powershell
+ az cloud set -n <EnvironmentName>
+ ```
+ The following shows sample usage of the above command:
+
+ ```powershell
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> az cloud set -n az-new-env
+ Switched active cloud to 'az-new-env'.
+ Use 'az login' to log in to this cloud.
+ Use 'az account set' to set the active subscription.
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2>
+ ```
+
+5. Sign in to your Azure Stack Edge Pro environment by using the `az login` command. You can sign in to the Azure Stack Edge Pro environment either as a user or as a [service principal](../active-directory/develop/app-objects-and-service-principals.md).
+
+ Follow these steps to sign in as a *user*:
+
+ You can either specify the username and password directly within the `az login` command, or authenticate by using a browser. You must do the latter if your account has multi-factor authentication enabled.
+
+ The following shows sample usage of `az login`:
+
+ ```powershell
+ PS C:\Certificates> az login -u EdgeARMuser
+ ```
+ After using the login command, you are prompted for a password. Provide the Azure Resource Manager password.
+
+ The following shows sample output for a successful sign-in after supplying the password:
+
+ ```output
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> az login -u EdgeARMuser
+ Password:
+ [
+ {
+ "cloudName": "az-new-env",
+ "id": "A4257FDE-B946-4E01-ADE7-674760B8D1A3",
+ "isDefault": true,
+ "name": "Default Provider Subscription",
+ "state": "Enabled",
+ "tenantId": "c0257de7-538f-415c-993a-1b87a031879d",
+ "user": {
+ "name": "EdgeArmUser@localhost",
+ "type": "user"
+ }
+ }
+ ]
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2>
+ ```
+ Make a note of the `id` and `tenantId` values; they correspond to your Azure Resource Manager Subscription ID and Azure Resource Manager Tenant ID, respectively, and are used in a later step.
+
+ To work as a *service principal*, set the following environment variables:
+
+ ```powershell
+ $ENV:ARM_TENANT_ID = "c0257de7-538f-415c-993a-1b87a031879d"
+ $ENV:ARM_CLIENT_ID = "cbd868c5-7207-431f-8d16-1cb144b50971"
+ $ENV:ARM_CLIENT_SECRET = "<Your Azure Resource Manager password>"
+ $ENV:ARM_SUBSCRIPTION_ID = "A4257FDE-B946-4E01-ADE7-674760B8D1A3"
+ ```
+
+ Your Azure Resource Manager Client ID is hard-coded. Your Azure Resource Manager Tenant ID and Azure Resource Manager Subscription ID are both present in the output of `az login` command you ran earlier. The Azure Resource Manager Client secret is the Azure Resource Manager password that you set.
+
+ For more information, see [Azure Resource Manager password](azure-stack-edge-j-series-set-azure-resource-manager-password.md).
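+
+ With these variables set, a service-principal sign-in would look like the following sketch. The flags are standard `az login` parameters; the values come from the variables above:
+
+ ```powershell
+ # Sketch: sign in as a service principal using the variables set above.
+ az login --service-principal `
+     --username $ENV:ARM_CLIENT_ID `
+     --password $ENV:ARM_CLIENT_SECRET `
+     --tenant $ENV:ARM_TENANT_ID
+ az account set --subscription $ENV:ARM_SUBSCRIPTION_ID
+ ```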
+
+6. Change the profile to version 2019-03-01-hybrid. To change the profile version, run the following command:
+
+ ```powershell
+ az cloud update --profile 2019-03-01-hybrid
+ ```
+
+ The following shows sample usage of `az cloud update`:
+
+ ```powershell
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> az cloud update --profile 2019-03-01-hybrid
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2>
+ ```
+
+## Step 2: Create a VM
+
+A Python script is provided for you to create a VM. Depending on whether you signed in as a user or as a service principal, the script takes its input accordingly and creates a VM.
+
+1. Run the Python script from the same directory where Python is installed.
+
+ `.\python.exe example_dbe_arguments_name_https.py cli`
+
+2. When the script runs, uploading the VHD takes 20-30 minutes. To view the progress of the upload operation, you can use Azure Storage Explorer or AzCopy.
+
+ Here is a sample output of a successful run of the script. The script creates all the resources within a resource group, uses those resources to create a VM, and finally deletes the resource group including all the resources it created.
+
+
+ ```powershell
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2> .\python.exe example_dbe_arguments_name_https.py cli
+
+ Create Resource Group
+ Create a storage account
+ Uploading to Azure Stack Storage as blob:
+ ubuntu13.vhd
+
+ Listing blobs...
+ ubuntu13.vhd
+
+ VM image resource id:
+ /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/azure-sample-group-virtual-machines118/providers/Microsoft.Compute/images/UbuntuImage
+
+ Create Vnet
+ Create Subnet
+ Create NIC
+ Creating Linux Virtual Machine
+ Tag Virtual Machine
+ Create (empty) managed Data Disk
+ Get Virtual Machine by Name
+ Attach Data Disk
+ Detach Data Disk
+ Deallocating the VM (to prepare for a disk resize)
+ Update OS disk size
+ Start VM
+ Restart VM
+ Stop VM
+
+ List VMs in subscription
+ VM: VmName118
+
+ List VMs in resource group
+ VM: VmName118
+
+ Delete VM
+ All example operations completed successfully!
+
+ Delete Resource Group
+ Deleted: azure-sample-group-virtual-machines118
+ PS C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2>
+ ```
++
+## Next steps
+
+[Common Az CLI commands for Linux virtual machines](../virtual-machines/linux/cli-manage.md)
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script.md
Previously updated : 02/22/2021 Last updated : 03/08/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using an Azure PowerShell script so that I can efficiently manage my VMs.
Before you begin creating and managing a VM on your Azure Stack Edge Pro device
1. Start **Notepad** as an administrator (Administrator privileges is required to save the file), and then open the **hosts** file located at `C:\Windows\System32\Drivers\etc`.
- ![Windows Explorer hosts file](media/azure-stack-edge-j-series-connect-resource-manager/hosts-file.png)
+ ![Windows Explorer hosts file](media/azure-stack-edge-gpu-connect-resource-manager/hosts-file.png)
2. Add the following entries to your **hosts** file replacing with appropriate values for your device:
Before you begin creating and managing a VM on your Azure Stack Edge Pro device
3. Use the following image for reference. Save the **hosts** file.
- ![hosts file in Notepad](media/azure-stack-edge-j-series-deploy-virtual-machine-cli-python/hosts-screenshot-boxed.png)
+ ![hosts file in Notepad](media/azure-stack-edge-gpu-deploy-virtual-machine-cli-python/hosts-screenshot-boxed.png)
2. [Download the PowerShell script](https://aka.ms/ase-vm-powershell) used in this procedure.
databox-online Azure Stack Edge Gpu Manage Bandwidth Schedules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-bandwidth-schedules.md
+
+ Title: Azure Stack Edge Pro GPU manage bandwidth schedules | Microsoft Docs
+description: Describes how to use the Azure portal to manage bandwidth schedules on your Azure Stack Edge Pro GPU.
++++++ Last updated : 02/22/2021++
+# Use the Azure portal to manage bandwidth schedules on your Azure Stack Edge Pro GPU
++
+This article describes how to manage bandwidth schedules on your Azure Stack Edge Pro. Bandwidth schedules allow you to configure network bandwidth usage across multiple time-of-day schedules. These schedules can be applied to the upload and download operations from your device to the cloud.
+
+You can add, modify, or delete the bandwidth schedules for your Azure Stack Edge Pro via the Azure portal.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Add a schedule
+> * Edit a schedule
+> * Delete a schedule
++
+## Add a schedule
+
+Do the following steps in the Azure portal to add a schedule.
+
+1. In the Azure portal for your Azure Stack Edge resource, go to **Bandwidth**.
+2. In the right pane, select **+ Add schedule**.
+
+ ![Select Bandwidth](media/azure-stack-edge-gpu-manage-bandwidth-schedules/add-schedule-1.png)
+
+3. In the **Add schedule** blade:
+
+ 1. Provide the **Start day**, **End day**, **Start time**, and **End time** of the schedule.
+ 2. Check the **All day** option if this schedule should run all day.
+ 3. **Bandwidth rate** is the bandwidth in Megabits per second (Mbps) used by your device in operations involving the cloud (both uploads and downloads). Supply a number between 64 and 2,147,483,647 for this field.
+ 4. Select **Unlimited bandwidth** if you do not want to throttle the data upload and download.
+ 5. Select **Add**.
+
+ ![Add schedule](media/azure-stack-edge-gpu-manage-bandwidth-schedules/add-schedule-2.png)
+
+4. A schedule is created with the specified parameters. This schedule is then displayed in the list of bandwidth schedules in the portal.
+
+ ![Updated list of bandwidth schedules](media/azure-stack-edge-gpu-manage-bandwidth-schedules/add-schedule-3.png)
+
+## Edit schedule
+
+Do the following steps to edit a bandwidth schedule.
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Bandwidth**.
+2. From the list of bandwidth schedules, select a schedule that you want to modify.
+
+ ![Select bandwidth schedule](media/azure-stack-edge-gpu-manage-bandwidth-schedules/modify-schedule-1.png)
+
+3. Make the desired changes and save the changes.
+
+ ![Modify user](media/azure-stack-edge-gpu-manage-bandwidth-schedules/modify-schedule-2.png)
+
+4. After the schedule is modified, the list of schedules is updated to reflect the modified schedule.
+
+ ![Modify user 2](media/azure-stack-edge-gpu-manage-bandwidth-schedules/modify-schedule-3.png)
++
+## Delete a schedule
+
+Do the following steps to delete a bandwidth schedule associated with your Azure Stack Edge Pro device.
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Bandwidth**.
+
+2. From the list of bandwidth schedules, select a schedule that you want to delete. In the **Edit schedule**, select **Delete**. When prompted for confirmation, select **Yes**.
+
+ ![Delete a user](media/azure-stack-edge-gpu-manage-bandwidth-schedules/delete-schedule-2.png)
+
+3. After the schedule is deleted, the list of schedules is updated.
++
+## Next steps
+
+- Learn how to [Manage shares](azure-stack-edge-gpu-manage-shares.md).
databox-online Azure Stack Edge Gpu Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-compute.md
+
+ Title: Azure Stack Edge Pro GPU compute management | Microsoft Docs
+description: Describes how to manage the Edge compute settings such as trigger, modules, view compute configuration, remove configuration via the Azure portal on your Azure Stack Edge Pro GPU.
++++++ Last updated : 03/08/2021++
+# Manage compute on your Azure Stack Edge Pro GPU
++
+This article describes how to manage compute via IoT Edge service on your Azure Stack Edge Pro GPU device. You can manage the compute via the Azure portal or via the local web UI. Use the Azure portal to manage modules, triggers, and IoT Edge configuration, and the local web UI to manage compute network settings.
+++
+## Manage triggers
+
+Events are things that happen within your cloud environment or on your device that you might want to take action on. For example, the creation of a file in a share is an event. Triggers are raised in response to these events. For your Azure Stack Edge Pro, triggers can be in response to file events or a schedule.
+
+- **File**: These triggers are in response to file events, such as the creation or modification of a file.
+- **Scheduled**: These triggers are in response to a schedule that you can define with a start date, start time, and repeat interval.
++
+### Add a trigger
+
+Take the following steps in the Azure portal to create a trigger.
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **IoT Edge**. Go to **Triggers** and select **+ Add trigger** on the command bar.
+
+ ![Select add trigger](media/azure-stack-edge-gpu-manage-compute/add-trigger-1-m.png)
+
+2. In the **Add trigger** blade, provide a unique name for your trigger.
+
+ <!--Trigger names can only contain numbers, lowercase letters, and hyphens. The share name must be between 3 and 63 characters long and begin with a letter or a number. Each hyphen must be preceded and followed by a non-hyphen character.-->
+
+3. Select a **Type** for the trigger. Choose **File** when the trigger is in response to a file event. Select **Scheduled** when you want the trigger to start at a defined time and run at a specified repeat interval. Depending on your selection, a different set of options is presented.
+
+ - **File trigger** - Choose a mounted share from the dropdown list. When a file event fires in this share, the trigger invokes an Azure Function.
+
+ ![Add file trigger](media/azure-stack-edge-gpu-manage-compute/add-file-trigger.png)
+
+ - **Scheduled trigger** - Specify the start date/time and the repeat interval in hours, minutes, or seconds. Also enter a name for a topic. A topic gives you the flexibility to route the trigger to a module deployed on the device.
+
+ An example route string is: `"route3": "FROM /* WHERE topic = 'topicname' INTO BrokeredEndpoint("modules/modulename/inputs/input1")"`.
+
+ ![Add scheduled trigger](media/azure-stack-edge-gpu-manage-compute/add-scheduled-trigger.png)
+
+4. Select **Add** to create the trigger. A notification shows that the trigger creation is in progress. After the trigger is created, the blade updates to reflect the new trigger.
+
+ ![Updated trigger list](media/azure-stack-edge-gpu-manage-compute/add-trigger-2.png)
+
+### Delete a trigger
+
+Take the following steps in the Azure portal to delete a trigger.
+
+1. From the list of triggers, select the trigger that you want to delete.
+
+ ![Select trigger](media/azure-stack-edge-gpu-manage-compute/delete-trigger-1.png)
+
+2. Right-click and then select **Delete**.
+
+ ![Select delete](media/azure-stack-edge-gpu-manage-compute/delete-trigger-2.png)
+
+3. When prompted for confirmation, click **Yes**.
+
+ ![Confirm delete](media/azure-stack-edge-gpu-manage-compute/delete-trigger-3.png)
+
+The list of triggers updates to reflect the deletion.
+
+## Manage IoT Edge configuration
+
+Use the Azure portal to view the compute configuration, remove an existing compute configuration, or to refresh the compute configuration to sync up access keys for the IoT device and IoT Edge device for your Azure Stack Edge Pro.
+
+### View IoT Edge configuration
+
+Take the following steps in the Azure portal to view the IoT Edge configuration for your device.
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **IoT Edge**. After IoT Edge service is enabled on your device, the Overview page indicates that the IoT Edge service is running fine.
+
+ ![Select View compute](media/azure-stack-edge-gpu-manage-compute/view-compute-1.png)
+
+2. Go to **Properties** to view the IoT Edge configuration on your device. When you configured compute, you created an IoT Hub resource. Under that IoT Hub resource, an IoT device and an IoT Edge device are configured. Only Linux modules are supported on the IoT Edge device.
+
+ ![View configuration](media/azure-stack-edge-gpu-manage-compute/view-compute-2.png)
++
+### Remove IoT Edge service
+
+Take the following steps in the Azure portal to remove the existing IoT Edge configuration for your device.
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **IoT Edge**. Go to **Overview** and select **Remove** on the command bar.
+
+ ![Select Remove compute](media/azure-stack-edge-gpu-manage-compute/remove-compute-1.png)
+
+2. Removing the IoT Edge service is an irreversible action that can't be undone. The modules and triggers that you created will also be deleted. If you need to use IoT Edge again, you will have to reconfigure your device. When prompted for confirmation, select **OK**.
+
+ ![Select Remove compute 2](media/azure-stack-edge-gpu-manage-compute/remove-compute-2.png)
+
+### Sync up IoT device and IoT Edge device access keys
+
+When you configure compute on your Azure Stack Edge Pro, an IoT device and an IoT Edge device are created. These devices are automatically assigned symmetric access keys. As a security best practice, these keys are rotated regularly via the IoT Hub service.
+
+To rotate these keys, you can go to the IoT Hub service that you created and select the IoT device or the IoT Edge device. Each device has a primary access key and a secondary access key. Assign the primary access key to the secondary access key and then regenerate the primary access key.
+
+If your IoT device and IoT Edge device keys have been rotated, then you need to refresh the configuration on your Azure Stack Edge Pro to get the latest access keys. The sync helps the device get the latest keys for your IoT device and IoT Edge device. Azure Stack Edge Pro uses only the primary access keys.
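+
+If you prefer to rotate keys from the command line instead of the portal, the `azure-iot` CLI extension offers a key-renewal command. The following is a sketch only, assuming the extension is installed and using hypothetical hub and device names:
+
+```powershell
+# Sketch: rotate keys for the IoT Edge device (names are hypothetical).
+# 'swap' exchanges the primary and secondary keys; renewing the primary then
+# generates a fresh primary key, matching the rotation described above.
+az iot hub device-identity renew-key --hub-name myasehub --device-id myase-edge-device --key-type swap
+az iot hub device-identity renew-key --hub-name myasehub --device-id myase-edge-device --key-type primary
+```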
+
+Take the following steps in the Azure portal to sync the access keys for your device.
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **IoT Edge compute**. Go to **Overview** and select **Refresh configuration** on the command bar.
+
+ ![Select Refresh configuration](media/azure-stack-edge-gpu-manage-compute/refresh-configuration-1.png)
+
+2. Select **Yes** when prompted for confirmation.
+
+ ![Select Yes when prompted](media/azure-stack-edge-gpu-manage-compute/refresh-configuration-2.png)
+
+3. Exit out of the dialog once the sync is complete.
+
+## Change external service IPs for containers
+
+Kubernetes external service IPs are used to reach services that are exposed outside the Kubernetes cluster. After your device is activated, you can set or modify the external service IPs for containerized workloads on your device by accessing the local UI.
++
+1. In the local UI of the device, go to **Compute**.
+1. Select the port whose network is configured for compute. In the blade that opens up, specify new Kubernetes external service IPs or modify existing ones. These IPs are used for any services that need to be exposed outside of the Kubernetes cluster.
+ - You need a minimum of one service IP for the `edgehub` service that runs on your device and is used by IoT Edge modules.
+ - You need an IP for each additional IoT Edge module or container that you intend to deploy.
+ - These are static, contiguous IPs. For example, to run `edgehub` plus two additional modules, you would reserve a contiguous range of three IPs.
+
+ ![Change Kubernetes service IPs](media/azure-stack-edge-gpu-manage-compute/change-service-ips-1.png)
+
+1. Select **Apply**. After the IPs are applied, your device does not need a restart or a reboot. The new IPs take effect immediately.
++
+## Next steps
+
+- Learn how to [troubleshoot your Azure Stack Edge Pro](azure-stack-edge-gpu-troubleshoot.md).
databox-online Azure Stack Edge Gpu Manage Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-shares.md
+
+ Title: Azure Stack Edge Pro GPU share management | Microsoft Docs
+description: Describes how to use the Azure portal to manage shares on your Azure Stack Edge Pro GPU.
++++++ Last updated : 02/22/2021++
+# Use Azure portal to manage shares on your Azure Stack Edge Pro
++
+This article describes how to manage shares on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, delete, or refresh shares, or to sync the storage key for the storage account associated with the shares. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+
+## About shares
+
+To transfer data to Azure, you need to create shares on your Azure Stack Edge Pro. The shares that you add on the Azure Stack Edge Pro device can be local shares or shares that push data to the cloud.
+
+ - **Local shares**: Use these shares when you want the data to be processed locally on the device.
+ - **Shares**: Use these shares when you want the device data to be automatically pushed to your storage account in the cloud. All the cloud functions such as **Refresh** and **Sync storage keys** apply to the shares.
++
+## Add a share
+
+Do the following steps in the Azure portal to create a share.
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select **+ Add share** on the command bar.
+
+ ![Select add share](media/azure-stack-edge-gpu-manage-shares/add-share-1.png)
+
+2. In **Add Share**, specify the share settings. Provide a unique name for your share.
+
+ Share names can only contain numbers, lowercase letters, and hyphens. The share name must be between 3 and 63 characters long and begin with a letter or a number. Each hyphen must be preceded and followed by a non-hyphen character. (A validation sketch follows this procedure.)
+
+3. Select a **Type** for the share. The type can be **SMB** or **NFS**, with SMB being the default. SMB is the standard for Windows clients, and NFS is used for Linux clients. Depending upon whether you choose SMB or NFS shares, options presented are slightly different.
+
+4. Provide a **Storage account** where the share lives. A container with the share name is created in the storage account if the container does not already exist. If the container already exists, it is used.
+
+5. From the dropdown list, choose the **Storage service** from block blob, page blob, or files. The type of service depends on the format in which you want the data to reside in Azure. For example, in this instance, we want the data to reside as block blobs in Azure, so we select **Block Blob**. If you choose **Page Blob**, make sure that your data is 512-byte aligned. Use **Page blob** for VHD or VHDX files, which are always 512-byte aligned.
+
+6. This step depends on whether you are creating an SMB or an NFS share.
+ - **If creating an SMB share** - In the **All privilege local user** field, choose from **Create new** or **Use existing**. If creating a new local user, provide the **username**, **password**, and then confirm password. This assigns the permissions to the local user. After you have assigned the permissions here, you can then use File Explorer to modify these permissions.
+
+ ![Add SMB share](media/azure-stack-edge-gpu-manage-shares/add-smb-share.png)
+
+ If you check **Allow only read operations** for this share data, you can also specify read-only users.
+ - **If creating an NFS share** - You need to supply the **IP addresses of the allowed clients** that can access the share.
+
+ ![Add NFS share](media/azure-stack-edge-gpu-manage-shares/add-nfs-share.png)
+
+7. To easily access the shares from Edge compute modules, use the local mount point. Select **Use the share with Edge compute** so that the share is automatically mounted after it is created. When this option is selected, the Edge module can also use the compute with the local mount point.
+
+8. Click **Create** to create the share. You are notified that the share creation is in progress. After the share is created with the specified settings, the **Shares** blade updates to reflect the new share.
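+
+The naming rules in step 2 can be checked programmatically before you submit the form. The following is a small sketch, assuming the lowercase rules stated above; `Test-ShareName` is a hypothetical helper, not part of any Azure module:
+
+```powershell
+# Sketch: validate a proposed share name against the stated rules:
+# 3-63 characters; numbers, lowercase letters, and hyphens; begins with a
+# letter or number; every hyphen surrounded by non-hyphen characters.
+function Test-ShareName([string]$Name) {
+    return ($Name.Length -ge 3 -and $Name.Length -le 63) -and
+           ($Name -cmatch '^[a-z0-9]+(-[a-z0-9]+)*$')
+}
+Test-ShareName "my-share-01"   # True
+Test-ShareName "-bad-name"     # False
+```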
+
+## Add a local share
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select **+ Add share** on the command bar.
+
+ ![Select add share 2](media/azure-stack-edge-gpu-manage-shares/add-local-share-1.png)
+
+2. In **Add Share**, specify the share settings. Provide a unique name for your share.
+
+ Share names can only contain numbers, lowercase and uppercase letters, and hyphens. The share name must be between 3 and 63 characters long and begin with a letter or a number. Each hyphen must be preceded and followed by a non-hyphen character.
+
+3. Select a **Type** for the share. The type can be **SMB** or **NFS**, with SMB being the default. SMB is the standard for Windows clients, and NFS is used for Linux clients. Depending upon whether you choose SMB or NFS shares, options presented are slightly different.
+
+ > [!IMPORTANT]
+ > Make sure that the Azure Storage account that you use does not have immutability policies set on it if you are using it with an Azure Stack Edge Pro or Data Box Gateway device. For more information, see [Set and manage immutability policies for blob storage](../storage/blobs/storage-blob-immutability-policies-manage.md).
+
+4. To easily access the shares from Edge compute modules, use the local mount point. Select **Use the share with Edge compute** so that the Edge module can use the compute with the local mount point.
+
+5. Select **Configure as Edge local shares**. The data in local shares will stay locally on the device. You can process this data locally.
+
+6. In the **All privilege local user** field, choose from **Create new** or **Use existing**.
+
+7. Select **Create**.
+
+ ![Create local share](media/azure-stack-edge-gpu-manage-shares/add-local-share-2.png)
+
+ You see a notification that the share creation is in progress. After the share is created with the specified settings, the **Shares** blade updates to reflect the new share.
+
+ ![View updates Shares blade](media/azure-stack-edge-gpu-manage-shares/add-local-share-3.png)
+
+ Select the share to view the local mount point for the Edge compute modules for this share.
+
+ ![View local share details](media/azure-stack-edge-gpu-manage-shares/add-local-share-4.png)
+
+## Mount a share
+
+If you created a share before you configured compute on your Azure Stack Edge Pro device, you will need to mount the share. Take the following steps to mount a share.
++
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share you want to mount. The **Used for compute** column will show the status as **Disabled** for the selected share.
+
+ ![Select share](media/azure-stack-edge-gpu-manage-shares/mount-share-1.png)
+
+2. Select **Mount**.
+
+ ![Select mount](media/azure-stack-edge-gpu-manage-shares/mount-share-2.png)
+
+3. When prompted for confirmation, select **Yes**. This will mount the share.
+
+ ![Confirm mount](media/azure-stack-edge-gpu-manage-shares/mount-share-3.png)
+
+4. After the share is mounted, go to the list of shares. You'll see that the **Used for compute** column shows the share status as **Enabled**.
+
+ ![Share mounted](media/azure-stack-edge-gpu-manage-shares/mount-share-4.png)
+
+5. Select the share again to view the local mount point for the share. The Edge compute module uses this local mount point for the share.
+
+ ![Local mountpoint for the share](media/azure-stack-edge-gpu-manage-shares/mount-share-5.png)
+
+## Unmount a share
+
+Do the following steps in the Azure portal to unmount a share.
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of shares, select the share that you want to unmount. Make sure that the share you unmount is not used by any modules; if a module uses the share, you will see issues with that module.
+
+ ![Select share 2](media/azure-stack-edge-gpu-manage-shares/unmount-share-1.png)
+
+2. Select **Unmount**.
+
+ ![Select unmount](media/azure-stack-edge-gpu-manage-shares/unmount-share-2.png)
+
+3. When prompted for confirmation, select **Yes**. This will unmount the share.
+
+ ![Confirm unmount](media/azure-stack-edge-gpu-manage-shares/unmount-share-3.png)
+
+4. After the share is unmounted, go to the list of shares. You'll see that **Used for compute** column shows the share status as **Disabled**.
+
+ ![Share unmounted](media/azure-stack-edge-gpu-manage-shares/unmount-share-4.png)
+
+## Delete a share
+
+Do the following steps in the Azure portal to delete a share.
+
+1. From the list of shares, select the share that you want to delete.
+
+ ![Select share 3](media/azure-stack-edge-gpu-manage-shares/delete-share-1.png)
+
+2. Click **Delete**.
+
+ ![Click delete](media/azure-stack-edge-gpu-manage-shares/delete-share-2.png)
+
+3. When prompted for confirmation, click **Yes**.
+
+ ![Confirm delete](media/azure-stack-edge-gpu-manage-shares/delete-share-3.png)
+
+The list of shares updates to reflect the deletion.
+
+## Refresh shares
+
+The refresh feature allows you to refresh the contents of a share. When you refresh a share, a search is initiated to find all the Azure objects including blobs and files that were added to the cloud since the last refresh. These additional files are then downloaded to refresh the contents of the share on the device.
+
+> [!IMPORTANT]
+> - You can't refresh local shares.
+> - Permissions and access control lists (ACLs) are not preserved across a refresh operation.
+
+Do the following steps in the Azure portal to refresh a share.
+
+1. In the Azure portal, go to **Shares**. Select the share that you want to refresh.
+
+ ![Select share 4](media/azure-stack-edge-gpu-manage-shares/refresh-share-1.png)
+
+2. Click **Refresh**.
+
+ ![Click refresh](media/azure-stack-edge-gpu-manage-shares/refresh-share-2.png)
+
+3. When prompted for confirmation, click **Yes**. A job starts to refresh the contents of the on-premises share.
+
+ ![Confirm refresh](media/azure-stack-edge-gpu-manage-shares/refresh-share-3.png)
+
+4. While the refresh is in progress, the refresh option is grayed out in the context menu. Click the job notification to view the refresh job status.
+
+5. The time to refresh depends on the number of files in the Azure container as well as the files on the device. Once the refresh has successfully completed, the share timestamp is updated. Even if the refresh has partial failures, the operation is considered successful and the timestamp is updated. The refresh error logs are also updated.
+
+![Updated timestamp](media/azure-stack-edge-gpu-manage-shares/refresh-share-4.png)
+
+If there is a failure, an alert is raised. The alert details the cause and the recommendation to fix the issue. The alert also links to a file that has the complete summary of the failures including the files that failed to update or delete.
+
+## Sync pinned files
+
+To automatically sync up pinned files, do the following steps in the Azure portal:
+
+1. Select an existing Azure storage account.
+
+2. Go to **Containers** and select **+ Container** to create a container. Name this container *newcontainer*. Set the **Public access level** to **Container**.
+
+ ![Automated sync for pinned files 1](media/azure-stack-edge-gpu-manage-shares/image-1.png)
+
+3. Select the container name and set the following metadata:
+
+ - Name = "Pinned"
+ - Value = "True"
+
+ ![Automated sync for pinned files 2](media/azure-stack-edge-gpu-manage-shares/image-2.png)
+
+4. Create a new share on your device. Map it to the pinned container by choosing the existing container option. Mark the share as read only. Create a new user and specify the user name and a corresponding password for this share.
+
+ ![Automated sync for pinned files 3](media/azure-stack-edge-gpu-manage-shares/image-3.png)
+
+5. From the Azure portal, browse to the container that you created. Upload the file that you want pinned to *newcontainer*, which has the metadata set to pinned.
+
+6. Select **Refresh data** in the Azure portal so that the device downloads the pinning policy for that particular Azure Storage container.
+
+ ![Automated sync for pinned files 4](media/azure-stack-edge-gpu-manage-shares/image-4.png)
+
+7. Access the new share that was created on the device. The file that was uploaded to the storage account is now downloaded to the local share.
+
+ Anytime the device is disconnected and then reconnected, a refresh is triggered. The refresh brings down only the files that have changed.
++
+## Sync storage keys
+
+If your storage account keys have been rotated, then you need to sync the storage access keys. The sync helps the device get the latest keys for your storage account.
+
+Do the following steps in the Azure portal to sync your storage access key.
+
+1. Go to **Overview** in your resource. From the list of shares, select a share associated with the storage account that you need to sync.
+
+ ![Select share with relevant storage account](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-1.png)
+
+2. Click **Sync storage key**. Click **Yes** when prompted for confirmation.
+
+ ![Select Sync storage key](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-2.png)
+
+3. Exit out of the dialog once the sync is complete.
+
+>[!NOTE]
+> You only have to do this once for a given storage account. You don't need to repeat this action for all the shares associated with the same storage account.
++
+## Next steps
+
+- Learn how to [Manage users via Azure portal](azure-stack-edge-gpu-manage-users.md).
databox-online Azure Stack Edge Gpu Manage Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-storage-accounts.md
+
+ Title: Azure Stack Edge Pro GPU storage account management | Microsoft Docs
+description: Describes how to use the Azure portal to manage storage accounts on your Azure Stack Edge Pro.
++++++ Last updated : 02/18/2021++
+# Use the Azure portal to manage Edge storage accounts on your Azure Stack Edge Pro
++
+This article describes how to manage Edge storage accounts on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add or delete Edge storage accounts on your device.
+
+## About Edge storage accounts
+
+You can transfer data from your Azure Stack Edge Pro device via the SMB, NFS, or REST protocols. To transfer data to Blob storage using the REST APIs, you need to create Edge storage accounts on your Azure Stack Edge Pro.
+
+The Edge storage accounts that you add on the Azure Stack Edge Pro device are mapped to Azure Storage accounts. Any data written to the Edge storage accounts is automatically pushed to the cloud.
+
+A diagram detailing the two types of accounts and how the data flows from each of these accounts to Azure is shown below:
+
+![Diagram for Blob storage accounts](media/azure-stack-edge-gpu-manage-storage-accounts/ase-blob-storage.svg)
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Add an Edge storage account
+> * Delete an Edge storage account
++
+## Add an Edge storage account
+
+To create an Edge storage account, do the following procedure:
++
+## Delete an Edge storage account
+
+Take the following steps to delete an Edge storage account.
+
+1. Go to **Configuration > Storage accounts** in your resource. From the list of storage accounts, select the storage account you want to delete. From the top command bar, select **Delete storage account**.
+
+ ![Go to list of storage accounts](media/azure-stack-edge-gpu-manage-storage-accounts/delete-edge-storage-account-1.png)
+
+2. In the **Delete storage account** blade, confirm the storage account to delete and select **Delete**.
+
+ ![Confirm and delete storage account](media/azure-stack-edge-gpu-manage-storage-accounts/delete-edge-storage-account-2.png)
+
+The list of storage accounts is updated to reflect the deletion.
++
+## Add, delete a container
+
+You can also add or delete the containers for these storage accounts.
+
+To add a container, take the following steps:
+
+1. Select the storage account that you want to manage. From the top command bar, select **+ Add container**.
+
+ ![Select storage account to add container](media/azure-stack-edge-gpu-manage-storage-accounts/add-container-1.png)
+
+2. Provide a name for your container. This container is created in your Edge storage account as well as the Azure storage account mapped to this account.
+
+ ![Add Edge container](media/azure-stack-edge-gpu-manage-storage-accounts/add-container-2.png)
+
+The list of containers is updated to reflect the newly added container.
+
+![Updated list of containers](media/azure-stack-edge-gpu-manage-storage-accounts/add-container-4.png)
+
+You can now select a container from this list and select **+ Delete container** from the top command bar.
+
+![Delete a container](media/azure-stack-edge-gpu-manage-storage-accounts/add-container-3.png)
+
+## Sync storage keys
+
+You can synchronize the access keys for the Edge (local) storage accounts on your device.
+
+To sync the storage account access key, take the following steps:
+
+1. In your resource, select the storage account that you want to manage. From the top command bar, select **Sync storage key**.
+
+ ![Select sync storage key](media/azure-stack-edge-gpu-manage-storage-accounts/sync-storage-key-1.png)
+
+2. When prompted for confirmation, select **Yes**.
+
+ ![Select sync storage key 2](media/azure-stack-edge-gpu-manage-storage-accounts/sync-storage-key-2.png)
+
+## Next steps
+
+- Learn how to [Manage users via Azure portal](azure-stack-edge-gpu-manage-users.md).
databox-online Azure Stack Edge Gpu Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-users.md
+
+ Title: Azure Stack Edge Pro GPU manage users | Microsoft Docs
+description: Describes how to use the Azure portal to manage users on your Azure Stack Edge Pro GPU.
++++++ Last updated : 02/21/2021++
+# Use the Azure portal to manage users on your Azure Stack Edge Pro
++
+This article describes how to manage users on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, modify, or delete users.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Add a user
+> * Modify user
+> * Delete a user
+
+## About users
+
+Users can be read-only or full privilege. Read-only users can only view the share data. Full privilege users can read share data, write to these shares, and modify or delete the share data.
+
+ - **Full privilege user** - A local user with full access.
+ - **Read-only user** - A local user with read-only access. These users are associated with shares that allow read-only operations.
+
+The user permissions are first defined when the user is created during share creation. They can be modified by using File Explorer.
++
+## Add a user
+
+Do the following steps in the Azure portal to add a user.
+
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Users**. Select **+ Add user** on the command bar.
+
+ ![Select add user](media/azure-stack-edge-gpu-manage-users/add-user-1.png)
+
+2. Specify the username and password for the user you want to add. Confirm the password and select **Add**.
+
+ ![Specify username and password](media/azure-stack-edge-gpu-manage-users/add-user-2.png)
+
+ > [!IMPORTANT]
+ > These users are reserved by the system and should not be used: Administrator, EdgeUser, EdgeSupport, HcsSetupUser, WDAGUtilityAccount, CLIUSR, DefaultAccount, Guest.
+
+3. Notifications are shown when user creation starts and when it completes. After the user is created, select **Refresh** on the command bar to view the updated list of users.
++
+## Modify user
+
+You can change the password associated with a user once the user is created. Select the user from the list of users, enter and confirm the new password, and then save the changes.
+
+![Modify user](media/azure-stack-edge-gpu-manage-users/modify-user-1.png)
++
+## Delete a user
+
+Do the following steps in the Azure portal to delete a user.
++
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Users**.
+
+ ![Select user to delete](media/azure-stack-edge-gpu-manage-users/delete-user-1.png)
+
+2. Select a user from the list of users and then select **Delete**. When prompted, confirm the deletion.
+
+ ![Select user to delete 2](media/azure-stack-edge-gpu-manage-users/delete-user-2.png)
+
+The list of users is updated to reflect the deleted user.
+
+![Updated list of users](media/azure-stack-edge-gpu-manage-users/delete-user-4.png)
+
+## Next steps
+
+- Learn how to [Manage bandwidth](azure-stack-edge-gpu-manage-bandwidth-schedules.md).
databox-online Azure Stack Edge Gpu Recover Device Failure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-recover-device-failure.md
You are now ready to deploy the workloads that you were running on the old devic
Follow these steps to restore the data on the Edge cloud shares on your device:
-1. [Add shares](azure-stack-edge-j-series-manage-shares.md#add-a-share) with the same share names created previously on the failed device. Make sure that while creating shares, **Select blob container** is set to **Use existing** option and then select the container that was used with the previous device.
-1. [Add users](azure-stack-edge-j-series-manage-users.md#add-a-user) that had access to the previous device.
-1. [Add storage accounts](azure-stack-edge-j-series-manage-storage-accounts.md#add-an-edge-storage-account) associated with the shares previously on the device. While creating Edge storage accounts, select from an existing container and point to the container that was mapped to the Azure Storage account mapped on the previous device. Any data from the device that was written to the Edge storage account on the previous device was uploaded to the selected storage container in the mapped Azure Storage account.
-1. [Refresh the share](azure-stack-edge-j-series-manage-shares.md#refresh-shares) data from Azure. This pulls down all the cloud data from the existing container to the shares.
+1. [Add shares](azure-stack-edge-gpu-manage-shares.md#add-a-share) with the same share names created previously on the failed device. Make sure that while creating shares, **Select blob container** is set to **Use existing** option and then select the container that was used with the previous device.
+1. [Add users](azure-stack-edge-gpu-manage-users.md#add-a-user) that had access to the previous device.
+1. [Add storage accounts](azure-stack-edge-gpu-manage-storage-accounts.md#add-an-edge-storage-account) associated with the shares previously on the device. While creating Edge storage accounts, select from an existing container and point to the container that was mapped to the Azure Storage account mapped on the previous device. Any data from the device that was written to the Edge storage account on the previous device was uploaded to the selected storage container in the mapped Azure Storage account.
+1. [Refresh the share](azure-stack-edge-gpu-manage-shares.md#refresh-shares) data from Azure. This pulls down all the cloud data from the existing container to the shares.
## Restore Edge local shares
After the replacement device is fully configured, enable the device for local st
Follow these steps to recover the data from local shares: 1. [Configure compute on the device](azure-stack-edge-gpu-deploy-configure-compute.md).
-1. [Add a local share](azure-stack-edge-j-series-manage-shares.md#add-a-local-share) back.
+1. [Add a local share](azure-stack-edge-gpu-manage-shares.md#add-a-local-share) back.
1. Run the recovery procedure provided by the data protection solution of choice. See references from the preceding table. ## Restore VM files and folders
databox-online Azure Stack Edge Gpu Set Azure Resource Manager Password https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-set-azure-resource-manager-password.md
+
+ Title: Set Azure Resource Manager password on your Azure Stack Edge Pro GPU device
+description: Describes how to set the Azure Resource Manager password on your Azure Stack Edge Pro GPU using Azure PowerShell.
++++++ Last updated : 02/21/2021+
+#Customer intent: As an IT admin, I need to understand how to connect to Azure Resource Manager on my Azure Stack Edge Pro device so that I can manage resources.
++
+# Set Azure Resource Manager password on Azure Stack Edge Pro GPU device
++
+This article describes how to set your Azure Resource Manager password. You need to set this password when you connect to the device local APIs via Azure Resource Manager.
+
+<!--The procedure to set the password can be different depending upon whether you use the Azure portal or the PowerShell cmdlets. Each of these procedures is described in the following sections.-->
++
+## Reset password via the Azure portal
+
+1. In the Azure portal, go to the Azure Stack Edge resource you created to manage your device. Go to **Edge services > Cloud storage gateway**.
+
+ ![Reset EdgeARM user password 1](media/azure-stack-edge-gpu-set-azure-resource-manager-password/set-edgearm-password-1.png)
+
+2. In the right pane, from the command bar, select **Reset Edge ARM password**.
+
+ ![Reset EdgeARM user password 2](media/azure-stack-edge-gpu-set-azure-resource-manager-password/set-edgearm-password-2.png)
+
+3. In the **Reset EdgeArm user password** blade, provide a password to connect to your device local APIs via the Azure Resource Manager. Confirm the password and select **Reset**.
+
+ ![Reset EdgeARM user password 3](media/azure-stack-edge-gpu-set-azure-resource-manager-password/set-edgearm-password-3.png)
+++
+<!--## Reset password via PowerShell
+
+1. In the Azure Portal, go to the Azure Stack Edge resource you created to manage your device. Make a note of the following parameters in the **Overview** page.
+
+ - Azure Stack Edge resource name
+ - Subscription ID
+
+2. Go to **Settings > Properties**. Make a note of the following parameters in the **Properties** page.
+
+ - Resource group
+ - CIK encryption key: Select view and then copy the **Encryption Key**.
+
+ ![Get CIK encryption key](media/azure-stack-edge-gpu-set-azure-resource-manager-password/get-cik-portal.png)
+
+3. Identify a password that you will use to connect to Azure Resource Manager.
+
+4. Start the cloud shell. Select on the icon in the top right corner:
+
+ ![Start cloud shell](media/azure-stack-edge-gpu-set-azure-resource-manager-password/start-cloud-shell.png)
+
+ Once the cloud shell has started, you may need to switch to PowerShell.
+
+ ![Cloud shell](media/azure-stack-edge-gpu-set-azure-resource-manager-password/cloud-shell.png)
++
+5. Set context. Type:
+
+ `Set-AzContext -SubscriptionId <Subscription ID>`
+
+ Here is a sample output:
+
+
+ ```azurepowershell
+ PS Azure:\> Set-AzContext -SubscriptionId 8eb87630-972c-4c36-a270-f330e6c063df
+
+ Name Account SubscriptionName Environment TenantId
+ - - - -- --
+ DataBox_Edge_Test (8eb87630-972c-4c36-a… MSI@50342 DataBox_Edge_Tes AzureCloud 72f988bf-86f1-41af-91ab-2d7…
+
+ PS Azure:/
+ ```
+
+5. If you have any old PS modules, you need to remove those.
+
+ `Remove-Module Az.DataBoxEdge -force`
+
+ Here is a sample output. In this example, there were no old modules to be removed.
+
+
+ ```azurepowershell
+ PS Azure:\> Remove-Module Az.DataBoxEdge -force
+ Remove-Module : No modules were removed. Verify that the specification of modules to remove is correct and those modules exist in the runspace.
+ At line:1 char:1
+ + Remove-Module Az.DataBoxEdge -force
+ + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ + CategoryInfo : ResourceUnavailable: (:) [Remove-Module], InvalidOperationException
+ + FullyQualifiedErrorId : Modules_NoModulesRemoved,Microsoft.PowerShell.Commands.RemoveModuleCommand
+
+ PS Azure:\
+ ```
+
+6. The next set of commands downloads and runs a script to install the PowerShell modules.
+
+ ```azurepowershell
+ cd ~/clouddrive
+ wget https://aka.ms/dbe-cmdlet-beta -O Az.DataBoxEdge.zip
+ unzip ./Az.DataBoxEdge.zip
+ Import-Module ~/clouddrive/Az.DataBoxEdge/Az.DataBoxEdge.psd1 -Force
+ ```
+
+7. In the next set of commands, you'll need to provide the resource name, resource group name, encryption key, and the password you identified in the previous step.
+
+ ```azurepowershell
+ $devicename = "<Azure Stack Edge resource name>"
+ $resourceGroup = "<Resource group name>"
+ $cik = "<Encryption key>"
+ $password = "<Password>"
+ ```
+ The password and encryption key parameters must be passed as secure strings. Use the following cmdlets to convert the password and encryption key to secure strings.
+
+ ```azurepowershell
+ $pass = ConvertTo-SecureString $password -AsPlainText -Force
+ $key = ConvertTo-SecureString $cik -AsPlainText -Force
+ ```
+    Use the secure strings generated above as parameters for the `Set-AzDataBoxEdgeUser` cmdlet to reset the password. Use the same resource group that you used when creating the Azure Stack Edge Pro/Data Box Gateway resource.
+
+ ```azurepowershell
+ Set-AzDataBoxEdgeUser -ResourceGroupName $resourceGroup -DeviceName $devicename -Name EdgeARMUser -Password $pass -EncryptionKey $key
+ ```
+ Here is the sample output.
+
+ ```azurepowershell
+ PS /home/aseuser/clouddrive> $devicename = "myaseresource"
+ PS /home/aseuser/clouddrive> $resourceGroup = "myaserg"
+ PS /home/aseuser/clouddrive> $cik = "54a7450fd7b3c506e10efea4e0c88a9390f37e299fbf43e01fb5dfe483ac036b6d0f85a6246e1926e297f98c0ff84c20a57348c689eff179ce31571fc787ac0a"
+ PS /home/aseuser/clouddrive> $password = "Password2"
+ PS /home/aseuser/clouddrive> $pass = ConvertTo-SecureString $password -AsPlainText -Force
+ PS /home/aseuser/clouddrive> $key = ConvertTo-SecureString $cik -AsPlainText -Force
+ PS /home/aseuser/clouddrive> Set-AzDataBoxEdgeUser -ResourceGroupName $resourceGroup -DeviceName $devicename -Name EdgeARMUser -Password $pass -EncryptionKey $key
+
+    User name    Type  ResourceGroupName  DeviceName
+    ---------    ----  -----------------  ----------
+    EdgeARMUser  ARM   myaserg            myaseresource
+
+ PS /home/aseuser/clouddrive>
+ ```
+Use the new password to connect to Azure Resource Manager.-->
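+
+After the reset, you connect to the local Azure Resource Manager on the device by registering its management endpoint as a custom Azure environment and signing in as the local ARM user. The following is a minimal sketch, assuming the Az.Accounts module is available; the endpoint URL is a placeholder, your sign-in may also require a tenant ID, and the full procedure is in [Connect to Azure Resource Manager](azure-stack-edge-gpu-connect-resource-manager.md).
+
+```azurepowershell
+# Register the device's local ARM endpoint as a custom Azure environment.
+# "https://management.<device-name>.<dns-domain>" is a placeholder; use your device's endpoint.
+Add-AzEnvironment -Name "AzSEEnv" -ARMEndpoint "https://management.<device-name>.<dns-domain>"
+
+# Sign in as the local Azure Resource Manager user with the new password.
+$cred = Get-Credential -UserName "EdgeARMUser" -Message "Enter the new password"
+Connect-AzAccount -EnvironmentName "AzSEEnv" -Credential $cred
+```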
+
+## Next steps
+
+[Connect to Azure Resource Manager](azure-stack-edge-gpu-connect-resource-manager.md)
databox-online Azure Stack Edge Gpu System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-system-requirements.md
The system requirements for the Azure Stack Edge Pro include:
## Supported Edge storage accounts
-The following Edge storage accounts are supported with REST interface of the device. The Edge storage accounts are created on the device. For more information, see [Edge storage accounts](azure-stack-edge-j-series-manage-storage-accounts.md#about-edge-storage-accounts).
+The following Edge storage accounts are supported with the REST interface of the device. The Edge storage accounts are created on the device. For more information, see [Edge storage accounts](azure-stack-edge-gpu-manage-storage-accounts.md#about-edge-storage-accounts).
|Type |Storage account |Comments |
|---|---|---|
databox-online Azure Stack Edge Manage Access Power Connectivity Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-manage-access-power-connectivity-mode.md
Apart from the default fully connected mode, your device can also run in partial
- **Fully connected** - This is the normal default mode in which the device operates. Both the cloud upload and download of data is enabled in this mode. You can use the Azure portal or the local web UI to manage the device.
-- **Partially disconnected** – In this mode, the device cannot upload or download any share data however can be managed via the Azure portal.
+- **Partially connected** – In this mode, the device cannot upload or download any share data; however, it can be managed via the Azure portal.
This mode is typically used when the device is on a metered satellite network and the goal is to minimize network bandwidth consumption. Minimal network consumption may still occur for device monitoring operations.
You can shut down or restart your physical device using the local web UI. We rec
## Next steps
-- Learn how to [Manage shares](azure-stack-edge-manage-shares.md).
+- Learn how to [Manage shares](azure-stack-edge-manage-shares.md).
databox-online Azure Stack Edge Migrate Fpga Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-migrate-fpga-gpu.md
You will now copy data from the source device to the Edge cloud shares and Edge
Follow these steps to sync the data on the Edge cloud shares on your target device:
-1. [Add shares](azure-stack-edge-j-series-manage-shares.md#add-a-share) corresponding to the share names created on the source device. Make sure that while creating shares, **Select blob container** is set to **Use existing** option and then select the container that was used with the previous device.
-1. [Add users](azure-stack-edge-j-series-manage-users.md#add-a-user) that had access to the previous device.
-1. [Refresh the share](azure-stack-edge-j-series-manage-shares.md#refresh-shares) data from Azure. This pulls down all the cloud data from the existing container to the shares.
-1. Recreate the bandwidth schedules to be associated with your shares. See [Add a bandwidth schedule](azure-stack-edge-j-series-manage-bandwidth-schedules.md#add-a-schedule) for detailed steps.
+1. [Add shares](azure-stack-edge-gpu-manage-shares.md#add-a-share) corresponding to the share names created on the source device. Make sure that while creating shares, **Select blob container** is set to the **Use existing** option, and then select the container that was used with the previous device.
+1. [Add users](azure-stack-edge-gpu-manage-users.md#add-a-user) that had access to the previous device.
+1. [Refresh the share](azure-stack-edge-gpu-manage-shares.md#refresh-shares) data from Azure. This pulls down all the cloud data from the existing container to the shares.
+1. Recreate the bandwidth schedules to be associated with your shares. See [Add a bandwidth schedule](azure-stack-edge-gpu-manage-bandwidth-schedules.md#add-a-schedule) for detailed steps.
### 2. From Edge local shares
After the replacement device is fully configured, enable the device for local st
Follow these steps to recover the data from local shares:
1. [Configure compute on the device](azure-stack-edge-gpu-deploy-configure-compute.md).
-1. Add all the local shares on the target device. See the detailed steps in [Add a local share](azure-stack-edge-j-series-manage-shares.md#add-a-local-share).
+1. Add all the local shares on the target device. See the detailed steps in [Add a local share](azure-stack-edge-gpu-manage-shares.md#add-a-local-share).
1. To access the SMB shares on the source device, you use the IP addresses, whereas on the target device, you use the device name. See [Connect to an SMB share on Azure Stack Edge Pro GPU](azure-stack-edge-j-series-deploy-add-shares.md#connect-to-an-smb-share). To connect to NFS shares on the target device, you'll need to use the new IP addresses associated with the device. See [Connect to an NFS share on Azure Stack Edge Pro GPU](azure-stack-edge-j-series-deploy-add-shares.md#connect-to-an-nfs-share).
If you copied over your share data to an intermediate server over SMB/NFS, you can copy this data over to shares on the target device. You can also copy the data over directly from the source device if both the source and the target device are *online*.
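
For example, mapping an SMB share on the target device by its device name might look like the following sketch; the drive letter, device name, share name, and user are placeholders to substitute with your own values.

```azurepowershell
# Map the SMB share using the device name (on the source device, an IP address is used instead).
net use Z: \\<device-name>\<share-name> /user:<device-name>\<share-user>
```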
databox-online Azure Stack Edge Mini R Configure Vpn Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-configure-vpn-powershell.md
Follow these steps on the local UI of your Azure Stack Edge device.
## Validate data transfer through VPN
-To confirm that VPN is working, copy data to an SMB share. Follow the steps in [Add a share](azure-stack-edge-j-series-manage-shares.md#add-a-share) on your Azure Stack Edge device.
+To confirm that VPN is working, copy data to an SMB share. Follow the steps in [Add a share](azure-stack-edge-gpu-manage-shares.md#add-a-share) on your Azure Stack Edge device.
1. Copy a file, for example \data\pictures\waterfall.jpg, to the SMB share that you mounted on your client system.
2. To validate that the data is going through VPN, while the data is being copied:
databox-online Azure Stack Edge Mini R System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-system-requirements.md
The system requirements for the Azure Stack Edge Mini R include:
## Supported Edge storage accounts
-The following Edge storage accounts are supported with REST interface of the device. The Edge storage accounts are created on the device. For more information, see [Edge storage accounts](azure-stack-edge-j-series-manage-storage-accounts.md#about-edge-storage-accounts)
+The following Edge storage accounts are supported with REST interface of the device. The Edge storage accounts are created on the device. For more information, see [Edge storage accounts](azure-stack-edge-gpu-manage-storage-accounts.md#about-edge-storage-accounts)
|Type |Storage account |Comments | ||||
databox-online Azure Stack Edge Pro R Configure Vpn Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-configure-vpn-powershell.md
You'll next configure the VPN on the local web UI of your device.
## Validate data transfer through VPN
-To confirm that VPN is working, copy data to an SMB share. Follow the steps in [Add a share](azure-stack-edge-j-series-manage-shares.md#add-a-share) on your Azure Stack Edge Pro R device.
+To confirm that VPN is working, copy data to an SMB share. Follow the steps in [Add a share](azure-stack-edge-gpu-manage-shares.md#add-a-share) on your Azure Stack Edge Pro R device.
1. Copy a file, for example \data\pictures\waterfall.jpg, to the SMB share that you mounted on your client system.
2. Verify that this file shows up in your storage account on the cloud.
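
As an illustration, the copy in step 1 could be done from PowerShell as in the sketch below; the source path and the Z: drive letter for the mounted share are placeholders.

```azurepowershell
# Copy a test file to the SMB share mounted on the client (Z: is a placeholder).
Copy-Item -Path "C:\data\pictures\waterfall.jpg" -Destination "Z:\waterfall.jpg"
```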
databox-online Azure Stack Edge Pro R Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-overview.md
+
+ Title: Microsoft Azure Stack Edge Pro R overview | Microsoft Docs
+description: Describes Azure Stack Edge Pro R devices, a storage solution for military applications that uses a physical device for network-based transfer into Azure.
++++++ Last updated : 02/22/2021+
+#Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro R is and how it works so I can use it to process and transform data before sending to Azure.
++
+# What is the Azure Stack Edge Pro R?
+
+Azure Stack Edge Pro R is a rugged, edge computing device designed for use in harsh environments. Azure Stack Edge Pro R is delivered as a hardware-as-a-service solution. Microsoft ships you a cloud-managed device that acts as a network storage gateway and has a built-in Graphics Processing Unit (GPU) that enables accelerated AI inferencing.
+
+This article provides an overview of the Azure Stack Edge Pro R solution, key capabilities, and the scenarios where you can deploy this device.
++
+## Key capabilities
+
+Azure Stack Edge Pro R has the following capabilities:
+
+|Capability |Description |
+|||
+|Rugged hardware| Rugged server class hardware designed for harsh environments. Device contained in a portable transit case. |
+|Cloud-managed |Device and service are managed via the Azure portal.|
+|Edge compute workloads |Allows analysis, processing, and filtering of data. Supports VMs and containerized workloads.|
+|Accelerated AI inferencing| Enabled by an Nvidia T4 GPU.|
+|Data access | Direct data access from Azure Storage Blobs and Azure Files using cloud APIs for additional data processing in the cloud. A local cache on the device is used for fast access to the most recently used files.|
+|Disconnected mode| Device and service can be optionally managed via Azure Stack Hub. Deploy, run, and manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios.|