Updates from: 07/21/2021 03:05:46
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Previously updated : 06/15/2021 Last updated : 07/20/2021 zone_pivot_groups: b2c-policy-type
This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C).
## Custom domain overview
-You can enable custom domains for Azure AD B2C by using [Azure Front Door](https://azure.microsoft.com/services/frontdoor/). Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. You can render Azure AD B2C content behind Azure Front Door, and then configure an option in Azure Front Door to deliver the content via a custom domain in your application's URL.
+You can enable custom domains for Azure AD B2C by using [Azure Front Door](https://azure.microsoft.com/services/frontdoor/). Azure Front Door is a global entry point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. You can render Azure AD B2C content behind Azure Front Door, and then configure an option in Azure Front Door to deliver the content via a custom domain in your application's URL.
The following diagram illustrates Azure Front Door integration:
-1. From an application, a user clicks the sign-in button, which takes them to the Azure AD B2C sign-in page. This page specifies a custom domain name.
+1. From an application, a user selects the sign-in button, which takes them to the Azure AD B2C sign-in page. This page specifies a custom domain name.
1. The web browser resolves the custom domain name to the Azure Front Door IP address. During DNS resolution, a canonical name (CNAME) record with a custom domain name points to your Front Door default front-end host (for example, `contoso.azurefd.net`).
1. The traffic addressed to the custom domain (for example, `login.contoso.com`) is routed to the specified Front Door default front-end host (`contoso.azurefd.net`).
-1. Azure Front Door invokes Azure AD B2C content using the Azure AD B2C `<tenant-name>.b2clogin.com` default domain. The request to the Azure AD B2C endpoint includes the [X-Forwarded-Host](../frontdoor/front-door-http-headers-protocol.md) HTTP header. This HTTP header contains the original custom domain name.
+1. Azure Front Door invokes Azure AD B2C content using the Azure AD B2C `<tenant-name>.b2clogin.com` default domain. The request to the Azure AD B2C endpoint includes the original custom domain name.
1. Azure AD B2C responds to the request by displaying the relevant content and the original custom domain.

![Custom domain networking diagram](./media/custom-domain/custom-domain-network-flow.png)
When using custom domains, consider the following:

- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-service-limits) for Azure Front Door.
-- Azure Front Door is a separate Azure service, so additional charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).
+- Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).
- To use Azure Front Door [Web Application Firewall](../web-application-firewall/afds/afds-overview.md), you need to confirm your firewall configuration and rules work correctly with your Azure AD B2C user flows.
- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#block-access-to-the-default-domain-name)).
- If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
-## Add a custom domain name to your tenant
+## Step 1. Add a custom domain name to your Azure AD B2C tenant
Follow the guidance for how to [add and validate your custom domain in Azure AD](../active-directory/fundamentals/add-custom-domain.md). After the domain is verified, delete the DNS TXT record you created.
Verify each subdomain you plan to use. Verifying just the top-level domain isn't sufficient. For example, to be able to sign in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not just the top-level domain *contoso.com*.
-## Create a new Azure Front Door instance
+> [!TIP]
+> You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use App Service domains. To use App Service domains:
+>
+> 1. [Buy a custom domain name](/azure/app-service/manage-custom-dns-buy-domain).
+> 1. [Add your custom domain in Azure AD](../active-directory/fundamentals/add-custom-domain.md).
+> 1. Validate the domain name by [managing custom DNS records](/azure/app-service/manage-custom-dns-buy-domain#manage-custom-dns-records).
+
+## Step 2. Create a new Azure Front Door instance
-Follow the steps for [creating a Front Door for your application](../frontdoor/quickstart-create-front-door.md#create-a-front-door-for-your-application) using the default settings for the frontend host and routing rules.
+Follow these steps to create a Front Door for your Azure AD B2C tenant. For more information, see [creating a Front Door for your application](../frontdoor/quickstart-create-front-door.md#create-a-front-door-for-your-application).
+
-> [!IMPORTANT]
-> For these steps, after you sign in to the Azure portal in step 1, select **Directory + subscription** and choose the directory that contains the Azure subscription you'd like to use for Azure Front Door. This should *not* be the directory containing your Azure AD B2C tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **Directory + subscription** and choose the directory that contains the Azure subscription you'd like to use for Azure Front Door. The directory should *not* be the directory containing your Azure AD B2C tenant.
+1. From the home page or the Azure menu, select **Create a resource**. Select **Networking** > **See All** > **Front Door**.
+1. In the **Basics** tab of **Create a Front Door** page, enter or select the following information, and then select **Next: Configuration**.
+
+ | Setting | Value |
+ | | |
+ | **Subscription** | Select your Azure subscription. |
+ | **Resource group** | Select an existing resource group, or select **Create new** to create a new one.|
+ | **Resource group location** | Select the location of the resource group. For example, **Central US**. |
+
+### 2.1 Add frontend host
+
+The frontend host is the domain name used by your application. When you create a Front Door, the default frontend host is a subdomain of `azurefd.net`.
+
+Azure Front Door provides the option of associating a custom domain with the frontend host. With this option, you associate the Azure AD B2C user interface with a custom domain in your URL instead of a Front Door owned domain name. For example, https://login.contoso.com.
+
+To add a frontend host, follow these steps:
+
+1. In **Frontends/domains**, select **+** to open **Add a frontend host**.
+1. For **Host name**, enter a globally unique hostname. The host name is not your custom domain. This example uses *contoso-frontend*. Select **Add**.
+
+ ![Add a frontend host screenshot.](./media/custom-domain/add-frontend-host-azure-front-door.png)
+
+### 2.2 Add backend and backend pool
+
+A backend refers to your [Azure AD B2C tenant name](tenant-management.md#get-your-tenant-name), `tenant-name.b2clogin.com`. To add a backend pool, follow these steps:
+
+1. Still in **Create a Front Door**, in **Backend pools**, select **+** to open **Add a backend pool**.
+
+1. Enter a **Name**. For example, *myBackendPool*. Select **Add a backend**.
+
+ The following screenshot demonstrates how to create a backend pool:
+
+ ![Add a frontend backend pool screenshot.](./media/custom-domain/front-door-add-backend-pool.png)
+
+1. In the **Add a backend** blade, select the following information, and then select **Add**.
+
+ | Setting | Value |
+ | | |
+ | **Backend host type**| Select **Custom host**.|
+ | **Backend host name**| Select the name of your [Azure AD B2C tenant](tenant-management.md#get-your-tenant-name), `<tenant-name>.b2clogin.com`. For example, `contoso.b2clogin.com`.|
+ | **Backend host header**| Select the same value you selected for **Backend host name**.|
+
 *Leave all other fields at their default values.*
+
+ The following screenshot demonstrates how to create a custom host backend that is associated with an Azure AD B2C tenant:
+
+ ![Add a custom host backend screenshot.](./media/custom-domain/add-a-backend.png)
+
+1. To complete the configuration of the backend pool, on the **Add a backend pool** blade, select **Add**.
+
+1. After you add the **backend** to the **backend pool**, disable the **Health probes**.
+
+ ![Add a backend pool and disable the health probes screenshot.](./media/custom-domain/add-a-backend-pool.png)
+
+### 2.3 Add a routing rule
+
+Finally, add a routing rule. The routing rule maps your frontend host to the backend pool. The rule forwards a request for the [frontend host](#21-add-frontend-host) to the Azure AD B2C [backend](#22-add-backend-and-backend-pool). To add a routing rule, follow these steps:
+
+1. In **Add a rule**, for **Name**, enter *LocationRule*. Accept all the default values, then select **Add** to add the routing rule.
+1. Select **Review + Create**, and then **Create**.
+
+ ![Create Azure Front Door screenshot.](./media/custom-domain/configuration-azure-front-door.png)
+## Step 3. Set up your custom domain on Azure Front Door
-In the step **Add a backend**, use the following settings:
+In this step, you add the custom domain you registered in [Step 1](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) to your Front Door.
-* For **Backend host type**, select **Custom host**.
-* For **Backend host name**, select the hostname for your Azure AD B2C endpoint, <tenant-name>.b2clogin.com. For example, contoso.b2clogin.com.
-* For **Backend host header**, select the same value you selected for **Backend host name**.
+### 3.1 Create a CNAME DNS record
-![Add a backend](./media/custom-domain/add-a-backend.png)
+Before you can use a custom domain with your Front Door, you must first create a canonical name (CNAME) record with your domain provider to point to your Front Door's default frontend host (for example, `contoso.azurefd.net`).
-After you add the **backend** to the **backend pool**, disable the **Health probes**.
+A CNAME record is a type of DNS record that maps a source domain name to a destination domain name. For Azure Front Door, the source domain name is your custom domain name, and the destination domain name is your Front Door default hostname you configure in [step 2.1](#21-add-frontend-host).
-![Add a backend pool](./media/custom-domain/add-a-backend-pool.png)
+After Front Door verifies the CNAME record that you created, traffic addressed to the source custom domain (such as login.contoso.com) is routed to the specified destination Front Door default frontend host, such as `contoso.azurefd.net`. For more information, see [add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md).
-## Set up your custom domain on Azure Front Door
+To create a CNAME record for your custom domain:
-Follow the steps to [add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md). When creating the `CNAME` record for your custom domain, use the custom domain name you verified earlier in the [Add a custom domain name to your Azure AD](#add-a-custom-domain-name-to-your-tenant) step.
+1. Sign in to the web site of the domain provider for your custom domain.
-After the custom domain name is verified, select **Custom domain name HTTPS**. Then under the **Certificate management type**, select [Front Door management](../frontdoor/front-door-custom-domain-https.md#option-1-default-use-a-certificate-managed-by-front-door), or [Use my own certificate](../frontdoor/front-door-custom-domain-https.md#option-2-use-your-own-certificate).
+1. Find the page for managing DNS records by consulting the provider's documentation or searching for areas of the web site labeled **Domain Name**, **DNS**, or **Name Server Management**.
+
+1. Create a CNAME record entry for your custom domain and complete the fields as shown in the following table (field names may vary):
+
+ | Source | Type | Destination |
+ |--|-|--|
+ | `<login.contoso.com>` | CNAME | `contoso.azurefd.net` |
+
+ - Source: Enter your custom domain name (for example, login.contoso.com).
+
+ - Type: Enter *CNAME*.
+
+ - Destination: Enter the default Front Door frontend host you created in [step 2.1](#21-add-frontend-host). It must be in the format _&lt;hostname&gt;_.azurefd.net. For example, `contoso.azurefd.net`.
+
+1. Save your changes.
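
If your DNS provider lets you edit zone records directly, the same mapping can be expressed in standard zone-file syntax. The following line is an illustrative sketch that reuses the example names from this article; substitute your own custom domain and Front Door frontend host.

```
; Illustrative CNAME record: custom domain -> Front Door default frontend host
login.contoso.com.    3600    IN    CNAME    contoso.azurefd.net.
```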
+
+### 3.2 Associate the custom domain with your Front Door
+
+After you've registered your custom domain, you can then add it to your Front Door.
+
+1. On the **Front Door designer** page, select **+** to add a custom domain.
+
+1. For **Frontend host**, the frontend host to use as the destination domain of your CNAME record is pre-filled and is derived from your Front Door: *&lt;default hostname&gt;*.azurefd.net. It cannot be changed.
+
+1. For **Custom hostname**, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. For example, login.contoso.com.
+
+1. Select **Add**.
+
+ Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated.
+
+### 3.3 Update the routing rule
+
+1. In the **Routing rules**, select the routing rule you created in [step 2.3](#23-add-a-routing-rule).
+1. Under the **Frontends/domains**, select your custom domain name.
+
+ ![Update the Azure Front Door routing rule screenshot.](./media/custom-domain/update-routing-rule.png)
+
+1. Select **Update**.
+1. From the main window, select **Save**.
+
+### 3.4 Configure HTTPS on a Front Door custom domain
+
+After the custom domain name is verified, select **Custom domain name HTTPS**. Then under the **Certificate management type**, select [Front Door management](../frontdoor/front-door-custom-domain-https.md#option-1-default-use-a-certificate-managed-by-front-door), or [Use my own certificate](../frontdoor/front-door-custom-domain-https.md#option-2-use-your-own-certificate). If you choose the *Front Door managed* option, wait until the certificate is fully provisioned.
The following screenshot shows how to add a custom domain and enable HTTPS using an Azure Front Door certificate.

![Set up Azure Front Door custom domain](./media/custom-domain/azure-front-door-add-custom-domain.png)
-## Configure CORS
+
+## Step 4. Configure CORS
If you [customize the Azure AD B2C user interface](customize-ui-with-html.md) with an HTML template, you need to [Configure CORS](customize-ui-with-html.md?pivots=b2c-user-flow.md#3-configure-cors) with your custom domain.
Configure Azure Blob storage for Cross-Origin Resource Sharing with the following steps:

1. For **Max age**, enter 200.
1. Select **Save**.
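
If you prefer to manage the storage account's CORS settings outside the portal, the same rule can be expressed in the Blob service properties CORS XML. The following fragment is a minimal sketch; it assumes the example custom domain `https://login.contoso.com` used in this article and the max age value shown above.

```xml
<!-- Illustrative CORS rule for the Blob service that hosts the custom HTML templates. -->
<Cors>
  <CorsRule>
    <!-- Allow the custom domain that serves your Azure AD B2C pages. -->
    <AllowedOrigins>https://login.contoso.com</AllowedOrigins>
    <AllowedMethods>GET,OPTIONS</AllowedMethods>
    <AllowedHeaders>*</AllowedHeaders>
    <ExposedHeaders>*</ExposedHeaders>
    <MaxAgeInSeconds>200</MaxAgeInSeconds>
  </CorsRule>
</Cors>
```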
+## Test your custom domain
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Under **Policies**, select **User flows (policies)**.
+1. Select a user flow, and then select **Run user flow**.
+1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
+1. Select **Copy to clipboard**.
+
+ ![Copy the authorization request URI](./media/custom-domain/user-flow-run-now.png)
+
+1. In the **Run user flow endpoint** URL, replace the Azure AD B2C domain (`<tenant-name>.b2clogin.com`) with your custom domain.
+ For example, instead of:
+
+ ```http
+ https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login
+ ```
+
+ use:
+
+ ```http
+ https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login
+ ```
+1. Select **Run user flow**. Your Azure AD B2C policy should load.
+1. Sign in with an Azure AD B2C local account.
+1. Repeat the test with the rest of your policies.
## Configure your identity provider

When a user chooses to sign in with a social identity provider, Azure AD B2C initiates an authorization request and takes the user to the selected identity provider to complete the sign-in process. The authorization request specifies the `redirect_uri` with the Azure AD B2C default domain name:
The following example shows a valid OAuth redirect URI:
```http
https://login.contoso.com/contoso.onmicrosoft.com/oauth2/authresp
```
-If you choose to use the [tenant ID](#optional-use-tenant-id), a valid OAuth redirect URI would look like the following:
+If you choose to use the [tenant ID](#optional-use-tenant-id), a valid OAuth redirect URI would look like the following sample:
```http
https://login.contoso.com/11111111-1111-1111-1111-111111111111/oauth2/authresp
```
-The [SAML identity providers](saml-identity-provider-technical-profile.md) metadata would look like the following:
+The [SAML identity providers](saml-identity-provider-technical-profile.md) metadata would look like the following sample:
```http
https://<custom-domain-name>.b2clogin.com/<tenant-name>/<your-policy>/samlp/metadata?idptp=<your-technical-profile>
```
-## Test your custom domain
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
-1. In the Azure portal, search for and select **Azure AD B2C**.
-1. Under **Policies**, select **User flows (policies)**.
-1. Select a user flow, and then select **Run user flow**.
-1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
-1. Click **Copy to clipboard**.
-
- ![Copy the authorization request URI](./media/custom-domain/user-flow-run-now.png)
-
-1. In the **Run user flow endpoint** URL, replace the Azure AD B2C domain (<tenant-name>.b2clogin.com) with your custom domain.
- For example, instead of:
-
- ```http
- https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login
- ```
-
- use:
-
- ```http
- https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login
- ```
-1. Select **Run user flow**. Your Azure AD B2C policy should load.
-1. Sign-in with both local and social accounts.
-1. Repeat the test with the rest of your policies.
## Configure your application

After you configure and test the custom domain, you can update your applications to load the URL that specifies your custom domain as the hostname instead of the Azure AD B2C domain.
-The custom domain integration applies to authentication endpoints that use Azure AD B2C policies (user flows or custom policies) to authenticate users. These endpoints may look like the following:
+The custom domain integration applies to authentication endpoints that use Azure AD B2C policies (user flows or custom policies) to authenticate users. These endpoints may look like the following sample:
- <code>https://\<custom-domain\>/\<tenant-name\>/<b>\<policy-name\></b>/v2.0/.well-known/openid-configuration</code>
Replace:
- **policy-name** with your policy name. [Learn more about Azure AD B2C policies](technical-overview.md#identity-experiences-user-flows-or-custom-policies).
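
For example, with the illustrative values used elsewhere in this article (custom domain `login.contoso.com`, tenant `contoso.onmicrosoft.com`, and a user flow named *B2C_1_susi*), the metadata endpoint would look like the following sample:

```http
https://login.contoso.com/contoso.onmicrosoft.com/B2C_1_susi/v2.0/.well-known/openid-configuration
```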
-The [SAML service provider](./saml-service-provider.md) metadata may look like the following:
+The [SAML service provider](./saml-service-provider.md) metadata may look like the following sample:
```html
https://custom-domain-name/tenant-name/policy-name/Samlp/metadata
```
After you add the custom domain and configure your application, users will still
- **Symptom** - After you configure a custom domain, when you try to sign in with the custom domain, you get an HTTP 404 error message.
- **Possible causes** - This issue could be related to the DNS configuration or the Azure Front Door backend configuration.
- **Resolution**:
- 1. Make sure the custom domain is [registered and successfully verified](#add-a-custom-domain-name-to-your-tenant) in your Azure AD B2C tenant.
+ 1. Make sure the custom domain is [registered and successfully verified](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) in your Azure AD B2C tenant.
1. Make sure the [custom domain](../frontdoor/front-door-custom-domain.md) is configured properly. The `CNAME` record for your custom domain must point to your Azure Front Door default frontend host (for example, contoso.azurefd.net).
- 1. Make sure the [Azure Front Door backend pool configuration](#set-up-your-custom-domain-on-azure-front-door) points to the tenant where you set up the custom domain name, and where your user flow or custom policies are stored.
+ 1. Make sure the [Azure Front Door backend pool configuration](#22-add-backend-and-backend-pool) points to the tenant where you set up the custom domain name, and where your user flow or custom policies are stored.
### Identity provider returns an error

-- **Symptom** - After you configure a custom domain, you're able to sign in with local accounts. But when you sign in with credentials from external [social or enterprise identity providers](add-identity-provider.md), the identity providers presents an error message.
+- **Symptom** - After you configure a custom domain, you're able to sign in with local accounts. But when you sign in with credentials from external [social or enterprise identity providers](add-identity-provider.md), the identity provider presents an error message.
- **Possible causes** - When Azure AD B2C takes the user to sign in with a federated identity provider, it specifies the redirect URI. The redirect URI is the endpoint to where the identity provider returns the token. The redirect URI is the same domain your application uses with the authorization request. If the redirect URI is not yet registered in the identity provider, it may not trust the new redirect URI, which results in an error message.
- **Resolution** - Follow the steps in [Configure your identity provider](#configure-your-identity-provider) to add the new redirect URI.
Copy the URL, change the domain name manually, and then paste it back to your browser.
### Which IP address is presented to Azure AD B2C? The user's IP address, or the Azure Front Door IP address?
-Azure Front Door passes the user's original IP address. This is the IP address that you'll see in the audit reporting or your custom policy.
+Azure Front Door passes the user's original IP address. It's the IP address that you'll see in the audit reporting or your custom policy.
### Can I use a third-party web application firewall (WAF) with B2C?
-To use your own web application firewall in front of Azure Front Door, you need to configure and validate that everything works correctly with your Azure AD B2C user flows.
+To use your own web application firewall in front of Azure Front Door, you need to configure and validate that everything works correctly with your Azure AD B2C user flows or custom policies.
## Next steps
active-directory-b2c Display Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/display-controls.md
Previously updated : 12/11/2020 Last updated : 07/20/2021
The **ValidationClaimsExchange** element contains the following element:
| Element | Occurrences | Description |
| ------- | ----------- | ----------- |
-| ValidationTechnicalProfile | 1:n | A technical profile to be used for validating some or all of the display claims of the referencing technical profile. |
+| ValidationClaimsExchangeTechnicalProfile | 1:n | A technical profile to be used for validating some or all of the display claims of the referencing technical profile. |
-The **ValidationTechnicalProfile** element contains the following attributes:
+The **ValidationClaimsExchangeTechnicalProfile** element contains the following attribute:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
-| ReferenceId | Yes | An identifier of a technical profile already defined in the policy or parent policy. |
-|ContinueOnError|No| Indicates whether validation of any subsequent validation technical profiles should continue if this validation technical profile raises an error. Possible values: `true` or `false` (default, processing of further validation profiles will stop and an error will be returned). |
-|ContinueOnSuccess | No | Indicates whether validation of any subsequent validation profiles should continue if this validation technical profile succeeds. Possible values: `true` or `false`. The default is `true`, meaning that the processing of further validation profiles will continue. |
+| TechnicalProfileReferenceId | Yes | An identifier of a technical profile already defined in the policy or parent policy. |
-The **ValidationTechnicalProfile** element contains the following element:
+The **ValidationClaimsExchangeTechnicalProfile** element contains the following element:
| Element | Occurrences | Description |
| ------- | ----------- | ----------- |
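
For context, the following fragment is an illustrative sketch of where the element sits inside a display control definition. The control and technical profile IDs are placeholders, not values from this article.

```xml
<DisplayControl Id="emailVerificationControl" UserInterfaceControlType="VerificationControl">
  <DisplayClaims>
    <DisplayClaim ClaimTypeReferenceId="email" Required="true" />
    <DisplayClaim ClaimTypeReferenceId="verificationCode" ControlClaimType="VerificationCode" Required="true" />
  </DisplayClaims>
  <Actions>
    <Action Id="SendCode">
      <ValidationClaimsExchange>
        <!-- References a technical profile defined elsewhere in the policy. -->
        <ValidationClaimsExchangeTechnicalProfile TechnicalProfileReferenceId="SendOtpViaEmail" />
      </ValidationClaimsExchange>
    </Action>
  </Actions>
</DisplayControl>
```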
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-apple-id.md
To enable sign-in for users with an Apple ID in Azure Active Directory B2C (Azur
1. Select **Sign In with Apple**, and then select **Configure**.
1. Select the **Primary App ID** you want to configure Sign in with Apple with.
1. In **Domains and Subdomains**, enter `your-tenant-name.b2clogin.com`. Replace `your-tenant-name` with the name of your tenant. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
- 1. In **Return URLs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain.
+ 1. In **Return URLs**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain. The Return URL needs to be in all lower-case.
1. Select **Next**, and then select **Done**.
1. When the pop-up window is closed, select **Continue**, and then select **Save**.
active-directory-b2c String Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/string-transformations.md
Previously updated : 03/08/2021 Last updated : 07/20/2021
Use this claims transformation to parse the domain name after the @ symbol of th
- Output claims:
  - **domain**: outlook.com
+## SetClaimIfBooleansMatch
+
+Checks that a boolean claim is equal to `true` or `false`. If it is, sets the output claim with the value present in the `outputClaimIfMatched` input parameter.
+
+| Item | TransformationClaimType | Data Type | Notes |
+| - | -- | | -- |
+| InputClaim | claimToMatch | string | The claim type, which is to be checked. Null value throws an exception. |
+| InputParameter | matchTo | string | The value to be compared with `claimToMatch` input claim. Possible values: `true`, or `false`. |
+| InputParameter | outputClaimIfMatched | string | The value to be set if the input claim equals the `matchTo` input parameter. |
+| OutputClaim | outputClaim | string | If the `claimToMatch` input claim equals the `matchTo` input parameter, this output claim contains the value of the `outputClaimIfMatched` input parameter. |
+
+For example, the following claims transformation checks whether the value of the **hasPromotionCode** claim is equal to `true`. If it is, the transformation returns the value *Promotion code not found*.
+
+```xml
+<ClaimsTransformation Id="GeneratePromotionCodeError" TransformationMethod="SetClaimIfBooleansMatch">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="hasPromotionCode" TransformationClaimType="claimToMatch" />
+ </InputClaims>
+ <InputParameters>
+ <InputParameter Id="matchTo" DataType="string" Value="true" />
+ <InputParameter Id="outputClaimIfMatched" DataType="string" Value="Promotion code not found." />
+ </InputParameters>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="promotionCode" TransformationClaimType="outputClaim" />
+ </OutputClaims>
+</ClaimsTransformation>
+```
+
+### Example
+
+- Input claims:
+ - **claimToMatch**: true
+- Input parameters:
+ - **matchTo**: true
+ - **outputClaimIfMatched**: "Promotion code not found."
+- Output claims:
+ - **outputClaim**: "Promotion code not found."
## SetClaimsIfRegexMatch

Checks that a string claim `claimToMatch` and `matchTo` input parameter are equal, and sets the output claims with the value present in `outputClaimIfMatched` input parameter, along with compare result output claim, which is to be set as `true` or `false` based on the result of comparison.
active-directory-domain-services Password Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/password-policy.md
All users, regardless of how they're created, have the following account lockout
* **Account lockout duration:** 30
* **Number of failed logon attempts allowed:** 5
-* **Reset failed logon attempts count after:** 30 minutes
+* **Reset failed logon attempts count after:** 2 minutes
* **Maximum password age (lifetime):** 90 days

With these default settings, user accounts are locked out for 30 minutes if five invalid passwords are used within 2 minutes. Accounts are automatically unlocked after 30 minutes.
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
Title: 'Azure AD on-premises application provisioning architecture | Microsoft Docs'
-description: Describes overview of on-premises application provisioning architecture.
+description: Presents an overview of on-premises application provisioning architecture.
# Azure AD on-premises application provisioning architecture

>[!IMPORTANT]
-> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+> The on-premises provisioning preview is currently in an invitation-only preview. To request access to the capability, use the [access request form](https://aka.ms/onpremprovisioningpublicpreviewaccess). We'll open the preview to more customers and connectors over the next few months as we prepare for general availability (GA).
## Overview
-The following diagram shows an over view of how on-premises application provisioning works.
+The following diagram shows an overview of how on-premises application provisioning works.
-![Architecture](.\media\on-premises-application-provisioning-architecture\arch-3.png)
+![Diagram that shows the architecture for on-premises application provisioning.](.\media\on-premises-application-provisioning-architecture\arch-3.png)
-There are three primary components to provisioning users into an on-premises application.
+There are three primary components to provisioning users into an on-premises application:
-- The Provisioning agent provides connectivity between Azure AD and your on-premises environment.
-- The ECMA host converts provisioning requests from Azure AD to requests made to your target application. It serves as a gateway between Azure AD and your application. It allows you to import existing ECMA2 connectors used with Microsoft Identity Manager. Note, the ECMA host is not required if you have built a SCIM application or SCIM gateway.
+- The provisioning agent provides connectivity between Azure Active Directory (Azure AD) and your on-premises environment.
+- The ECMA host converts provisioning requests from Azure AD to requests made to your target application. It serves as a gateway between Azure AD and your application. You can use it to import existing ECMA2 connectors used with Microsoft Identity Manager. The ECMA host isn't required if you've built a SCIM application or SCIM gateway.
- The Azure AD provisioning service serves as the synchronization engine.

>[!NOTE]
-> MIM Sync is not required. However, you can use MIM sync to build and test your ECMA connector before importing it into the ECMA host.
+> Microsoft Identity Manager Synchronization isn't required. But you can use it to build and test your ECMA connector before you import it into the ECMA host.
### Firewall requirements
-You do not need to open inbound connections to the corporate network. The provisioning agents only use outbound connections to the provisioning service, which means that there is no need to open firewall ports for incoming connections. You also do not need a perimeter (DMZ) network because all connections are outbound and take place over a secure channel.
+You don't need to open inbound connections to the corporate network. The provisioning agents only use outbound connections to the provisioning service, which means there's no need to open firewall ports for incoming connections. You also don't need a perimeter (DMZ) network because all connections are outbound and take place over a secure channel.
## Agent best practices

-- Ensure the auto Azure AD Connect Provisioning Agent Auto Update service is running. It is enabled by default when installing the agent. Auto update is required for Microsoft to support your deployment.
+- Ensure the auto Azure AD Connect Provisioning Agent Auto Update service is running. It's enabled by default when you install the agent. Auto-update is required for Microsoft to support your deployment.
- Avoid all forms of inline inspection on outbound TLS communications between agents and Azure. This type of inline inspection causes degradation to the communication flow.
-- The agent has to communicate with both Azure and your application, so the placement of the agent affects the latency of those two connections. You can minimize the latency of the end-to-end traffic by optimizing each network connection. Each connection can be optimized by:
-
-- Reducing the distance between the two ends of the hop.
-- Choosing the right network to traverse. For example, traversing a private network rather than the public Internet may be faster, due to dedicated links.
+- The agent must communicate with both Azure and your application, so the placement of the agent affects the latency of those two connections. You can minimize the latency of the end-to-end traffic by optimizing each network connection. Each connection can be optimized by:
+ - Reducing the distance between the two ends of the hop.
+ - Choosing the right network to traverse. For example, traversing a private network rather than the public internet might be faster because of dedicated links.
-## Provisioning Agent questions
-**What is the GA version of the Provisioning Agent?**
+## Provisioning agent questions
+Some common questions are answered here.
-Refer to [Azure AD Connect Provisioning Agent: Version release history](provisioning-agent-release-version-history.md) for the latest GA version of the Provisioning Agent.
+### What is the GA version of the provisioning agent?
-**How do I know the version of my Provisioning Agent?**
+For the latest GA version of the provisioning agent, see [Azure AD connect provisioning agent: Version release history](provisioning-agent-release-version-history.md).
- 1. Sign in to the Windows server where the Provisioning Agent is installed.
- 2. Go to Control Panel -> Uninstall or Change a Program menu
- 3. Look for the version corresponding to the entry Microsoft Azure AD Connect Provisioning Agent
+### How do I know the version of my provisioning agent?
-**Does Microsoft automatically push Provisioning Agent updates?**
+ 1. Sign in to the Windows server where the provisioning agent is installed.
+ 1. Go to **Control Panel** > **Uninstall or Change a Program**.
+ 1. Look for the version that corresponds to the entry for **Microsoft Azure AD Connect Provisioning Agent**.
-Yes, Microsoft automatically updates the provisioning agent if the Windows service Microsoft Azure AD Connect Agent Updater is up and running. Ensuring that your agent is up to date is required for support to troubleshoot issues.
+### Does Microsoft automatically push provisioning agent updates?
-**Can I install the Provisioning Agent on the same server running Azure AD Connect or Microsoft Identity Manager (MIM)?**
+Yes. Microsoft automatically updates the provisioning agent if the Windows service Microsoft Azure AD Connect Agent Updater is up and running. Ensuring that your agent is up to date is required for support to troubleshoot issues.
-Yes, you can install the Provisioning Agent on the same server that runs Azure AD Connect or MIM, but they are not required.
+### Can I install the provisioning agent on the same server running Azure AD Connect or Microsoft Identity Manager?
-**How do I configure the Provisioning Agent to use a proxy server for outbound HTTP communication?**
+Yes. You can install the provisioning agent on the same server that runs Azure AD Connect or Microsoft Identity Manager, but they aren't required.
+
+### How do I configure the provisioning agent to use a proxy server for outbound HTTP communication?
+
+The provisioning agent supports use of outbound proxy. You can configure it by editing the agent config file **C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config**. Add the following lines into it toward the end of the file just before the closing `</configuration>` tag. Replace the variables `[proxy-server]` and `[proxy-port]` with your proxy server name and port values.
-The Provisioning Agent supports use of outbound proxy. You can configure it by editing the agent config file **C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\AADConnectProvisioningAgent.exe.config**. Add the following lines into it, towards the end of the file just before the closing </configuration> tag. Replace the variables [proxy-server] and [proxy-port] with your proxy server name and port values.
```
    <system.net>
        <defaultProxy enabled="true" useDefaultCredentials="true">
            <proxy
                usesystemdefault="true"
                proxyaddress="http://[proxy-server]:[proxy-port]"
                bypassonlocal="true"
            />
        </defaultProxy>
    </system.net>
```
-**How do I ensure that the Provisioning Agent is able to communicate with the Azure AD tenant and no firewalls are blocking ports required by the agent?**
+### How do I ensure the provisioning agent can communicate with the Azure AD tenant and no firewalls are blocking ports required by the agent?
-You can also check whether all of the required ports are open.
+You can also check whether all the required ports are open.
-**How do I uninstall the Provisioning Agent?**
-1. Sign in to the Windows server where the Provisioning Agent is installed.
-2. Go to Control Panel -> Uninstall or Change a Program menu
-3. Uninstall the following programs:
+### How do I uninstall the provisioning agent?
+1. Sign in to the Windows server where the provisioning agent is installed.
+1. Go to **Control Panel** > **Uninstall or Change a Program**.
+1. Uninstall the following programs:
   - Microsoft Azure AD Connect Provisioning Agent
   - Microsoft Azure AD Connect Agent Updater
   - Microsoft Azure AD Connect Provisioning Agent Package
-## Next Steps
+## Next steps
- [App provisioning](user-provisioning.md)
- [Azure AD ECMA Connector Host prerequisites](on-premises-ecma-prerequisites.md)
active-directory On Premises Ecma Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-configure.md
-# Configure the Azure AD ECMA Connector Host and the provisioning agent.
+# Configure the Azure AD ECMA Connector Host and the provisioning agent
>[!IMPORTANT]
-> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+> The on-premises provisioning preview is currently in an invitation-only preview. To request access to the capability, use the [access request form](https://aka.ms/onpremprovisioningpublicpreviewaccess). We'll open the preview to more customers and connectors over the next few months as we prepare for general availability.
-This article provides guidance on how to configure the Azure AD ECMA Connector Host and the provisioning agent once you have successfully installed them.
+This article provides guidance on how to configure the Azure Active Directory (Azure AD) ECMA Connector Host and the provisioning agent after you've successfully installed them.
-Installing and configuring the Azure AD ECMA Connector Host is a process. Use the flow below to guide you through the process.
+This flow guides you through the process of installing and configuring the Azure AD ECMA Connector Host.
- ![Installation flow](./media/on-premises-ecma-configure/flow-1.png)
+ ![Diagram that shows the installation flow.](./media/on-premises-ecma-configure/flow-1.png)
-For more installation and configuration information see:
+For more installation and configuration information, see:
 - [Prerequisites for the Azure AD ECMA Connector Host](on-premises-ecma-prerequisites.md)
 - [Installation of the Azure AD ECMA Connector Host](on-premises-ecma-install.md)
- - [Azure AD ECMA Connector Host generic SQL connector configuration](on-premises-sql-connector-configure.md)
+ - [Azure AD ECMA Connector Host generic SQL connector configuration](on-premises-sql-connector-configure.md)
+
## Configure the Azure AD ECMA Connector Host
-Configuring the Azure AD ECMA Connector Host occurs in 2 parts.
+Configuring the Azure AD ECMA Connector Host occurs in two parts:
- - **Configure the settings** - configure the port and certificate for the Azure AD ECMA Connector Host to use. This is only done the first time the ECMA Connector Host is started.
- - **Create a connector** - create a connector (for example, SQL or LDAP) to allow the Azure AD ECMA Connector Host to export or import data to a data source.
+ - **Configure the settings**: Configure the port and certificate for the Azure AD ECMA Connector Host to use. This step is only done the first time the ECMA Connector Host is started.
+ - **Create a connector**: Create a connector (for example, SQL or LDAP) to allow the Azure AD ECMA Connector Host to export or import data to a data source.
-### Configure the Settings
-When you first start the Azure AD ECMA Connector Host you will see a port number which will already be filled out using the default 8585.
+### Configure the settings
+When you first start the Azure AD ECMA Connector Host, you'll see a port number that's filled with the default **8585**.
- ![Configure your settings](.\media\on-premises-ecma-configure\configure-1.png)
+ ![Screenshot that shows configuring your settings.](.\media\on-premises-ecma-configure\configure-1.png)
-For the preview, you will need to generate a new self-signed certificate.
+For the preview, you'll need to generate a new self-signed certificate.
>[!NOTE]
- >This preview uses a time-sensitive cerfiticate. The auto-generated certificate will be self-signed, part of the trusted root and the SAN matches the hostname.
+ >This preview uses a time-sensitive certificate. The autogenerated certificate will be self-signed. Part of the trusted root and the SAN matches the hostname.
### Create a connector
-Now you must create a connector for the Azure AD ECMA Connector Host to use. This connector will allow the ECMA Connector Host to export (and import if desired) data to the data source for the connector you create.
+Now you must create a connector for the Azure AD ECMA Connector Host to use. This connector will allow the ECMA Connector Host to export data to the data source for the connector you create. You can also use it to import data if you want.
The configuration steps for each of the individual connectors are longer and are provided in their own documents.
-Use one of the links below to create and configure a connector.
+To create and configure a connector, use the [generic SQL connector](on-premises-sql-connector-configure.md). This connector will work with Microsoft SQL databases, such as Azure SQL Database or Azure Database for MySQL.
-- [Generic SQL connector](on-premises-sql-connector-configure.md) - a connector that will work with SQL databases such as Microsoft SQL or MySQL.
+## Establish connectivity between Azure AD and the Azure AD ECMA Connector Host
+The following sections guide you through establishing connectivity with the on-premises Azure AD ECMA Connector Host and Azure AD.
+### Ensure the ECMA2Host service is running
+1. On the server running the Azure AD ECMA Connector Host, select **Start**.
+1. Enter **run**, and enter **services.msc** in the box.
+1. In the **Services** list, ensure that **Microsoft ECMA2Host** is present and running. If not, select **Start**.
-## Establish connectivity between Azure AD and the Azure AD ECMA Connector Host
-The following sections will guide you through establishing connectivity with the on-premises Azure AD ECMA Connector Host and Azure AD.
+ ![Screenshot that shows that the service is running.](.\media\on-premises-ecma-configure\configure-2.png)
-#### Ensure ECMA2Host service is running
-1. On the server the running the Azure AD ECMA Connector Host, click Start.
-2. Type run and enter services.msc in the box
-3. In the services, ensure that **Microsoft ECMA2Host** is present and running. If not, click **Start**.
- ![Service is running](.\media\on-premises-ecma-configure\configure-2.png)
+### Add an enterprise application
+1. Sign in to the Azure portal as an application administrator.
+1. In the portal, go to **Azure Active Directory** > **Enterprise applications**.
+1. Select **New application**.
-#### Add Enterprise application
-1. Sign-in to the Azure portal as an application administrator
-2. In the portal, navigate to Azure Active Directory, **Enterprise Applications**.
-3. Click on **New Application**.
- ![Add new application](.\media\on-premises-ecma-configure\configure-4.png)
-4. Locate the "On-premises provisioning" application from the gallery and click **Create**.
+ ![Screenshot that shows Add new application.](.\media\on-premises-ecma-configure\configure-4.png)
+1. Locate the **On-premises provisioning** application from the gallery, and select **Create**.
### Configure the application and test
- 1. Once it has been created, click he **Provisioning page**.
- 2. Click **get started**.
- ![get started](.\media\on-premises-ecma-configure\configure-6.png)
- 3. On the **Provisioning page**, change the mode to **Automatic**
- ![Change mode](.\media\on-premises-ecma-configure\configure-7.png)
- 4. In the on-premises connectivity section, select the agent that you just deployed and click assign agent(s).
- ![Assign an agent](.\media\on-premises-ecma-configure\configure-8.png)</br>
-
- >[!NOTE]
- >After adding the agent, you need to wait 10-20 minutes for the registration to complete. The connectivity test will not work until the registration completes.
- >
- >Alternatively, you can force the agent registration to complete by restarting the provisioning agent on your server. Navigating to your server > search for services in the windows search bar > identify the Azure AD Connect Provisioning Agent Service > right click on the service and restart.
+ 1. After the application is created, select the **Provisioning** page.
+ 1. Select **Get started**.
+ ![Screenshot that shows Get started.](.\media\on-premises-ecma-configure\configure-6.png)
+ 1. On the **Provisioning** page, change **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot that shows changing the mode.](.\media\on-premises-ecma-configure\configure-7.png)
+ 1. In the **On-Premises Connectivity** section, select the agent that you deployed and select **Assign Agent(s)**.
+
+ ![Screenshot that shows Assign an agent.](.\media\on-premises-ecma-configure\configure-8.png)</br>
+
+ >[!NOTE]
+ >After you add the agent, wait 10 to 20 minutes for the registration to complete. The connectivity test won't work until the registration completes.
+ >
+ >Alternatively, you can force the agent registration to complete by restarting the provisioning agent on your server. Go to your server, search for **services** in the Windows search bar, identify the **Azure AD Connect Provisioning Agent Service**, right-click the service, and restart.
- 5. After 10 minutes, under the **Admin credentials** section, enter the following URL, replacing "connectorName" portion with the name of the connector on the ECMA Host.
+ 1. After 10 minutes, under the **Admin Credentials** section, enter the following URL. Replace the `"connectorName"` portion with the name of the connector on the ECMA Host.
    |Property|Value|
    |--|--|
    |Tenant URL|https://localhost:8585/ecma2host_connectorName/scim|
- 6. Enter the secret token value that you defined when creating the connector.
- 7. Click Test Connection and wait one minute.
- ![Test the connection](.\media\on-premises-ecma-configure\configure-5.png)
+ 1. Enter the secret token value that you defined when you created the connector.
+ 1. Select **Test Connection** and wait one minute.
+
+ ![Screenshot that shows Test Connection.](.\media\on-premises-ecma-configure\configure-5.png)
>[!NOTE]
- >Be sure to wait 10-20 minutes after assigning the agent to test the connection. The connection will fail if registration has not completed.
- 8. Once connection test is successful, click **save**.</br>
- ![Successful test](.\media\on-premises-ecma-configure\configure-9.png)
+ >Be sure to wait 10 to 20 minutes after you assign the agent to test the connection. The connection will fail if registration hasn't finished.
+
+ 1. After the connection test is successful, select **Save**.</br>
+
+ ![Screenshot that shows Successful test.](.\media\on-premises-ecma-configure\configure-9.png)
-## Configure who is in scope for provisioning
-Now that you have the Azure AD ECMA Connector Host talking with Azure AD you can move on to configuring who is in scope for provisioning. The sections below will provide information on how scope your users.
+## Configure who's in scope for provisioning
+Now that you have the Azure AD ECMA Connector Host talking with Azure AD, you can move on to configuring who's in scope for provisioning. The following sections provide information on how to scope your users.
### Assign users to your application
-Azure AD allows you to scope who should be provisioned based on assignment to an application and / or by filtering on a particular attribute. Determine who should be in scope for provisioning and define your scoping rules as necessary. For more information, see [Manage user assignment for an app in Azure Active Directory](../../active-directory/manage-apps/assign-user-or-group-access-portal.md).
+By using Azure AD, you can scope who should be provisioned based on assignment to an application or by filtering on a particular attribute. Determine who should be in scope for provisioning, and define your scoping rules, as necessary. For more information, see [Manage user assignment for an app in Azure Active Directory](../../active-directory/manage-apps/assign-user-or-group-access-portal.md).
### Configure your attribute mappings
-You will need to map the user attributes in Azure AD to the attributes in the target application. The Azure AD Provisioning service relies on the SCIM standard for provisioning and as a result, the attributes surfaced have the SCIM name space. The example below shows how you can map the mail and objectId attributes in Azure AD to the Email and InternalGUID attributes in an application.
+Now you map the user attributes in Azure AD to the attributes in the target application. The Azure AD provisioning service relies on the SCIM standard for provisioning. As a result, the attributes surfaced have the SCIM name space. The following example shows how you can map the **mail** and **objectId** attributes in Azure AD to the **Email** and **InternalGUID** attributes in an application.
>[!NOTE]
->The default mapping contains userPrincipalName to an attribute name PLACEHOLDER. You will need to change the PLACEHOLDER attribute to one that is found in your application. For more information, see [Matching users in the source and target systems](customize-application-attributes.md#matching-users-in-the-source-and-target--systems).
+>The default mapping connects **userPrincipalName** to an attribute name *PLACEHOLDER*. You must change the *PLACEHOLDER* attribute to one that's found in your application. For more information, see [Matching users in the source and target systems](customize-application-attributes.md#matching-users-in-the-source-and-target--systems).
|Attribute name in Azure AD|Attribute name in SCIM|Attribute name in target application|
|--|--|--|
|mail|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:Email|Email|
|objectId|urn:ietf:params:scim:schemas:extension:ECMA2Host:2.0:User:InternalGUID|InternalGUID|
-#### Configure attribute mapping
- 1. In the Azure AD portal, under **Enterprise applications**, click he **Provisioning page**.
- 2. Click **get started**.
- 3. Expand **Mappings** and click **Provision Azure Active Directory Users**
- ![provision a user](.\media\on-premises-ecma-configure\configure-10.png)
- 4. Click **Add new mapping**
- ![Add a mapping](.\media\on-premises-ecma-configure\configure-11.png)
- 5. Specify the source and target attributes and click **OK**.</br>
- ![Edit attributes](.\media\on-premises-ecma-configure\configure-12.png)
+### Configure attribute mapping
+ 1. In the Azure AD portal, under **Enterprise applications**, select the **Provisioning** page.
+ 2. Select **Get started**.
+ 3. Expand **Mappings**, and select **Provision Azure Active Directory Users**.
+
+ ![Screenshot that shows Provision Azure Active Directory Users.](.\media\on-premises-ecma-configure\configure-10.png)
+ 1. Select **Add New Mapping**.
+
+ ![Screenshot that shows Add New Mapping.](.\media\on-premises-ecma-configure\configure-11.png)
+ 1. Specify the source and target attributes, and select **OK**.</br>
+
+ ![Screenshot that shows the Edit Attribute pane.](.\media\on-premises-ecma-configure\configure-12.png)
For more information on mapping user attributes from applications to Azure AD, see [Tutorial - Customize user provisioning attribute-mappings for SaaS applications in Azure Active Directory](customize-application-attributes.md).

### Test your configuration by provisioning users on demand
-To test your configuration, you can use on-demand provisioning of user. For information on provisioning users on-demand see [On-demand provisioning](provision-on-demand.md).
+To test your configuration, you can use on-demand provisioning of users. For information on provisioning users on-demand, see [On-demand provisioning](provision-on-demand.md).
- 1. Navigate to the single sign-on blade and then back to the provisioning blade. From the new provisioning overview blade, click on on-demand.
- 2. Test provisioning a few users on-demand as described [here](provision-on-demand.md).
- ![Test provisioning](.\media\on-premises-ecma-configure\configure-13.png)
+ 1. Go to the single sign-on pane, and then go back to the provisioning pane. On the new provisioning overview pane, select **On-demand**.
+ 1. Test provisioning a few users on demand as described in [On-demand provisioning in Azure Active Directory](provision-on-demand.md).
+
+ ![Screenshot that shows testing provisioning.](.\media\on-premises-ecma-configure\configure-13.png)
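+
+If you prefer to script the same test, on-demand provisioning can also be triggered through the Microsoft Graph `provisionOnDemand` action. The following is a minimal sketch rather than a documented step: `$token`, `$servicePrincipalId`, `$jobId`, `$ruleId`, and `$userObjectId` are placeholder assumptions, and the call requires an access token with the Synchronization.ReadWrite.All permission.
+
+```powershell
+# Minimal sketch: trigger on-demand provisioning for a single user via Microsoft Graph.
+# $token, $servicePrincipalId, $jobId, $ruleId, and $userObjectId are placeholders.
+$headers = @{ Authorization = "Bearer $token"; 'Content-Type' = 'application/json' }
+$uri = "https://graph.microsoft.com/v1.0/servicePrincipals/$servicePrincipalId/synchronization/jobs/$jobId/provisionOnDemand"
+
+$body = @{
+    parameters = @(
+        @{
+            ruleId   = $ruleId
+            subjects = @(
+                @{ objectId = $userObjectId; objectTypeName = 'User' }
+            )
+        }
+    )
+} | ConvertTo-Json -Depth 10
+
+# The response describes the provisioning result for the selected user.
+Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -Body $body
+```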
### Start provisioning users
- 1. Once on-demand provisioning is successful, change back to the provisioning configuration page. Ensure that the scope is set to only assigned users and group, turn **provisioning On**, and click **Save**.
- ![Start provisioning](.\media\on-premises-ecma-configure\configure-14.png)
- 2. Wait several minutes for provisioning to start (it may take up to 40 minutes). You can learn more about the provisioning service performance here. After the provisioning job has been completed, as described in the next section, you can change the provisioning status to Off, and click Save. This will stop the provisioning service from running in the future.
+ 1. After on-demand provisioning is successful, go back to the provisioning configuration page. Ensure that the scope is set to only assigned users and groups, turn the provisioning status to **On**, and select **Save**.
+
+ ![Screenshot that shows starting provisioning.](.\media\on-premises-ecma-configure\configure-14.png)
-### Verify users have been successfully provisioned
+1. Wait several minutes for provisioning to start. It might take up to 40 minutes. After the provisioning job has completed, as described in the next section, you can change the provisioning status to **Off**, and select **Save**. This step will stop the provisioning service from running in the future.
+
+### Verify users were successfully provisioned
After waiting, check your data source to see if new users are being provisioned.
- ![Verify users are provisioned](.\media\on-premises-ecma-configure\configure-15.png)
+
+ ![Screenshot that shows verifying that users are provisioned.](.\media\on-premises-ecma-configure\configure-15.png)
## Monitor your deployment
-1. Use the provisioning logs to determine which users have been provisioned successfully or unsuccessfully.
-2. Build custom alerts, dashboards, and queries using the Azure Monitor integration.
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states here.
+1. Use the provisioning logs to determine which users were provisioned successfully or unsuccessfully.
+1. Build custom alerts, dashboards, and queries by using the Azure Monitor integration.
+1. If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about [quarantine states](application-provisioning-quarantine-status.md).
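+
+As a sketch of how the same data can be pulled programmatically (an assumption, not a step from this article), the provisioning logs are also exposed through the Microsoft Graph `auditLogs/provisioning` endpoint. `$token` is a placeholder for an access token with the AuditLog.Read.All permission.
+
+```powershell
+# Minimal sketch: read the most recent provisioning log entries from Microsoft Graph.
+# $token is a placeholder for a valid access token.
+$headers = @{ Authorization = "Bearer $token" }
+$uri = "https://graph.microsoft.com/v1.0/auditLogs/provisioning?`$top=20"
+
+$logs = Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
+
+# Show when each event happened, what was attempted, and whether it succeeded.
+$logs.value |
+    Select-Object activityDateTime, provisioningAction,
+        @{ Name = 'status'; Expression = { $_.provisioningStatusInfo.status } }
+```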
-## Next Steps
+## Next steps
- [Azure AD ECMA Connector Host prerequisites](on-premises-ecma-prerequisites.md)
- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
-- [Generic SQL Connector](on-premises-sql-connector-configure.md)
+- [Generic SQL connector](on-premises-sql-connector-configure.md)
active-directory On Premises Ecma Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-install.md
# Installation of the Azure AD ECMA Connector Host

>[!IMPORTANT]
-> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+> The on-premises provisioning preview is currently in an invitation-only preview. To request access to the capability, use the [access request form](https://aka.ms/onpremprovisioningpublicpreviewaccess). We'll open the preview to more customers and connectors over the next few months as we prepare for general availability.
-The Azure AD ECMA Connector Host is included and part of the Azure AD Connect Provisioning Agent Package. The provisioning agent and Azure AD ECMA Connector Host are two separate windows services that are installed using one installer, deployed on the same machine.
+The Azure Active Directory (Azure AD) ECMA Connector Host is included as part of the Azure AD Connect Provisioning Agent Package. The provisioning agent and Azure AD ECMA Connector Host are two separate Windows services. They're installed by using one installer, which is deployed on the same machine.
-Installing and configuring the Azure AD ECMA Connector Host is a process. Use the flow below to guide you through the process.
+This flow guides you through the process of installing and configuring the Azure AD ECMA Connector Host.
- ![Installation flow](./media/on-premises-ecma-install/flow-1.png)
+ ![Diagram that shows the installation flow.](./media/on-premises-ecma-install/flow-1.png)
+
+For more installation and configuration information, see:
-For more installation and configuration information see:
- [Prerequisites for the Azure AD ECMA Connector Host](on-premises-ecma-prerequisites.md)
- [Configure the Azure AD ECMA Connector Host and the provisioning agent](on-premises-ecma-configure.md)
- - [Azure AD ECMA Connector Host generic SQL connector configuration](on-premises-sql-connector-configure.md)
-
+ - [Azure AD ECMA Connector Host generic SQL connector configuration](on-premises-sql-connector-configure.md)
## Download and install the Azure AD Connect Provisioning Agent Package
- 1. Sign into the Azure portal
- 2. Navigate to enterprise applications > Add a new application
- 3. Search for the "On-premises provisioning" application and add it to your tenant image
- 4. Navigate to the provisioning blade
- 5. Click on on-premises connectivity
- 6. Download the agent installer
- 7. Run the Azure AD Connect provisioning installer AADConnectProvisioningAgentSetup.msi.
- 8. On the **Microsoft Azure AD Connect Provisioning Agent Package** screen, accept the licensing terms and select **Install**.
- ![Microsoft Azure AD Connect Provisioning Agent Package screen](media/on-premises-ecma-install/install-1.png)</br>
- 9. After this operation finishes, the configuration wizard starts. Click **Next**.
- ![Welcome screen](media/on-premises-ecma-install/install-2.png)</br>
- 10. On the **Select Extension** screen, select **On-premises application provisioning (Azure AD to application)** and click **Next**.
- ![Select extension](media/on-premises-ecma-install/install-3.png)</br>
- 12. Use your global administrator account and sign in to Azure AD.
- ![Azure signin](media/on-premises-ecma-install/install-4.png)</br>
- 13. On the **Agent Configuration** screen, click **Confirm**.
- ![Confirm installation](media/on-premises-ecma-install/install-5.png)</br>
- 14. Once the installation is complete, you should see a message at the bottom of the wizard. Click **Finish**.
- ![Click finish](media/on-premises-ecma-install/install-6.png)</br>
- 15. Click **Close**.
-
-Now that the agent package has been successfully installed, you will need to configure the Azure AD ECMA Connector Host and create or import connectors.
-## Next Steps
+ 1. Sign in to the Azure portal.
+ 1. Go to **Enterprise applications** > **Add a new application**.
+ 1. Search for the **On-premises provisioning** application, and add it to your tenant image.
+ 1. Go to the **Provisioning** pane.
+ 1. Select **On-premises connectivity**.
+ 1. Download the agent installer.
+ 1. Run the Azure AD Connect provisioning installer **AADConnectProvisioningAgentSetup.msi**.
+ 1. On the **Microsoft Azure AD Connect Provisioning Agent Package** screen, accept the licensing terms, and select **Install**.
+
+ ![Microsoft Azure AD Connect Provisioning Agent Package screen.](media/on-premises-ecma-install/install-1.png)</br>
+ 1. After this operation finishes, the configuration wizard starts. Select **Next**.
+
+ ![Screenshot that shows the Welcome screen.](media/on-premises-ecma-install/install-2.png)</br>
+
+ 1. On the **Select Extension** screen, select **On-premises application provisioning (Azure AD to application)**. Select **Next**.
+
+ ![Screenshot that shows Select extension.](media/on-premises-ecma-install/install-3.png)</br>
+ 1. Use your global administrator account to sign in to Azure AD.
+
+ ![Screenshot that shows Azure sign-in.](media/on-premises-ecma-install/install-4.png)</br>
+ 1. On the **Agent configuration** screen, select **Confirm**.
+
+ ![Screenshot that shows Confirm installation.](media/on-premises-ecma-install/install-5.png)</br>
+ 1. After the installation is complete, you should see a message at the bottom of the wizard. Select **Exit**.
+
+ ![Screenshot that shows finishing.](media/on-premises-ecma-install/install-6.png)</br>
+
+Now that the agent package has been successfully installed, you need to configure the Azure AD ECMA Connector Host and create or import connectors.
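+
+As a quick post-install sanity check (a sketch, not part of the official steps), you can confirm that the two Windows services were created and are running. The display-name wildcards below are assumptions; confirm the exact service names in **services.msc** on your server.
+
+```powershell
+# Minimal sketch: list the provisioning agent and ECMA Connector Host services.
+# The display-name filters are assumptions; adjust them to match your server.
+Get-Service |
+    Where-Object { $_.DisplayName -like '*Provisioning Agent*' -or $_.DisplayName -like '*ECMA*' } |
+    Select-Object Status, Name, DisplayName
+```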
+
+## Next steps
- [Azure AD ECMA Connector Host prerequisites](on-premises-ecma-prerequisites.md)
- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
-- [Generic SQL Connector](on-premises-sql-connector-configure.md)
+- [Generic SQL connector](on-premises-sql-connector-configure.md)
active-directory On Premises Ecma Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-prerequisites.md
# Prerequisites for the Azure AD ECMA Connector Host

>[!IMPORTANT]
-> The on-premises provisioning preview is currently in an invitation-only preview. You can request access to the capability [here](https://aka.ms/onpremprovisioningpublicpreviewaccess). We will open the preview to more customers and connectors over the next few months as we prepare for general availability.
+> The on-premises provisioning preview is currently in an invitation-only preview. To request access to the capability, use the [access request form](https://aka.ms/onpremprovisioningpublicpreviewaccess). We'll open the preview to more customers and connectors over the next few months as we prepare for general availability.
-This article provides guidance on the prerequisites that are needed for using the Azure AD ECMA Connector Host.
+This article provides guidance on the prerequisites that are needed for using the Azure Active Directory (Azure AD) ECMA Connector Host.
-Installing and configuring the Azure AD ECMA Connector Host is a process. Use the flow below to guide you through the process.
+This flow guides you through the process of installing and configuring the Azure AD ECMA Connector Host.
- ![Installation flow](./media/on-premises-ecma-prerequisites/flow-1.png)
+ ![Diagram that shows the installation flow.](./media/on-premises-ecma-prerequisites/flow-1.png)
For more installation and configuration information, see:
+
 - [Installation of the Azure AD ECMA Connector Host](on-premises-ecma-install.md)
 - [Configure the Azure AD ECMA Connector Host and the provisioning agent](on-premises-ecma-configure.md)
 - [Azure AD ECMA Connector Host generic SQL connector configuration](on-premises-sql-connector-configure.md)
-## On-premises pre-requisites
+## On-premises prerequisites
+ - A target system, such as a SQL database, in which users can be created, updated, and deleted.
+ - An ECMA 2.0 or later connector for that target system, which supports export, schema retrieval, and optionally full import or delta import operations. If you don't have an ECMA connector ready during configuration, you can validate the end-to-end flow if you have a SQL Server instance in your environment and use the generic SQL connector.
+ - A Windows Server 2016 or later computer with an internet-accessible TCP/IP address, connectivity to the target system, and with outbound connectivity to login.microsoftonline.com. An example is a Windows Server 2016 virtual machine hosted in Azure IaaS or behind a proxy. The server should have at least 3 GB of RAM.
+ - A computer with .NET Framework 4.7.1. (A quick way to spot-check these server prerequisites is sketched after this list.)
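+
+The following is a minimal sketch, not part of the official guidance, that spot-checks a few of the prerequisites above: the operating system version, installed memory, the .NET Framework release, and outbound connectivity to login.microsoftonline.com.
+
+```powershell
+# Minimal sketch: spot-check the on-premises server prerequisites listed above.
+
+# Operating system version (Windows Server 2016 is build 14393 or later).
+$os = Get-CimInstance Win32_OperatingSystem
+"OS: $($os.Caption) (build $($os.BuildNumber))"
+
+# Installed memory in GB (the server should have at least 3 GB of RAM).
+"RAM (GB): {0:N1}" -f ($os.TotalVisibleMemorySize / 1MB)
+
+# .NET Framework release key; 461308 or higher indicates .NET Framework 4.7.1 or later.
+$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
+".NET Framework release: $release"
+
+# Outbound connectivity to the Azure AD sign-in endpoint.
+Test-NetConnection -ComputerName 'login.microsoftonline.com' -Port 443 |
+    Select-Object ComputerName, RemotePort, TcpTestSucceeded
+```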
## Cloud requirements

 - An Azure AD tenant with Azure AD Premium P1 or Premium P2 (or EMS E3 or E5).
+
[!INCLUDE [active-directory-p1-license.md](../../../includes/active-directory-p1-license.md)]
+ - The Hybrid Administrator role for configuring the provisioning agent and the Application Administrator or Cloud Administrator roles for configuring provisioning in the Azure portal.
--
-## Next Steps
+## Next steps
- [Azure AD ECMA Connector Host installation](on-premises-ecma-install.md)
- [Azure AD ECMA Connector Host configuration](on-premises-ecma-configure.md)
-- [Generic SQL Connector](on-premises-sql-connector-configure.md)
-- [Tutorial: ECMA Connector Host Generic SQL Connector](tutorial-ecma-sql-connector.md)
+- [Generic SQL connector](on-premises-sql-connector-configure.md)
+- [Tutorial - ECMA Connector Host generic SQL connector](tutorial-ecma-sql-connector.md)
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Cloud apps, actions, and authentication context are key signals in a Conditional Access policy. Conditional Access policies allow administrators to assign controls to specific applications, actions, or authentication context. - Administrators can choose from the list of applications that include built-in Microsoft applications and any [Azure AD integrated applications](../manage-apps/what-is-application-management.md) including gallery, non-gallery, and applications published through [Application Proxy](../app-proxy/what-is-application-proxy.md).-- Administrators may choose to define policy not based on a cloud application but on a [user action](#user-actions) like **Register security information** or **Register or join devices (Preview)**, allowing Conditional Access to enforce controls around those actions.
+- Administrators may choose to define policy not based on a cloud application but on a [user action](#user-actions) like **Register security information** or **Register or join devices**, allowing Conditional Access to enforce controls around those actions.
- Administrators can use [authentication context](#authentication-context-preview) to provide an extra layer of security in applications. ![Define a Conditional Access policy and specify cloud apps](./media/concept-conditional-access-cloud-apps/conditional-access-cloud-apps-or-actions.png)
User actions are tasks that can be performed by a user. Currently, Conditional A
- **Register security information**: This user action allows Conditional Access policy to enforce when users who are enabled for combined registration attempt to register their security information. More information can be found in the article, [Combined security information registration](../authentication/concept-registration-mfa-sspr-combined.md). -- **Register or join devices (preview)**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD. It provides granularity in configuring multi-factor authentication for registering or joining devices instead of a tenant-wide policy that currently exists. There are three key considerations with this user action:
+- **Register or join devices**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD. It provides granularity in configuring multi-factor authentication for registering or joining devices instead of a tenant-wide policy that currently exists. There are three key considerations with this user action:
- `Require multi-factor authentication` is the only access control available with this user action and all others are disabled. This restriction prevents conflicts with access controls that are either dependent on Azure AD device registration or not applicable to Azure AD device registration.
- - `Client apps` and `Device state` conditions aren't available with this user action since they're dependent on Azure AD device registration to enforce Conditional Access policies.
+ - `Client apps`, `Filters for devices` and `Device state` conditions aren't available with this user action since they're dependent on Azure AD device registration to enforce Conditional Access policies.
- When a Conditional Access policy is enabled with this user action, you must set **Azure Active Directory** > **Devices** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication` to **No**. Otherwise, the Conditional Access policy with this user action isn't properly enforced. More information about this device setting can found in [Configure device settings](../devices/device-management-azure-portal.md#configure-device-settings). ## Authentication context (Preview)
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-saml-claims-customization.md
Previously updated : 05/10/2021 Last updated : 07/20/2021
You can use the following functions to transform claims.
If you need additional transformations, submit your idea in the [feedback forum in Azure AD](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=160599) under the *SaaS application* category.
+## Add the UPN claim to SAML tokens
+
+The `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` claim is part of the [SAML restricted claim set](reference-claims-mapping-policy-type.md#table-2-saml-restricted-claim-set), so you cannot add it in the **User Attributes & Claims** section. As a workaround, you can add it as an [optional claim](active-directory-optional-claims.md) through **App registrations** in the Azure portal.
+
+Open the app in **App registrations**, select **Token configuration**, and then select **Add optional claim**. Select the **SAML** token type, choose **upn** from the list, and then select **Add** to add the claim to the token.
++ ## Emitting claims based on conditions You can specify the source of a claim based on user type and the group to which the user belongs.
active-directory Tutorial Blazor Webassembly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-blazor-webassembly.md
Every app that uses Azure Active Directory (Azure AD) for authentication must be
Once registered, under **Manage**, select **Authentication** > **Implicit grant and hybrid flows**. Select **Access tokens** and **ID tokens**, and then select **Save**.
+> Note: If you're using .NET 6 or later, you don't need to use implicit grant. The latest template uses MSAL Browser 2.0 and supports the authorization code flow with PKCE.
+ ## Create the app using the .NET Core CLI To create the app you need the latest Blazor templates. You can install them for the .NET Core CLI with the following command: ```dotnetcli
-dotnet new -i Microsoft.Identity.Web.ProjectTemplates::1.6.0
+dotnet new -i Microsoft.Identity.Web.ProjectTemplates::1.9.1
``` Then run the following command to create the application. Replace the placeholders in the command with the proper information from your app's overview page and execute the command in a command shell. The output location specified with the `-o|--output` option creates a project folder if it doesn't exist and becomes part of the app's name.
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Resolution:
Reason(s): - User's UPN is not in the expected format. > [!NOTE]
-> - For AADJ devices the UPN is the text entered by the user in the LoginUI.
-> - For Hybrid Joined devices the UPN is returned from the domain controller during the login process.
+> - For Azure AD joined devices, the UPN is the text entered by the user in the LoginUI.
+> - For Hybrid Azure AD joined devices, the UPN is returned from the domain controller during the login process.
Resolution: - User's UPN should be an Internet-style login name, based on the Internet standard [RFC 822](https://www.ietf.org/rfc/rfc0822.txt). Event 1144 (AAD analytic logs) will contain the UPN provided.
active-directory Users Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-bulk-download.md
To download the list of users from the Azure AD admin center, you must be signed
4. On the **Download users** page, select **Start** to receive a CSV file listing user profile properties. If there are errors, you can download and view the results file on the Bulk operation results page. The file contains the reason for each error. ![Select where you want the list the users you want to download](./media/users-bulk-download/bulk-download.png)-
- The download file will contain the filtered list of users.
+
+>[!NOTE]
+>The download file will contain the filtered list of users based on the scope of the filters applied.
The following user attributes are included:
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-applications.md
The following are links to useful resources:
* Github Azure AD toolkit - [https://github.com/microsoft/AzureADToolkit](https://github.com/microsoft/AzureADToolkit)
-* Azure Key Vault security overview and security guidance - [Azure Key Vault security overview](../../key-vault/general/security-overview.md), [Secure access to a key vault](../../key-vault/general/secure-your-key-vault.md)
+* Azure Key Vault security overview and security guidance - [Azure Key Vault security overview](../../key-vault/general/security-features.md)
* Solorgate risk information and tools - [Azure AD workbook to help you access Solorigate risk](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-workbook-to-help-you-assess-solorigate-risk/ba-p/2010718)
active-directory How To Connect Configure Ad Ds Connector Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md
Set-ADSyncPasswordHashSyncPermissions -ADConnectorAccountDN <ADAccountDN>
Make sure to replace `<ADAccountName>`, `<ADDomainName>` and `<ADAccountDN>` with the proper values for your environment.
-In case you don't want to modify permissions on the AdminSDHolder container, use the switch `-SkipAdminSdHolders`.
+If you want to modify permissions on the AdminSDHolder container, use the `-IncludeAdminSdHolders` switch. This isn't recommended.
By default, all the set permissions cmdlets will try to set AD DS permissions on the root of each Domain in the Forest, meaning that the user running the PowerShell session requires Domain Administrator rights on each domain in the Forest. Because of this requirement, it is recommended to use an Enterprise Administrator from the Forest root. If your Azure AD Connect deployment has multiple AD DS Connectors, it will be required to run the same cmdlet on each forest that has an AD DS Connector.
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
The following table lists requirements for using Azure AD Connect Health.
| TLS inspection for outbound traffic is filtered or disabled. | The agent registration step or data upload operations might fail if there's TLS inspection or termination for outbound traffic at the network layer. For more information, see [Set up TLS inspection](/previous-versions/tn-archive/ee796230(v=technet.10)). | | Firewall ports on the server are running the agent. |The agent requires the following firewall ports to be open so that it can communicate with the Azure AD Connect Health service endpoints: <br /><li>TCP port 443</li><li>TCP port 5671</li> <br />The latest version of the agent doesn't require port 5671. Upgrade to the latest version so that only port 443 is required. For more information, see [Hybrid identity required ports and protocols](./reference-connect-ports.md). | | If Internet Explorer enhanced security is enabled, allow specified websites. |If Internet Explorer enhanced security is enabled, then allow the following websites on the server where you install the agent:<br /><li>https:\//login.microsoftonline.com</li><li>https:\//secure.aadcdn.microsoftonline-p.com</li><li>https:\//login.windows.net</li><li>https:\//aadcdn.msftauth.net</li><li>The federation server for your organization that's trusted by Azure AD (for example, https:\//sts.contoso.com)</li> <br />For more information, see [How to configure Internet Explorer](https://support.microsoft.com/help/815141/internet-explorer-enhanced-security-configuration-changes-the-browsing). If you have a proxy in your network, then see the note that appears at the end of this table.|
-| PowerShell version 4.0 or newer is installed. | Windows Server 2012 includes PowerShell version 3.0. This version is *not* sufficient for the agent.</br></br> Windows Server 2012 R2 and later include a sufficiently recent version of PowerShell.|
+| PowerShell version 5.0 or newer is installed. | Windows Server 2016 and later include PowerShell version 5.0 or newer. |
|FIPS (Federal Information Processing Standard) is disabled.|Azure AD Connect Health agents don't support FIPS.| > [!IMPORTANT]
Check out the following related articles:
* [Using Azure AD Connect Health for Sync](how-to-connect-health-sync.md) * [Using Azure AD Connect Health with Azure AD DS](how-to-connect-health-adds.md) * [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
-* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
+* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-custom.md
On the **Express Settings** page, select **Customize** to start a customized-set
- [Sync](#sync-pages) ### Install required components
-When you install the synchronization services, you can leave the optional configuration section unselected. Azure AD Connect sets up everything automatically. It sets up a SQL Server 2012 Express LocalDB instance, creates the appropriate groups, and assign permissions. If you want to change the defaults, clear the appropriate boxes. The following table summarizes these options and provides links to additional information.
+When you install the synchronization services, you can leave the optional configuration section unselected. Azure AD Connect sets up everything automatically. It sets up a SQL Server 2019 Express LocalDB instance, creates the appropriate groups, and assign permissions. If you want to change the defaults, clear the appropriate boxes. The following table summarizes these options and provides links to additional information.
![Screenshot showing optional selections for the required installation components in Azure AD Connect.](./media/how-to-connect-install-custom/requiredcomponents2.png)
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
Before you install Azure AD Connect, there are a few things that you need.
### On-premises Active Directory * The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met.
-* If you plan to use the feature *password writeback*, the domain controllers must be on Windows Server 2012 or later.
+* If you plan to use the feature *password writeback*, the domain controllers must be on Windows Server 2016 or later.
* The domain controller used by Azure AD must be writable. Using a read-only domain controller (RODC) *isn't supported*, and Azure AD Connect doesn't follow any write redirects. * Using on-premises forests or domains by using "dotted" (name contains a period ".") NetBIOS names *isn't supported*. * We recommend that you [enable the Active Directory recycle bin](how-to-connect-sync-recycle-bin.md).
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites -- Azure AD Connect must be installed on a domain-joined Windows Server 2012 or later.
+- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later.
- Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server standard or better. - The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported. - The Azure AD Connect server must not have PowerShell Transcription Group Policy enabled if you use the Azure AD Connect wizard to manage Active Directory Federation Services (AD FS) configuration. You can enable PowerShell transcription if you use the Azure AD Connect wizard to manage sync configuration.
We recommend that you harden your Azure AD Connect server to decrease the securi
### SQL Server used by Azure AD Connect
-* Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2012 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors).
+* Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors).
* If you use a different installation of SQL Server, these requirements apply: * Azure AD Connect supports all versions of SQL Server from 2012 (with the latest service pack) to SQL Server 2019. Azure SQL Database *isn't supported* as a database. * You must use a case-insensitive SQL collation. These collations are identified with a \_CI_ in their name. Using a case-sensitive collation identified by \_CS_ in their name *isn't supported*.
We recommend that you harden your Azure AD Connect server to decrease the securi
* If you have firewalls on your intranet and you need to open ports between the Azure AD Connect servers and your domain controllers, see [Azure AD Connect ports](reference-connect-ports.md) for more information. * If your proxy or firewall limit which URLs can be accessed, the URLs documented in [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2) must be opened. Also see [Safelist the Azure portal URLs on your firewall or proxy server](../../azure-portal/azure-portal-safelist-urls.md?tabs=public-cloud). * If you're using the Microsoft cloud in Germany or the Microsoft Azure Government cloud, see [Azure AD Connect sync service instances considerations](reference-connect-instances.md) for URLs.
-* Azure AD Connect (version 1.1.614.0 and after) by default uses TLS 1.2 for encrypting communication between the sync engine and Azure AD. If TLS 1.2 isn't available on the underlying operating system, Azure AD Connect incrementally falls back to older protocols (TLS 1.1 and TLS 1.0).
+* Azure AD Connect (version 1.1.614.0 and after) by default uses TLS 1.2 for encrypting communication between the sync engine and Azure AD. If TLS 1.2 isn't available on the underlying operating system, Azure AD Connect incrementally falls back to older protocols (TLS 1.1 and TLS 1.0). From Azure AD Connect version 2.0 onwards, TLS 1.0 and 1.1 are no longer supported, and installation will fail if TLS 1.2 is not available.
* Prior to version 1.1.614.0, Azure AD Connect by default uses TLS 1.0 for encrypting communication between the sync engine and Azure AD. To change to TLS 1.2, follow the steps in [Enable TLS 1.2 for Azure AD Connect](#enable-tls-12-for-azure-ad-connect). * If you're using an outbound proxy for connecting to the internet, the following setting in the **C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config** file must be added for the installation wizard and Azure AD Connect sync to be able to connect to the internet and Azure AD. This text must be entered at the bottom of the file. In this code, *&lt;PROXYADDRESS&gt;* represents the actual proxy IP address or host name.
Optional: Use a test user account to verify synchronization.
## Component prerequisites ### PowerShell and .NET Framework
-Azure AD Connect depends on Microsoft PowerShell and .NET Framework 4.5.1. You need this version or a later version installed on your server. Depending on your Windows Server version, take the following actions:
-
-* Windows Server 2012 R2
- * Microsoft PowerShell is installed by default. No action is required.
- * .NET Framework 4.5.1 and later releases are offered through Windows Update. Make sure you've installed the latest updates to Windows Server in Control Panel.
-* Windows Server 2012
- * The latest version of Microsoft PowerShell is available in Windows Management Framework 4.0, available on the [Microsoft Download Center](https://www.microsoft.com/downloads).
- * .NET Framework 4.5.1 and later releases are available on the [Microsoft Download Center](https://www.microsoft.com/downloads).
-
+Azure AD Connect depends on Microsoft PowerShell 5.0 and .NET Framework 4.5.1. You need these versions or later installed on your server.
### Enable TLS 1.2 for Azure AD Connect Prior to version 1.1.614.0, Azure AD Connect by default uses TLS 1.0 for encrypting the communication between the sync engine server and Azure AD. You can configure .NET applications to use TLS 1.2 by default on the server. For more information about TLS 1.2, see [Microsoft Security Advisory 2960358](/security-updates/SecurityAdvisories/2015/2960358).
When you use Azure AD Connect to deploy AD FS or the Web Application Proxy (WAP)
Azure AD Connect installs the following components on the server where Azure AD Connect is installed. This list is for a basic Express installation. If you choose to use a different SQL Server on the **Install synchronization services** page, SQL Express LocalDB isn't installed locally. * Azure AD Connect Health
-* Microsoft SQL Server 2012 Command Line Utilities
-* Microsoft SQL Server 2012 Express LocalDB
-* Microsoft SQL Server 2012 Native Client
-* Microsoft Visual C++ 2013 Redistribution Package
+* Microsoft SQL Server 2019 Command Line Utilities
+* Microsoft SQL Server 2019 Express LocalDB
+* Microsoft SQL Server 2019 Native Client
+* Microsoft Visual C++ 14 Redistribution Package
## Hardware requirements for Azure AD Connect The following table shows the minimum requirements for the Azure AD Connect sync computer.
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
Ensure that the following prerequisites are in place.
### In your on-premises environment
-1. Identify a server running Windows Server 2012 R2 or later to run Azure AD Connect. If not enabled already, [enable TLS 1.2 on the server](./how-to-connect-install-prerequisites.md#enable-tls-12-for-azure-ad-connect). Add the server to the same Active Directory forest as the users whose passwords you need to validate. It should be noted that installation of Pass-Through Authentication agent on Windows Server Core versions is not supported.
+1. Identify a server running Windows Server 2016 or later to run Azure AD Connect. If not enabled already, [enable TLS 1.2 on the server](./how-to-connect-install-prerequisites.md#enable-tls-12-for-azure-ad-connect). Add the server to the same Active Directory forest as the users whose passwords you need to validate. It should be noted that installation of Pass-Through Authentication agent on Windows Server Core versions is not supported.
2. Install the [latest version of Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) on the server identified in the preceding step. If you already have Azure AD Connect running, ensure that the version is 1.1.750.0 or later. >[!NOTE] >Azure AD Connect versions 1.1.557.0, 1.1.558.0, 1.1.561.0, and 1.1.614.0 have a problem related to password hash synchronization. If you _don't_ intend to use password hash synchronization in conjunction with Pass-through Authentication, read the [Azure AD Connect release notes](./reference-connect-version-history.md).
-3. Identify one or more additional servers (running Windows Server 2012 R2 or later, with TLS 1.2 enabled) where you can run standalone Authentication Agents. These additional servers are needed to ensure the high availability of requests to sign in. Add the servers to the same Active Directory forest as the users whose passwords you need to validate.
+3. Identify one or more additional servers (running Windows Server 2016 or later, with TLS 1.2 enabled) where you can run standalone Authentication Agents. These additional servers are needed to ensure the high availability of requests to sign in. Add the servers to the same Active Directory forest as the users whose passwords you need to validate.
>[!IMPORTANT] >In production environments, we recommend that you have a minimum of 3 Authentication Agents running on your tenant. There is a system limit of 40 Authentication Agents per tenant. And as best practice, treat all servers running Authentication Agents as Tier 0 systems (see [reference](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material)).
active-directory How To Dirsync Upgrade Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-dirsync-upgrade-get-started.md
When you install Azure AD Connect on a new server, the assumption is that you wa
5. Select the settings file that exported from your DirSync installation. 6. Configure any advanced options including: * A custom installation location for Azure AD Connect.
- * An existing instance of SQL Server (Default: Azure AD Connect installs SQL Server 2012 Express). Do not use the same database instance as your DirSync server.
+ * An existing instance of SQL Server (Default: Azure AD Connect installs SQL Server 2019 Express). Do not use the same database instance as your DirSync server.
* A service account used to connect to SQL Server (If your SQL Server database is remote then this account must be a domain service account). These options can be seen on this screen: ![Screenshot that shows the advance configuration options for upgrading from DirSync.](./media/how-to-dirsync-upgrade-get-started/advancedsettings.png)
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
Please follow this link to read more about [auto upgrade](how-to-connect-install
> >For version history information on retired versions, see [Azure AD Connect version release history archive](reference-connect-version-history-archive.md)
+## 2.0.3.0
+>[!NOTE]
+>This is a major release of Azure AD Connect. For more details, see [Introduction to Azure AD Connect V2.0](whatis-azure-ad-connect-v2.md).
+
+### Release status
+7/20/2021: Released for download only, not available for auto upgrade
+### Functional changes
+ - We have upgraded the LocalDB components of SQL Server to SQL 2019.
+ - This release requires Windows Server 2016 or newer, due to the requirements of SQL Server 2019.
+ - In this release, we enforce the use of TLS 1.2. If you have enabled your Windows Server for TLS 1.2, AADConnect will use this protocol. If TLS 1.2 is not enabled on the server, you will see an error message when attempting to install AADConnect, and the installation will not continue until you have enabled TLS 1.2. Note that you can use the new `Set-ADSyncToolsTls12` cmdlets to enable TLS 1.2 on your server.
+ - With this release, you can use a user with the "Hybrid Identity Administrator" role to authenticate when you install Azure AD Connect. You no longer need the Global Administrator role for this.
+ - We have upgraded the Visual C++ runtime library to version 14 as a prerequisite for SQL Server 2019
+ - This release uses the MSAL library for authentication, and we have removed the older ADAL library, which will be retired in 2022.
+ - We no longer apply permissions on the AdminSDHolders, following Windows security guidance. We changed the parameter "SkipAdminSdHolders" to "IncludeAdminSdHolders" in the ADSyncConfig.psm1 module.
+ - Passwords will now be reevaluated when the password last set value is changed, regardless of whether the password itself is changed. If a user's password is set to "Must change password", this status is synced to Azure AD, and when the user attempts to sign in to Azure AD, they will be prompted to reset their password.
+ - We have added two new cmdlets to the ADSyncTools module to enable or retrieve TLS 1.2 settings from the Windows Server.
+ - Get-ADSyncToolsTls12
+ - Set-ADSyncToolsTls12
+
+You can use these cmdlets to retrieve the TLS 1.2 enablement status, or set it as needed. Note that TLS 1.2 must be enabled on the server for the installation of AADConnect to succeed.
+
+ - We have revamped ADSyncTools with several new and improved cmdlets. The [ADSyncTools article](reference-connect-adsynctools.md) has more details about these cmdlets.
+ The following cmdlets have been added or updated
+ - Clear-ADSyncToolsMsDsConsistencyGuid
+ - ConvertFrom-ADSyncToolsAadDistinguishedName
+ - ConvertFrom-ADSyncToolsImmutableID
+ - ConvertTo-ADSyncToolsAadDistinguishedName
+ - ConvertTo-ADSyncToolsCloudAnchor
+ - ConvertTo-ADSyncToolsImmutableID
+ - Export-ADSyncToolsAadDisconnectors
+ - Export-ADSyncToolsObjects
+ - Export-ADSyncToolsRunHistory
+ - Get-ADSyncToolsAadObject
+ - Get-ADSyncToolsMsDsConsistencyGuid
+ - Import-ADSyncToolsObjects
+ - Import-ADSyncToolsRunHistory
+ - Remove-ADSyncToolsAadObject
+ - Search-ADSyncToolsADobject
+ - Set-ADSyncToolsMsDsConsistencyGuid
+ - Trace-ADSyncToolsADImport
+ - Trace-ADSyncToolsLdapQuery
+- We now use the V2 endpoint for import and export, and we fixed an issue in the Get-ADSyncAADConnectorExportApiVersion cmdlet. You can read more about the V2 endpoint in the [Azure AD Connect sync V2 endpoint article](how-to-connect-sync-endpoint-api-v2.md).
+- We have added the following new user properties to sync from on-prem AD to Azure AD
+ - employeeType
+ - employeeHireDate
+- This release requires PowerShell version 5.0 or newer to be installed on the Windows Server. Note that this version is part of Windows Server 2016 and newer.
+- We increased the Group sync membership limits to 250k with the new V2 endpoint.
+- We have updated the Generic LDAP connector and the Generic SQL Connector to the latest versions. Read more about these connectors here:
+ - [Generic LDAP Connector reference documentation](https://docs.microsoft.com/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap)
+ - [Generic SQL Connector reference documentation](https://docs.microsoft.com/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericsql)
+- In the M365 Admin Center, we now report the AADConnect client version whenever there is export activity to Azure AD. This ensures that the M365 Admin Center always has the most up-to-date AADConnect client version, and that it can detect when you're using an outdated version.
+- This release provides a batch import execution script that can be called from a Windows scheduled job, so customers can automate batch import operations.
+ - Credentials are provided as an encrypted file using Windows Data Protection API (DPAPI).
+ - Credential files can be used only on the same machine and under the same user account where they were created, as illustrated in the sketch after this list.
+- The Azure AD Kerberos feature is now supported with the MSAL library. To use the Azure AD Kerberos feature, you need to register an on-premises service principal name in Azure AD. This release supports importing an on-premises service principal object into Azure AD.
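+
+A minimal sketch of the DPAPI behavior described in the batch import item above (the actual batch import script may expect a different credential file format, and the path is hypothetical): `Export-Clixml` protects the credential with DPAPI, so the file can be decrypted only by the same user on the same machine.
+
+```powershell
+# Minimal sketch: create and reuse a DPAPI-protected credential file.
+# The file can be decrypted only by the same Windows user on the same machine.
+$credentialPath = 'C:\Scripts\batch-import-credential.xml'   # hypothetical path
+
+# Capture the credential once and store it encrypted with DPAPI.
+Get-Credential | Export-Clixml -Path $credentialPath
+
+# Later (for example, from a scheduled task running as the same user), load it back.
+$credential = Import-Clixml -Path $credentialPath
+```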
+### Bug fixes
+- We fixed an accessibility bug where the screen reader announces an incorrect role for the 'Learn More' link.
+- We fixed a bug where sync rules with large precedence values (for example, 387163089) caused upgrade to fail. We updated the stored procedure 'mms_UpdateSyncRulePrecedence' to cast the precedence number as an integer prior to incrementing the value.
+- Fixed a bug where group writeback permissions are not set on the sync account if a group writeback configuration is imported. We now set the group writeback permissions if group writeback is enabled on the imported configuration.
+- We updated the Azure AD Connect Health agent version to 3.1.110.0 to fix an installation failure.
+- We are seeing an issue with non-default attributes from exported configurations where directory extension attributes are configured. When importing these configurations to a new server/installation, the attribute inclusion list is overridden by the directory extension configuration step, so after import only default and directory extension attributes are selected in the sync service manager (non-default attributes are not included in the installation, so the user must manually reenable them from the sync service manager if they want their imported sync rules to work). We now refresh the AAD Connector before configuring directory extension to keep existing attributes from the attribute inclusion list.
+- We fixed an accessibility issue where the page header's font weight is set as "Light". Font weight is now set to "Bold" for the page title, which applies to the header of all pages.
+- The function Get-AdObject in ADSyncSingleObjectSync.ps1 has been renamed to Get-AdDirectoryObject to prevent ambiguity with the AD cmdlet.
+- The SQL function 'mms_CheckSynchronizationRuleHasUniquePrecedence' allow duplicates precedence on outbound sync rules on different connectors. We removed the condition that allows duplicate rule precedence.
+- We fixed a bug where the Single Object Sync cmdlet fails if the attribute flow data is null, for example, when exporting a delete operation.
+- We fixed a bug where the installation fails because the ADSync bootstrap service cannot be started. We now add Sync Service Account to the Local Builtin User Group before starting the bootstrap service.
+- We fixed an accessibility issue where the active tab in the AAD Connect wizard didn't show the correct color in the High Contrast theme. The selected color code was being overwritten due to a missing condition in the normal color code configuration.
+- We addressed an issue where users were allowed to deselect objects and attributes used in sync rules using the UI and PowerShell. We now show friendly error message if you try to deselect any attribute or object that is used in any sync rules.
+- We made some updates to the "migrate settings" code to check for and fix a backward compatibility issue when the script is run on an older version of Azure AD Connect.
+- Fixed a bug where, when PHS tries to look up an incomplete object, it does not use the same algorithm to resolve the DC as it used originally to fetch the passwords. In particular, it is ignoring affinitized DC information. The Incomplete object lookup should use the same logic to locate the DC in both instances.
+- We fixed a bug where AADConnect cannot read Application Proxy items using Microsoft Graph due to a permissions issue with calling Microsoft Graph directly based on AAD Connect client id. To fix this, we removed the dependency on Microsoft Graph and instead use AAD PowerShell to work with the App Proxy Application objects.
+- We removed the writeback member limit from the 'Out to AD - Group SOAInAAD Exchange' sync rule.
+- We fixed a bug where, when changing connector account permissions, if an object comes in scope that has not changed since the last delta import, a delta import will not import it. We now display a warning alerting the user to the issue.
+- We fixed an accessibility issue where the screen reader wasn't reading the radio button position, for example, 1 of 2. We added positional text to the radio button accessibility text field.
+- We updated the Pass-Thru Authentication Agent bundle. The older bundle did not have correct reply URL for HIP's first party application in US Gov.
+- We fixed a bug where there is a 'stopped-extension-dll-exception' on AAD connector export after clean installing AADConnect version 1.6.X.X (which defaults to using the DirSyncWebServices API V2) using an existing database. Previously, setting the export version to V2 was only done for upgrades; we changed this so that it is also set on clean installs.
+- The `ADSyncPrep.psm1` module is no longer used and is removed from the installation.
+
+### Known issues
+- The AADConnect wizard shows the "Import Synchronization Settings" option as "Preview", while this feature is generally available.
+- Some Active Directory connectors may be installed in a different order when using the output of the migrate settings script to install the product.
+- The User Sign In options page in the Azure AD Connect wizard mentions "Company Administrator". This term is no longer used and needs to be replaced by "Global Administrator".
+- The "Export settings" option is broken when the Sign In option has been configured to use PingFederate.
+- While Azure AD Connect can now be deployed by using the Hybrid Identity Administrator role, configuring Self Service Password Reset will still require a user with the Global Administrator role.
+- When importing the AADConnect configuration while deploying to connect with a different tenant than the original AADConnect configuration, directory extension attributes are not configured correctly.
+ ## 1.6.4.0 >[!NOTE]
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
+
+ Title: 'What is Azure AD Connect v2.0? | Microsoft Docs'
+description: Learn about the next version of Azure AD Connect.
++++++ Last updated : 06/24/2021+++++
+# Introduction to Azure AD Connect V2.0
+
+Azure AD Connect was released several years ago. Since this time, several of the components that Azure AD Connect uses have been scheduled for deprecation and updated to newer versions. To attempt to update all of these components individually would take time and planning.
+
+To address this, we wanted to bundle as many of these newer components as possible into a new, single release, so you only have to update once. This release will be Azure AD Connect V2.0. This is a new version of the same software used to accomplish your hybrid identity goals, built using the latest foundational components.
+
+## What are the major changes?
+
+### SQL Server 2019 LocalDB
+
+The previous versions of Azure AD Connect shipped with a SQL Server 2012 LocalDB. V2.0 ships with a SQL Server 2019 LocalDB, which promises enhanced stability and performance and has several security-related bug fixes. SQL Server 2012 will go out of extended support in July 2022. For more information see [Microsoft SQL 2019](https://www.microsoft.com/sql-server/sql-server-2019).
+
+### MSAL authentication library
+
+The previous versions of Azure AD Connect shipped with the ADAL authentication library. This library will be deprecated in June 2022. The V2.0 release ships with the newer MSAL library. For more information see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
+
+### Visual C++ Redist 14
+
+SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we are updating the C++ runtime library to use this version. This will be installed with the Azure AD Connect V2.0 package, so you do not have to take any action for the C++ runtime update.
+
+### TLS 1.2
+
+TLS 1.0 and TLS 1.1 are protocols that are deemed unsafe and are being deprecated by Microsoft. This release of Azure AD Connect supports only TLS 1.2. If your server does not support TLS 1.2, you will need to enable it before you can deploy Azure AD Connect V2.0. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
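+
+The following registry changes are a minimal sketch of how TLS 1.2 is commonly enabled for .NET Framework applications and the Windows Schannel client, based on general Microsoft TLS guidance rather than this article; prefer the linked enforcement article or the new `Set-ADSyncToolsTls12` cmdlet where available, and restart the server after making the changes.
+
+```powershell
+# Minimal sketch: enable TLS 1.2 for .NET Framework applications and the Schannel client.
+# Run in an elevated PowerShell session and restart the server afterwards.
+
+# Tell .NET Framework (64-bit and 32-bit) to use strong crypto / OS default TLS versions.
+foreach ($path in 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
+                  'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319') {
+    New-ItemProperty -Path $path -Name 'SchUseStrongCrypto'       -Value 1 -PropertyType DWord -Force | Out-Null
+    New-ItemProperty -Path $path -Name 'SystemDefaultTlsVersions' -Value 1 -PropertyType DWord -Force | Out-Null
+}
+
+# Enable the TLS 1.2 client protocol in Schannel.
+$tls12Client = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client'
+New-Item -Path $tls12Client -Force | Out-Null
+New-ItemProperty -Path $tls12Client -Name 'Enabled'           -Value 1 -PropertyType DWord -Force | Out-Null
+New-ItemProperty -Path $tls12Client -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
+```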
+
+### All binaries signed with SHA2
+
+We noticed that some components had SHA1 signed binaries. We no longer support SHA1 for downloadable binaries, and we upgraded all binaries to SHA2 signing. The digital signatures are used to ensure that the updates come directly from Microsoft and were not tampered with during delivery. Because of weaknesses in the SHA-1 algorithm and to align to industry standards, we have changed the signing of Windows updates to use the more secure SHA-2 algorithm.
+
+There is no action needed from your side.
+
+### Windows Server 2012 and Windows Server 2012 R2 are no longer supported
+
+SQL Server 2019 requires Windows Server 2016 or newer as a server operating system. Since AAD Connect V2 contains SQL Server 2019 components, we can no longer support older Windows Server versions.
+
+You cannot install this version on an older Windows Server version. We suggest you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
+
+This [article](https://docs.microsoft.com/windows-server/get-started-19/install-upgrade-migrate-19) describes the upgrade from older Windows Server versions to Windows Server 2019.
+
+### PowerShell 5.0
+
+This release of Azure AD Connect contains several cmdlets that require PowerShell 5.0, so this requirement is a new prerequisite for Azure AD Connect.
+
+More details about PowerShell prerequisites can be found in [Windows PowerShell system requirements](https://docs.microsoft.com/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements?view=powershell-7.1#windows-powershell-50).
+
+ >[!NOTE]
+ >PowerShell 5 is already part of Windows Server 2016, so you probably do not have to take action as long as you are on a recent Windows Server version.
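+
+A quick way to confirm the installed Windows PowerShell version on a candidate server (a minimal sketch):
+
+```powershell
+# The Major value should be 5 or higher for Azure AD Connect V2.0.
+$PSVersionTable.PSVersion
+```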
+
+## What else do I need to know?
++
+**Why is this upgrade important for me?** </br>
+Next year, several of the components in your current Azure AD Connect server installations will go out of support. If you are using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. So we recommend that all customers upgrade to this newer version as soon as they can.
+
+This upgrade is especially important because we have had to update our prerequisites for Azure AD Connect, and you may need additional time to plan and update your servers to the newer versions of these prerequisites.
+
+**Is there any new functionality I need to know about?** </br>
+No. This release does not contain any new functionality; it only contains updates of some of the foundational components of Azure AD Connect.
+
+**Can I upgrade from any previous version to V2.0?** </br>
+Yes. Upgrades from any previous version of Azure AD Connect to Azure AD Connect V2.0 are supported. Follow the guidance in this article to determine the best upgrade strategy for you.
+
+**Can I export the configuration of my current server and import it in Azure AD Connect V2.0?** </br>
+Yes, you can do that, and it is a great way to migrate to Azure AD Connect V2.0, especially if you are also upgrading to a new operating system version. You can read more about the import/export configuration feature and how to use it in this [article](how-to-connect-import-export-config.md).
+
+**I have enabled auto upgrade for Azure AD Connect. Will I get this new version automatically?** </br>
+No. Azure AD Connect V2.0 will not be made available for auto upgrade at this time.
+
+**I am not ready to upgrade yet. How much time do I have?** </br>
+You should upgrade to Azure AD Connect V2.0 as soon as you can. For the time being, we will continue to support older versions of Azure AD Connect, but it may prove difficult to provide a good support experience if some of the components in Azure AD Connect have dropped out of support. This upgrade is particularly important for ADAL and TLS 1.0/1.1, as these components might stop working unexpectedly after they are deprecated.
+
+**I use an external SQL database and do not use SQL 2012 LocalDb. Do I still have to upgrade?** </br>
+Yes, you still need to upgrade to remain in a supported state even if you do not use SQL Server 2012, due to the TLS 1.0/1.1 and ADAL deprecation.
+
+**What happens if I do not upgrade?** </br>
+Until one of the components being retired is actually deprecated, you will not see any impact. Azure AD Connect will keep on working.
+
+We expect TLS 1.0/1.1 to be deprecated in January 2022, and you need to make sure you are not using these protocols by that date, as your service may stop working unexpectedly. You can manually configure your server for TLS 1.2, though, and that does not require an upgrade of Azure AD Connect to V2.0.
+
+In June 2022, ADAL will go out of support. When ADAL goes out of support, authentication may stop working unexpectedly, and this will block the Azure AD Connect server from working properly. We strongly advise you to upgrade to Azure AD Connect V2.0 before June 2022. You cannot upgrade to a supported authentication library with your current Azure AD Connect version.
+
+**The ADSync PowerShell cmdlets do not work after upgrading to V2.0. What should I do?** </br>
+This is a known issue. To resolve it, restart your PowerShell session after installing or upgrading to version 2.0, and then re-import the module. Use the following instructions to import the module.
+
+ 1. Open Windows PowerShell with administrative privileges
+ 2. Type or copy and paste the following:
+ ``` powershell
+    Import-Module -Name "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync"
+ ```
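+ 3. Optionally, confirm that the module loaded by listing a few of its cmdlets (`Get-Command` is standard PowerShell; nothing else is required):
+    ``` powershell
+    # Quick check that the ADSync cmdlets are now available in this session.
+    Get-Command -Module ADSync | Select-Object -First 10
+    ```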
+
+
+## License requirements for using Azure AD Connect V2.0
++
+## License requirements for using Azure AD Connect Health
+
+## Next steps
+
+- [Hardware and prerequisites](how-to-connect-install-prerequisites.md)
+- [Express settings](how-to-connect-install-express.md)
+- [Customized settings](how-to-connect-install-custom.md)
+
active-directory Application Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-types.md
When filtered to **All Applications**, the **All Applications** **List** shows e
- **Application Proxy Applications** - An application running in your on-premises environment that you want to provide secure single sign-on to externally
- When signing up for, or signing in to, a third-party application integrated with Azure Active Directory. One example is [Smartsheet](https://app.smartsheet.com/b/home) or [DocuSign](https://www.docusign.net/member/MemberLogin.aspx).
- Microsoft apps such as Microsoft 365.
+- When you use managed identities for Azure resources. For more information, see [Managed identity types](../managed-identities-azure-resources/overview.md#managed-identity-types).
- When you add a new application registration by creating a custom-developed application using the [Application Registry](../develop/quickstart-register-app.md)
- When you add a new application registration by creating a custom-developed application using the [V2.0 Application Registration portal](../develop/quickstart-register-app.md)
- When you add an application you're developing using Visual Studio's [ASP.NET Authentication Methods](https://www.asp.net/visual-studio/overview/2013/creating-web-projects-in-visual-studio#orgauthoptions) or [Connected Services](https://devblogs.microsoft.com/visualstudio/connecting-to-cloud-services/)
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
zone_pivot_groups: identity-mi-methods
# Manage user-assigned managed identities + Managed identities for Azure resources eliminate the need to manage credentials in code. You can use them to get an Azure Active Directory (Azure AD) token your applications can use when you access resources that support Azure AD authentication. Azure manages the identity so you don't have to. There are two types of managed identities: system-assigned and user-assigned. The main difference between them is that system-assigned managed identities have their lifecycle linked to the resource where they're used. User-assigned managed identities can be used on multiple resources. To learn more about managed identities, see [What are managed identities for Azure resources?](overview.md). ::: zone pivot="identity-mi-methods-azp"- In this article, you learn how to create, list, delete, or assign a role to a user-assigned managed identity by using the Azure portal. ## Prerequisites -- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). *Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types)*.
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types).
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. + ## Create a user-assigned managed identity To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
To create a user-assigned managed identity, your account needs the [Managed Iden
- **Region**: Choose a region to deploy the user-assigned managed identity, for example, **West US**. - **Name**: Enter the name for your user-assigned managed identity, for example, UAI1.
- ![Screenshot that shows the Create User Assigned Managed Identity pane.](media/how-to-manage-ua-identity-portal/create-user-assigned-managed-identity-portal.png)
+
+ ![Screenshot that shows the Create User Assigned Managed Identity pane.](media/how-to-manage-ua-identity-portal/create-user-assigned-managed-identity-portal.png)
1. Select **Review + create** to review the changes. 1. Select **Create**.
To assign a role to a user-assigned managed identity, your account needs the [Us
::: zone-end ++ ::: zone pivot="identity-mi-methods-azcli" In this article, you learn how to create, list, delete, or assign a role to a user-assigned managed identity by using the Azure CLI. ## Prerequisites
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). *Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types)*.
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
++ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../../includes/azure-cli-prepare-your-environment-no-header.md)] > [!IMPORTANT]
For information on how to assign a user-assigned managed identity to an Azure VM
::: zone pivot="identity-mi-methods-powershell"
-In this article, you learn how to create, list, and delete a user-assigned managed identity by using PowerShell.
+In this article, you learn how to create, list, delete, or assign a role to a user-assigned managed identity by using PowerShell.
## Prerequisites
In this article, you learn how to create, list, and delete a user-assigned manag
- Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks. - Run scripts locally with Azure PowerShell, as described in the next section.
+In this article, you learn how to create, list, and delete a user-assigned managed identity by using PowerShell.
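+
+As a quick orientation, the sketch below shows what these operations look like with the `Az.ManagedServiceIdentity` cmdlets; the resource group, identity name, and location values are placeholders, and the rest of this section walks through the details.
+
+```powershell
+# Sketch: basic lifecycle of a user-assigned managed identity with Az PowerShell.
+# Requires the Az.ManagedServiceIdentity module and an authenticated session (Connect-AzAccount).
+New-AzUserAssignedIdentity -ResourceGroupName 'myResourceGroup' -Name 'UAI1' -Location 'westus'
+
+# List the user-assigned managed identities in the resource group.
+Get-AzUserAssignedIdentity -ResourceGroupName 'myResourceGroup'
+
+# Delete the identity when it is no longer needed.
+Remove-AzUserAssignedIdentity -ResourceGroupName 'myResourceGroup' -Name 'UAI1'
+```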
+ ### Configure Azure PowerShell locally To use Azure PowerShell locally for this article instead of using Cloud Shell:
For a full list and more details of the Azure PowerShell managed identities for
In this article, you create a user-assigned managed identity by using Azure Resource Manager.
-You can't list and delete a user-assigned managed identity by using a Resource Manager template. See the following articles to create and list a user-assigned managed identity:
--- [List user-assigned managed identity](how-to-manage-ua-identity-cli.md#list-user-assigned-managed-identities)-- [Delete user-assigned managed identity](how-to-manage-ua-identity-cli.md#delete-a-user-assigned-managed-identity)- ## Prerequisites - If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). *Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types)*. - If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
+You can't list and delete a user-assigned managed identity by using a Resource Manager template. See the following articles to create and list a user-assigned managed identity:
+
+- [List user-assigned managed identity](how-to-manage-ua-identity-cli.md#list-user-assigned-managed-identities)
+- [Delete user-assigned managed identity](how-to-manage-ua-identity-cli.md#delete-a-user-assigned-managed-identity)
+ ## Template creation and editing As with the Azure portal and scripting, Resource Manager templates provide the ability to deploy new or modified resources defined by an Azure resource group. Several options are available for template editing and deployment, both local and portal-based. You can:
For information on how to assign a user-assigned managed identity to an Azure VM
::: zone pivot="identity-mi-methods-rest"
-In this article, you learn how to create, list, and delete a user-assigned managed identity by using CURL to make REST API calls.
+In this article, you learn how to create, list, and delete a user-assigned managed identity by using REST.
+ ## Prerequisites
In this article, you learn how to create, list, and delete a user-assigned manag
- To run in the cloud, use [Azure Cloud Shell](../../cloud-shell/overview.md). - To run locally, install [curl](https://curl.haxx.se/download.html) and the [Azure CLI](/cli/azure/install-azure-cli). +
+In this article, you learn how to create, list, and delete a user-assigned managed identity by using CURL to make REST API calls.
+ ## Obtain a bearer access token 1. If you're running locally, sign in to Azure through the Azure CLI.
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
Previously updated : 06/30/2021 Last updated : 07/20/2021 # Tutorial: Integrate Atlassian Cloud with Azure Active Directory
Follow these steps to enable Azure AD SSO in the Azure portal.
![Authentication policies](./media/atlassian-cloud-tutorial/policy.png)
+ > [!NOTE]
+ > Admins can test the SAML configuration by first enabling enforced SSO for only a subset of users on a separate authentication policy, and then enabling the policy for all users if there are no issues.
+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
active-directory Brushup Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/brushup-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Brushup | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Brushup.
++++++++ Last updated : 07/19/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Brushup
+
+In this tutorial, you'll learn how to integrate Brushup with Azure Active Directory (Azure AD). When you integrate Brushup with Azure AD, you can:
+
+* Control in Azure AD who has access to Brushup.
+* Enable your users to be automatically signed-in to Brushup with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Brushup single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Brushup supports **SP and IDP** initiated SSO.
+
+## Add Brushup from the gallery
+
+To configure the integration of Brushup into Azure AD, you need to add Brushup from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Brushup** in the search box.
+1. Select **Brushup** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Brushup
+
+Configure and test Azure AD SSO with Brushup using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Brushup.
+
+To configure and test Azure AD SSO with Brushup, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Brushup SSO](#configure-brushup-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Brushup test user](#create-brushup-test-user)** - to have a counterpart of B.Simon in Brushup that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Brushup** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<COMPANY_CODE>.brushup.net/`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<COMPANY_CODE>.brushup.net/accounts/sso?acs`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<COMPANY_CODE>.brushup.net/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Brushup Client support team](mailto:support@brushup.net) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificate-base64-download.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
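+
+If you prefer to script this step, the sketch below creates an equivalent test user with the Microsoft Graph PowerShell module. The scope, domain, and password value are assumptions for illustration; replace them to match your tenant.
+
+```powershell
+# Sketch: create the B.Simon test user with Microsoft Graph PowerShell (requires User.ReadWrite.All).
+Connect-MgGraph -Scopes 'User.ReadWrite.All'
+
+$passwordProfile = @{
+    ForceChangePasswordNextSignIn = $true
+    Password                      = '<placeholder-password>'   # replace with a strong password
+}
+
+New-MgUser -DisplayName 'B.Simon' `
+    -UserPrincipalName 'B.Simon@contoso.com' `
+    -MailNickname 'BSimon' `
+    -AccountEnabled `
+    -PasswordProfile $passwordProfile
+```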
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Brushup.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Brushup**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Brushup SSO
+
+To configure single sign-on on **Brushup** side, you need to send the **Certificate (PEM)** to [Brushup support team](mailto:support@brushup.net). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Brushup test user
+
+In this section, you create a user called Britta Simon in Brushup. Work with [Brushup support team](mailto:support@brushup.net) to add the users in the Brushup platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Brushup Sign on URL where you can initiate the login flow.
+
+* Go to Brushup Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Brushup for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Brushup tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Brushup for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Brushup, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Cisco Spark Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cisco-spark-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://web.ciscospark.com/idb/Consumer/metaAlias/<ID>/sp` > [!NOTE]
- > This value is not real. Copy the lateral Reply URL value and add this value to the [Sign on URL](https://web.ciscospark.com/) to formulate the actual Sign on URL value. You can also refer the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > This value is not real. Copy the Reply URL value above and append it to `https://web.ciscospark.com/` to formulate the actual Sign on URL value.
1. Cisco Webex application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
active-directory Directprint Io Cloud Print Administration Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/directprint-io-cloud-print-administration-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with directprint.io Cloud Print Administration | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and directprint.io Cloud Print Administration.
++++++++ Last updated : 07/19/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with directprint.io Cloud Print Administration
+
+In this tutorial, you'll learn how to integrate directprint.io Cloud Print Administration with Azure Active Directory (Azure AD). When you integrate directprint.io Cloud Print Administration with Azure AD, you can:
+
+* Control in Azure AD who has access to directprint.io Cloud Print Administration.
+* Enable your users to be automatically signed-in to directprint.io Cloud Print Administration with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* directprint.io Cloud Print Administration single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* directprint.io Cloud Print Administration supports **IDP** initiated SSO.
+
+## Add directprint.io Cloud Print Administration from the gallery
+
+To configure the integration of directprint.io Cloud Print Administration into Azure AD, you need to add directprint.io Cloud Print Administration from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **directprint.io Cloud Print Administration** in the search box.
+1. Select **directprint.io Cloud Print Administration** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for directprint.io Cloud Print Administration
+
+Configure and test Azure AD SSO with directprint.io Cloud Print Administration using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in directprint.io Cloud Print Administration.
+
+To configure and test Azure AD SSO with directprint.io Cloud Print Administration, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure directprint.io Cloud Print Administration SSO](#configure-directprintio-cloud-print-administration-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create directprint.io Cloud Print Administration test user](#create-directprintio-cloud-print-administration-test-user)** - to have a counterpart of B.Simon in directprint.io Cloud Print Administration that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **directprint.io Cloud Print Administration** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, the application is pre-configured in **IDP** initiated mode and the necessary URLs are already pre-populated with Azure. Save the configuration by clicking the **Save** button.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up directprint.io Cloud Print Administration** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to directprint.io Cloud Print Administration.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **directprint.io Cloud Print Administration**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure directprint.io Cloud Print Administration SSO
+
+To configure single sign-on on **directprint.io Cloud Print Administration** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [directprint.io Cloud Print Administration support team](mailto:support@directprint.io). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create directprint.io Cloud Print Administration test user
+
+In this section, you create a user called Britta Simon in directprint.io Cloud Print Administration. Work with [directprint.io Cloud Print Administration support team](mailto:support@directprint.io) to add the users in the directprint.io Cloud Print Administration platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the directprint.io Cloud Print Administration for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the directprint.io Cloud Print Administration tile in the My Apps, you should be automatically signed in to the directprint.io Cloud Print Administration for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure directprint.io Cloud Print Administration, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Draup Inc Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/draup-inc-tutorial.md
Previously updated : 05/28/2021 Last updated : 07/16/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, perform the following steps:
+1. In the **Basic SAML Configuration** section, you do not need to perform any steps because the app is already pre-integrated with Azure.
- a. In the **Identifier** box, type a URL using one of the following patterns:
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- | Identifier URL |
- ||
- |`https://<SUBDOMAIN>.draup.technology/<INSTANCE_NAME>`|
- |`https://<SUBDOMAIN>.draup.com/<INSTANCE_NAME>`|
- |
-
- b. In the **Reply URL** text box, type a URL using one of the following patterns:
-
- | Reply URL |
- ||
- |`https://<SUBDOMAIN>.draup.technology/<INSTANCE_NAME>`|
- |`https://<SUBDOMAIN>.draup.com/<INSTANCE_NAME>`|
- |
-
- c. In the **Sign-on URL** text box, type a URL using one of the following patterns:
-
- | Sign-on URL |
- ||
- |`https://<SUBDOMAIN>.draup.technology/<INSTANCE_NAME>`|
- |`https://<SUBDOMAIN>.draup.com/<INSTANCE_NAME>`|
- |
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier,Reply URL and Sign-On URL. Contact [Draup, Inc Client support team](mailto:support@draup.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+    In the **Sign-on URL** text box, type the URL:
+ `https://platform.draup.com/saml2/login/`
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
active-directory Jfrog Artifactory Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jfrog-artifactory-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL using the following pattern:
- `<servername>.jfrog.io`
+ a. In the **Identifier** text box, enter a URL that reflects the Artifactory URL.
b. In the **Reply URL** text box, type a URL using the following pattern:
- - For Artifactory 6.x: `https://<servername>.jfrog.io/artifactory/webapp/saml/loginResponse`
- - For Artifactory 7.x: `https://<servername>.jfrog.io/<servername>/webapp/saml/loginResponse`
+ - For Artifactory Self-hosted: `https://<servername>.jfrog.io/artifactory/webapp/saml/loginResponse`
+ - For Artifactory SaaS: `https://<servername>.jfrog.io/<servername>/webapp/saml/loginResponse`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type a URL using the following pattern:
- - For Artifactory 6.x: `https://<servername>.jfrog.io/<servername>/webapp/`
- - For Artifactory 7.x: `https://<servername>.jfrog.io/ui/login`
+ - For Artifactory Self-hosted: `https://<servername>.jfrog.io/<servername>/webapp/`
+ - For Artifactory SaaS: `https://<servername>.jfrog.io/ui/login`
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [JFrog Artifactory Client support team](https://support.jfrog.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. JFrog Artifactory application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the User Attributes dialog.
+1. JFrog Artifactory application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the User Attributes & Claims dialog.
![Screenshot shows User Attributes with the edit control called out.](common/edit-attribute.png)
-1. In addition to the above, JFrog Artifactory expects a number of additional attributes to be passed back in the SAML response. In the **User Attributes & Claims** section on the **Group Claims (Preview)** dialog, perform the following steps:
+1. In addition to the above, JFrog Artifactory expects a number of additional attributes to be passed back in the SAML response. In the **User Attributes & Claims** section click **Add a group claim** and perform the following steps:
- a. Click the **pen** next to **Groups returned in claim**.
+ a. Click **Open** next to **Groups returned in claim**.
![Screenshot shows User Attributes & Claims with the Edit icon selected.](./media/jfrog-artifactory-tutorial/configuration-4.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
6. Configure the Artifactory (SAML Service Provider Name) with the 'Identifier' field (see step 4). In the **Set up JFrog Artifactory** section, copy the appropriate URL(s) based on your requirement.
- - For Artifactory 6.x: `https://<servername>.jfrog.io/artifactory/webapp/saml/loginResponse`
- - For Artifactory 7.x: `https://<servername>.jfrog.io/<servername>/webapp/saml/loginResponse`
+ - For Artifactory Self-hosted: `https://<servername>.jfrog.io/artifactory/webapp/saml/loginResponse`
+ - For Artifactory SaaS: `https://<servername>.jfrog.io/<servername>/webapp/saml/loginResponse`
![Copy configuration URLs](common/copy-configuration-urls.png)
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure JFrog Artifactory you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure JFrog Artifactory, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Kronos Workforce Dimensions Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/kronos-workforce-dimensions-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Kronos Workforce Dimensions | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Kronos Workforce Dimensions.
++++++++ Last updated : 07/19/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Kronos Workforce Dimensions
+
+In this tutorial, you'll learn how to integrate Kronos Workforce Dimensions with Azure Active Directory (Azure AD). When you integrate Kronos Workforce Dimensions with Azure AD, you can:
+
+* Control in Azure AD who has access to Kronos Workforce Dimensions.
+* Enable your users to be automatically signed-in to Kronos Workforce Dimensions with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Kronos Workforce Dimensions single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Kronos Workforce Dimensions supports **SP** initiated SSO.
+
+## Add Kronos Workforce Dimensions from the gallery
+
+To configure the integration of Kronos Workforce Dimensions into Azure AD, you need to add Kronos Workforce Dimensions from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Kronos Workforce Dimensions** in the search box.
+1. Select **Kronos Workforce Dimensions** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Kronos Workforce Dimensions
+
+Configure and test Azure AD SSO with Kronos Workforce Dimensions using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Kronos Workforce Dimensions.
+
+To configure and test Azure AD SSO with Kronos Workforce Dimensions, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Kronos Workforce Dimensions SSO](#configure-kronos-workforce-dimensions-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Kronos Workforce Dimensions test user](#create-kronos-workforce-dimensions-test-user)** - to have a counterpart of B.Simon in Kronos Workforce Dimensions that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Kronos Workforce Dimensions** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+    `https://<SUBDOMAIN>.<ENVIRONMENT>.mykronos.com/authn/<TENANT_ID>/hsp/<TENANT_NUMBER>`
+
+ b. In the **Sign on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign on URL** |
+ |--|
+ | `https://<CUSTOMER>-<ENVIRONMENT>-sso.<ENVIRONMENT>.mykronos.com/` |
+ | `https://<CUSTOMER>-sso.<ENVIRONMENT>.mykronos.com/` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Kronos Workforce Dimensions Client support team](mailto:support@kronos.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Kronos Workforce Dimensions.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Kronos Workforce Dimensions**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Kronos Workforce Dimensions SSO
+
+To configure single sign-on on **Kronos Workforce Dimensions** side, you need to send the **App Federation Metadata Url** to [Kronos Workforce Dimensions support team](mailto:support@kronos.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Kronos Workforce Dimensions test user
+
+In this section, you create a user called Britta Simon in Kronos Workforce Dimensions. Work with [Kronos Workforce Dimensions support team](mailto:support@kronos.com) to add the users in the Kronos Workforce Dimensions platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Kronos Workforce Dimensions Sign-on URL where you can initiate the login flow.
+
+* Go to Kronos Workforce Dimensions Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Kronos Workforce Dimensions tile in the My Apps, this will redirect to Kronos Workforce Dimensions Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Kronos Workforce Dimensions, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Proprofs Classroom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/proprofs-classroom-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with ProProfs Classroom | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and ProProfs Classroom.
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with ProProfs Training Maker | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and ProProfs Training Maker.
Previously updated : 06/17/2021 Last updated : 07/16/2021
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with ProProfs Classroom
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with ProProfs Training Maker
-In this tutorial, you'll learn how to integrate ProProfs Classroom with Azure Active Directory (Azure AD). When you integrate ProProfs Classroom with Azure AD, you can:
+In this tutorial, you'll learn how to integrate ProProfs Training Maker with Azure Active Directory (Azure AD). When you integrate ProProfs Training Maker with Azure AD, you can:
-* Control in Azure AD who has access to ProProfs Classroom.
-* Enable your users to be automatically signed-in to ProProfs Classroom with their Azure AD accounts.
+* Control in Azure AD who has access to ProProfs Training Maker.
+* Enable your users to be automatically signed-in to ProProfs Training Maker with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate ProProfs Classroom with Azure Ac
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* ProProfs Classroom single sign-on (SSO) enabled subscription.
+* ProProfs Training Maker single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* ProProfs Classroom supports **IDP** initiated SSO.
+* ProProfs Training Maker supports **IDP** initiated SSO.
-## Add ProProfs Classroom from the gallery
+## Add ProProfs Training Maker from the gallery
-To configure the integration of ProProfs Classroom into Azure AD, you need to add ProProfs Classroom from the gallery to your list of managed SaaS apps.
+To configure the integration of ProProfs Training Maker into Azure AD, you need to add ProProfs Training Maker from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **ProProfs Classroom** in the search box.
-1. Select **ProProfs Classroom** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **ProProfs Training Maker** in the search box.
+1. Select **ProProfs Training Maker** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for ProProfs Classroom
+## Configure and test Azure AD SSO for ProProfs Training Maker
-Configure and test Azure AD SSO with ProProfs Classroom using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ProProfs Classroom.
+Configure and test Azure AD SSO with ProProfs Training Maker using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ProProfs Training Maker.
-To configure and test Azure AD SSO with ProProfs Classroom, perform the following steps:
+To configure and test Azure AD SSO with ProProfs Training Maker, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure ProProfs Classroom SSO](#configure-proprofs-classroom-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create ProProfs Classroom test user](#create-proprofs-classroom-test-user)** - to have a counterpart of B.Simon in ProProfs Classroom that is linked to the Azure AD representation of user.
+1. **[Configure ProProfs Training Maker SSO](#configure-proprofs-training-maker-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ProProfs Training Maker test user](#create-proprofs-training-maker-test-user)** - to have a counterpart of B.Simon in ProProfs Training Maker that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **ProProfs Classroom** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **ProProfs Training Maker** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up ProProfs Classroom** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up ProProfs Training Maker** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ProProfs Classroom.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ProProfs Training Maker.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **ProProfs Classroom**.
+1. In the applications list, select **ProProfs Training Maker**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure ProProfs Classroom SSO
+## Configure ProProfs Training Maker SSO
-To configure single sign-on on **ProProfs Classroom** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [ProProfs Classroom support team](mailto:support@proprofs.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **ProProfs Training Maker** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [ProProfs Training Maker support team](mailto:support@proprofs.com). They set this setting to have the SAML SSO connection set properly on both sides.
-### Create ProProfs Classroom test user
+### Create ProProfs Training Maker test user
-In this section, you create a user called Britta Simon in ProProfs Classroom. Work with [ProProfs Classroom support team](mailto:support@proprofs.com) to add the users in the ProProfs Classroom platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in ProProfs Training Maker. Work with [ProProfs Training Maker support team](mailto:support@proprofs.com) to add the users in the ProProfs Training Maker platform. Users must be created and activated before you use single sign-on.
## Test SSO In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on Test this application in Azure portal and you should be automatically signed in to the ProProfs Classroom for which you set up the SSO.
+* Click on Test this application in Azure portal and you should be automatically signed in to the ProProfs Training Maker for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the ProProfs Classroom tile in the My Apps, you should be automatically signed in to the ProProfs Classroom for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the ProProfs Training Maker tile in the My Apps, you should be automatically signed in to the ProProfs Training Maker for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure ProProfs Classroom you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure ProProfs Training Maker, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Shiftwizard Saml Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/shiftwizard-saml-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with ShiftWizard SAML | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and ShiftWizard SAML.
++++++++ Last updated : 07/19/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with ShiftWizard SAML
+
+In this tutorial, you'll learn how to integrate ShiftWizard SAML with Azure Active Directory (Azure AD). When you integrate ShiftWizard SAML with Azure AD, you can:
+
+* Control in Azure AD who has access to ShiftWizard SAML.
+* Enable your users to be automatically signed-in to ShiftWizard SAML with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ShiftWizard SAML single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* ShiftWizard SAML supports **SP** initiated SSO.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add ShiftWizard SAML from the gallery
+
+To configure the integration of ShiftWizard SAML into Azure AD, you need to add ShiftWizard SAML from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **ShiftWizard SAML** in the search box.
+1. Select **ShiftWizard SAML** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for ShiftWizard SAML
+
+Configure and test Azure AD SSO with ShiftWizard SAML using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ShiftWizard SAML.
+
+To configure and test Azure AD SSO with ShiftWizard SAML, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ShiftWizard SAML SSO](#configure-shiftwizard-saml-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ShiftWizard SAML test user](#create-shiftwizard-saml-test-user)** - to have a counterpart of B.Simon in ShiftWizard SAML that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **ShiftWizard SAML** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following step:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://azureadsso.myshiftwizard.com/SSOActiveDirectory`
+
+1. The ShiftWizard SAML application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the ShiftWizard SAML application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | --- | --- |
+ | employeeID | user.employeeid |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificate-base64-download.png)
+
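+If you later want to spot-check that the custom claims from the attribute mappings above (for example, `employeeID`) are actually being issued, the following minimal sketch decodes a captured SAML response and lists its attributes. This is an optional troubleshooting aid, not part of the official procedure; the file name is a placeholder for wherever you saved the base64-encoded `SAMLResponse` value (for example, from your browser's developer tools).
+
+```python
+# Optional sketch: list the attribute names and values in a captured SAML response
+# so you can confirm the employeeID claim is present. Assumes the base64-encoded
+# SAMLResponse value has been saved to a local file.
+import base64
+import xml.etree.ElementTree as ET
+
+ASSERTION_NS = "urn:oasis:names:tc:SAML:2.0:assertion"
+
+with open("saml_response.b64") as f:
+    xml_bytes = base64.b64decode(f.read())
+
+root = ET.fromstring(xml_bytes)
+for attribute in root.iter(f"{{{ASSERTION_NS}}}Attribute"):
+    values = [v.text for v in attribute.iter(f"{{{ASSERTION_NS}}}AttributeValue")]
+    print(attribute.get("Name"), "=>", values)
+```
+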
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
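+If you prefer to create the test user programmatically instead of through the portal, the following minimal sketch calls the Microsoft Graph `users` endpoint. It is an optional illustration, not part of the official steps; it assumes you already have a Graph access token with the `User.ReadWrite.All` permission, and the domain and password values are placeholders.
+
+```python
+# Optional sketch: create the B.Simon test user through Microsoft Graph.
+# GRAPH_TOKEN is assumed to be an access token with User.ReadWrite.All;
+# replace the placeholder domain and password with values valid for your tenant.
+import requests
+
+GRAPH_TOKEN = "<access-token-with-User.ReadWrite.All>"
+
+new_user = {
+    "accountEnabled": True,
+    "displayName": "B.Simon",
+    "mailNickname": "B.Simon",
+    "userPrincipalName": "B.Simon@contoso.com",
+    "passwordProfile": {
+        "forceChangePasswordNextSignIn": True,
+        "password": "<initial-password>",
+    },
+}
+
+response = requests.post(
+    "https://graph.microsoft.com/v1.0/users",
+    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
+    json=new_user,
+)
+response.raise_for_status()
+print("Created user with object ID:", response.json()["id"])
+```
+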
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ShiftWizard SAML.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ShiftWizard SAML**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure ShiftWizard SAML SSO
+
+To configure single sign-on on the **ShiftWizard SAML** side, you need to send the **Certificate (PEM)** to the [ShiftWizard SAML support team](mailto:it@shiftwizard.com). The support team configures this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create ShiftWizard SAML test user
+
+In this section, you create a user called Britta Simon in ShiftWizard SAML. Work with the [ShiftWizard SAML support team](mailto:it@shiftwizard.com) to add the users to the ShiftWizard SAML platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the ShiftWizard SAML Sign-on URL, where you can initiate the login flow.
+
+* Go to the ShiftWizard SAML Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the ShiftWizard SAML tile in My Apps, this will redirect to the ShiftWizard SAML Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure ShiftWizard SAML, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Splunkenterpriseandsplunkcloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/splunkenterpriseandsplunkcloud-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Splunk Enterprise and Splunk Cloud | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Splunk Enterprise and Splunk Cloud.
+ Title: 'Tutorial: Azure Active Directory integration with Azure AD SSO for Splunk Enterprise and Splunk Cloud | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Azure AD SSO for Splunk Enterprise and Splunk Cloud.
Previously updated : 02/02/2021 Last updated : 07/16/2021
-# Tutorial: Azure Active Directory integration with Splunk Enterprise and Splunk Cloud
+# Tutorial: Azure Active Directory integration with Azure AD SSO for Splunk Enterprise and Splunk Cloud
-In this tutorial, you'll learn how to integrate Splunk Enterprise and Splunk Cloud with Azure Active Directory (Azure AD). When you integrate Splunk Enterprise and Splunk Cloud with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Azure AD SSO for Splunk Enterprise and Splunk Cloud with Azure Active Directory (Azure AD). When you integrate Azure AD SSO for Splunk Enterprise and Splunk Cloud with Azure AD, you can:
-* Control in Azure AD who has access to Splunk Enterprise and Splunk Cloud.
-* Enable your users to be automatically signed in to Splunk Enterprise and Splunk Cloud with their Azure AD accounts.
+* Control in Azure AD who has access to Azure AD SSO for Splunk Enterprise and Splunk Cloud.
+* Enable your users to be automatically signed in to Azure AD SSO for Splunk Enterprise and Splunk Cloud with their Azure AD accounts.
* Manage your accounts in one central location: the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Splunk Enterprise and Splunk Clo
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Splunk Enterprise and Splunk Cloud single sign-on (SSO) enabled subscription.
+* Azure AD SSO for Splunk Enterprise and Splunk Cloud single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Splunk Enterprise and Splunk Cloud supports **SP** initiated SSO
+* Azure AD SSO for Splunk Enterprise and Splunk Cloud supports **SP** initiated SSO.
-## Add Splunk Enterprise and Splunk Cloud from the gallery
+## Add Azure AD SSO for Splunk Enterprise and Splunk Cloud from the gallery
-To configure the integration of Splunk Enterprise and Splunk Cloud into Azure AD, you need to add Splunk Enterprise and Splunk Cloud from the gallery to your list of managed SaaS apps.
+To configure the integration of Azure AD SSO for Splunk Enterprise and Splunk Cloud into Azure AD, you need to add Azure AD SSO for Splunk Enterprise and Splunk Cloud from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Splunk Enterprise and Splunk Cloud** in the search box.
-1. Select **Splunk Enterprise and Splunk Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Azure AD SSO for Splunk Enterprise and Splunk Cloud** in the search box.
+1. Select **Azure AD SSO for Splunk Enterprise and Splunk Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Splunk Enterprise and Splunk Cloud
+## Configure and test Azure AD SSO for Azure AD SSO for Splunk Enterprise and Splunk Cloud
-Configure and test Azure AD SSO with Splunk Enterprise and Splunk Cloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Splunk Enterprise and Splunk Cloud.
+Configure and test Azure AD SSO with Azure AD SSO for Splunk Enterprise and Splunk Cloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Azure AD SSO for Splunk Enterprise and Splunk Cloud.
-To configure and test Azure AD SSO with Splunk Enterprise and Splunk Cloud, perform the following steps:
+To configure and test Azure AD SSO with Azure AD SSO for Splunk Enterprise and Splunk Cloud, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Splunk Enterprise and Splunk Cloud SSO](#configure-splunk-enterprise-and-splunk-cloud-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Splunk Enterprise and Splunk Cloud test user](#create-splunk-enterprise-and-splunk-cloud-test-user)** - to have a counterpart of B.Simon in Splunk Enterprise and Splunk Cloud that is linked to the Azure AD representation of user.
+1. **[Configure Azure AD SSO for Splunk Enterprise and Splunk Cloud SSO](#configure-azure-ad-sso-for-splunk-enterprise-and-splunk-cloud-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Azure AD SSO for Splunk Enterprise and Splunk Cloud test user](#create-azure-ad-sso-for-splunk-enterprise-and-splunk-cloud-test-user)** - to have a counterpart of B.Simon in Azure AD SSO for Splunk Enterprise and Splunk Cloud that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Splunk Enterprise and Splunk Cloud** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Azure AD SSO for Splunk Enterprise and Splunk Cloud** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section, perform the following pattern:
+
+4. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Sign-on URL** text box, type a URL using the following pattern: `https://<splunkserverUrl>/app/launcher/home`
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<splunkserver>/saml/acs` > [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [Splunk Enterprise and Splunk Cloud Client support team](https://www.splunk.com/en_us/about-splunk/contact-us.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [Azure AD SSO for Splunk Enterprise and Splunk Cloud Client support team](https://www.splunk.com/en_us/about-splunk/contact-us.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Splunk Enterprise and Splunk Cloud.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Azure AD SSO for Splunk Enterprise and Splunk Cloud.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Splunk Enterprise and Splunk Cloud**.
+1. In the applications list, select **Azure AD SSO for Splunk Enterprise and Splunk Cloud**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Splunk Enterprise and Splunk Cloud SSO
+## Configure Azure AD SSO for Splunk Enterprise and Splunk Cloud SSO
- To configure single sign-on on **Splunk Enterprise and Splunk Cloud** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Splunk Enterprise and Splunk Cloud support team](https://www.splunk.com/en_us/about-splunk/contact-us.html). They set this setting to have the SAML SSO connection set properly on both sides.
+ To configure single sign-on on the **Azure AD SSO for Splunk Enterprise and Splunk Cloud** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Azure AD SSO for Splunk Enterprise and Splunk Cloud support team](https://www.splunk.com/en_us/about-splunk/contact-us.html). The support team configures these settings so that the SAML SSO connection is set properly on both sides.
-### Create Splunk Enterprise and Splunk Cloud test user
+### Create Azure AD SSO for Splunk Enterprise and Splunk Cloud test user
-In this section, you create a user called Britta Simon in Splunk Enterprise and Splunk Cloud. Work with [Splunk Enterprise and Splunk Cloud support team](https://www.splunk.com/en_us/about-splunk/contact-us.html) to add the users in the Splunk Enterprise and Splunk Cloud platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Azure AD SSO for Splunk Enterprise and Splunk Cloud. Work with the [Azure AD SSO for Splunk Enterprise and Splunk Cloud support team](https://www.splunk.com/en_us/about-splunk/contact-us.html) to add the users to the Azure AD SSO for Splunk Enterprise and Splunk Cloud platform. Users must be created and activated before you use single sign-on.
## Test SSO In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on **Test this application** in Azure portal. This will redirect to Splunk Enterprise and Splunk Cloud Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect to the Azure AD SSO for Splunk Enterprise and Splunk Cloud Sign-on URL, where you can initiate the login flow.
-* Go to Splunk Enterprise and Splunk Cloud Sign-on URL directly and initiate the login flow from there.
+* Go to the Azure AD SSO for Splunk Enterprise and Splunk Cloud Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Splunk Enterprise and Splunk Cloud tile in the My Apps, this will redirect to Splunk Enterprise and Splunk Cloud Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Azure AD SSO for Splunk Enterprise and Splunk Cloud tile in My Apps, this will redirect to the Azure AD SSO for Splunk Enterprise and Splunk Cloud Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Splunk Enterprise and Splunk Cloud you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app)
+Once you configure Azure AD SSO for Splunk Enterprise and Splunk Cloud, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Talentech Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/talentech-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Talentech for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Talentech.
+
+documentationcenter: ''
+
+writer: Zhchia
+ms.assetid: 0a83529b-b150-4af8-bc5b-a0f4345c3356
+ms.devlang: na
+ Last updated : 07/14/2021
+# Tutorial: Configure Talentech for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Talentech and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Talentech](https://www.talentech.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Talentech
+> * Remove users in Talentech when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Talentech
+> * Provision groups and group memberships in Talentech
+> * Single sign-on to Talentech (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Talentech.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Talentech](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Talentech to support provisioning with Azure AD
+
+1. Log in to [Talentech](https://www.talentech.com).
+
+2. Navigate to **Integrations** in the left panel and click on **Add new integration**.
+
+ ![Navigate](media/talentech-provisioning-tutorial/integrations.png)
+
+3. Enter a **Name** for the integration and click **Add**.
+
+4. Navigate to the integration you created and click on **Create api-access token**.
+
+ ![api](media/talentech-provisioning-tutorial/token.png)
+
+5. An access token is generated. This value will be entered in the **Secret Token** field in the Provisioning tab of your Talentech application in the Azure portal.
+
+ ![permanent](media/talentech-provisioning-tutorial/bearer.png)
+
+6. Reach out to Talentech support to generate a Tenant URL. This value will be entered in the **Tenant URL** field in the Provisioning tab of your Talentech application in the Azure portal.
+
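+Optionally, once you have both values, you can sanity-check them before moving on. The sketch below is illustrative only and assumes the Tenant URL supplied by Talentech support points at a standard SCIM 2.0 base URL; the URL shown is a hypothetical placeholder.
+
+```python
+# Optional sketch: call the SCIM endpoint with the api-access token from the
+# steps above to confirm the values work before entering them in the Azure portal.
+# TENANT_URL is a hypothetical placeholder; use the value from Talentech support.
+import requests
+
+TENANT_URL = "https://scim.example.talentech.com/scim/v2"
+SECRET_TOKEN = "<api-access-token-from-the-integration-page>"
+
+response = requests.get(
+    f"{TENANT_URL}/Users?count=1",
+    headers={
+        "Authorization": f"Bearer {SECRET_TOKEN}",
+        "Accept": "application/scim+json",
+    },
+    timeout=30,
+)
+print(response.status_code)  # 200 indicates the token and URL are usable
+```
+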
+## Step 3. Add Talentech from the Azure AD application gallery
+
+Add Talentech from the Azure AD application gallery to start managing provisioning to Talentech. If you have previously set up Talentech for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Talentech, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add extra roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Talentech
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Talentech based on user and group assignments in Azure AD.
+
+### To configure automatic user provisioning for Talentech in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Talentech**.
+
+ ![The Talentech link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Talentech Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Talentech. If the connection fails, ensure your Talentech account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Talentech**.
+
+9. Review the user attributes that are synchronized from Azure AD to Talentech in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Talentech for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Talentech API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |externalId|String|
+ |active|Boolean|
+ |name.givenName|String|
+ |name.familyName|String|
+
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Talentech**.
+
+11. Review the group attributes that are synchronized from Azure AD to Talentech in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Talentech for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ |---|---|---|
+ |displayName|String|&check;|
+ |externalId|String|
+ |members|Reference|
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Talentech, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and groups that you would like to provision to Talentech by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
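+If you prefer to check the provisioning logs programmatically, the sketch below reads recent entries through the Microsoft Graph provisioning logs endpoint. This is an optional illustration rather than part of the required setup; it assumes an access token with the `AuditLog.Read.All` permission, and it uses `.get()` lookups because the exact fields returned depend on the provisioning logs API surface described in the documentation linked above.
+
+```python
+# Optional sketch: list recent provisioning log entries via Microsoft Graph.
+# GRAPH_TOKEN is assumed to be an access token with AuditLog.Read.All.
+import requests
+
+GRAPH_TOKEN = "<access-token-with-AuditLog.Read.All>"
+
+response = requests.get(
+    "https://graph.microsoft.com/v1.0/auditLogs/provisioning",
+    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
+    params={"$top": "20"},
+)
+response.raise_for_status()
+for event in response.json().get("value", []):
+    print(
+        event.get("activityDateTime"),
+        event.get("provisioningAction"),
+        (event.get("provisioningStatusInfo") or {}).get("status"),
+    )
+```
+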
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Taskize Connect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/taskize-connect-tutorial.md
Previously updated : 12/16/2019 Last updated : 07/16/2021
In this tutorial, you'll learn how to integrate Taskize Connect with Azure Activ
* Enable your users to be automatically signed-in to Taskize Connect with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. --
-* Taskize Connect supports **SP and IDP** initiated SSO
+* Taskize Connect supports **SP and IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Taskize Connect into Azure AD, you need to add Taskize Connect from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Taskize Connect** in the search box. 1. Select **Taskize Connect** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for Taskize Connect
+## Configure and test Azure AD SSO for Taskize Connect
Configure and test Azure AD SSO with Taskize Connect using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Taskize Connect.
-To configure and test Azure AD SSO with Taskize Connect, complete the following building blocks:
+To configure and test Azure AD SSO with Taskize Connect, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Taskize Connect, complete the following
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Taskize Connect** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Taskize Connect** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section the application is pre-configured in **IDP** initiated mode and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
+1. On the **Basic SAML Configuration** section, to configure the application in **IDP** initiated mode, perform the following step:
+
+ a. In the **Reply URL** textbox, type one of the following URLs:
+
+ | **Reply URL** |
+ | -- |
+ |`https://connect.taskize.com/Shibboleth.sso/SAML2/POST`|
+ |`https://help.taskize.com/Shibboleth.sso/SAML2/POST`|
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://connect.taskize.com/connect/` 1. Taskize Connect application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Taskize Connect**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
The objective of this section is to create a user called B.Simon in Taskize Conn
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Taskize Connect Sign-on URL, where you can initiate the login flow.
-When you click the Taskize Connect tile in the Access Panel, you should be automatically signed in to the Taskize Connect for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the Taskize Connect Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Taskize Connect for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Taskize Connect tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Taskize Connect for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Taskize Connect with Azure AD](https://aad.portal.azure.com/)
+Once you configure Taskize Connect, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Thrive Lxp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/thrive-lxp-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Thrive LXP for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Thrive LXP.
+
+documentationcenter: ''
+
+writer: Zhchia
+ms.assetid: 1b4993b3-7fb1-4128-a399-3bad8e26559f
+ms.devlang: na
+ Last updated : 07/14/2021
+# Tutorial: Configure Thrive LXP for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Thrive LXP and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Thrive LXP](https://thrivelearning.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Thrive LXP
+> * Remove users in Thrive LXP when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Thrive LXP
+> * Provision groups and group memberships in Thrive LXP
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/thrive-lxp-tutorial) to Thrive LXP (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A **SCIM token** supplied by your contact at Thrive LXP.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Thrive LXP](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Thrive LXP to support provisioning with Azure AD
+
+Reach out to your Thrive LXP contact to generate your **Tenant URL** and **Secret Token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Thrive LXP application in the Azure portal.
+
+## Step 3. Add Thrive LXP from the Azure AD application gallery
+
+Add Thrive LXP from the Azure AD application gallery to start managing provisioning to Thrive LXP. If you have previously set up Thrive LXP for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Thrive LXP, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Thrive LXP
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Thrive LXP based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Thrive LXP in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Thrive LXP**.
+
+ ![The Thrive LXP link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Thrive LXP Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Thrive LXP. If the connection fails, ensure your Thrive LXP account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Thrive LXP**.
+
+9. Review the user attributes that are synchronized from Azure AD to Thrive LXP in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Thrive LXP for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Thrive LXP API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |title|String|
+ |active|Boolean|
+ |emails[type eq "work"].value|String|
+ |preferredLanguage|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |timezone|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
+
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Thrive LXP**.
+
+11. Review the group attributes that are synchronized from Azure AD to Thrive LXP in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Thrive LXP for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ |---|---|---|
+ |displayName|String|&check;|
+ |externalId|String|
+ |members|Reference|
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Thrive LXP, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to Thrive LXP by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
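+For reference, the user attribute mappings in step 9 correspond roughly to a SCIM 2.0 user payload like the one sketched below. This is an illustrative example with made-up values, not an exact capture of what the provisioning service sends to Thrive LXP.
+
+```python
+# Illustrative only: an approximate SCIM 2.0 user payload matching the user
+# attribute mappings above. All values, including the manager reference, are
+# hypothetical placeholders.
+example_scim_user = {
+    "schemas": [
+        "urn:ietf:params:scim:schemas:core:2.0:User",
+        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
+    ],
+    "userName": "b.simon@contoso.com",
+    "title": "Marketing Lead",
+    "active": True,
+    "emails": [{"type": "work", "value": "b.simon@contoso.com", "primary": True}],
+    "preferredLanguage": "en-US",
+    "name": {"givenName": "B", "familyName": "Simon"},
+    "timezone": "Europe/London",
+    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
+        "manager": {"value": "<manager-object-id>"}
+    },
+}
+```
+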
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
aks Servicemesh About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/servicemesh-about.md
These are some of the scenarios that can be enabled for your workloads when you
- **Traffic management and manipulation** - Create a policy on a service that will rate limit all traffic to a version of a service from a specific origin. Or a policy that applies a retry strategy to classes of failures between specified services. Mirror live traffic to new versions of services during a migration or to debug issues. Inject faults between services in a test environment to test resiliency. -- **Observability** - Gain insight into how your services are connected the traffic that flows between them. Obtain metrics, logs, and traces for all traffic in cluster, and ingress/egress. Add distributed tracing abilities to your applications.
+- **Observability** - Gain insight into how your services are connected by analyzing the traffic that flows between them. Obtain metrics, logs, and traces for all traffic in cluster, and ingress/egress. Add distributed tracing abilities to your applications.
## Architecture
Before you select a service mesh, ensure that you understand your requirements a
- **Is an Ingress Controller sufficient for my needs?** - Sometimes having a capability like a/b testing or traffic splitting at the ingress is sufficient to support the required scenario. Don't add complexity to your environment with no upside. -- **Can my workloads and environment tolerate the additional overheads?** - All the additional components required to support the service mesh require additional resources like cpu and memory. In addition, all the proxies and their associated policy checks add latency to your traffic. If you have workloads that are very sensitive to latency or cannot provide the additional resources to cover the service mesh components, then re-consider.
+- **Can my workloads and environment tolerate the additional overheads?** - All the components required to support the service mesh consume additional resources like CPU and memory. In addition, all the proxies and their associated policy checks add latency to your traffic. If you have workloads that are very sensitive to latency or cannot provide the additional resources to cover the service mesh components, then reconsider.
- **Is this adding additional complexity unnecessarily?** - If the reason for installing a service mesh is to gain a capability that is not necessarily critical to the business or operational teams, then consider whether the additional complexity of installation, maintenance, and configuration is worth it.
You may also want to explore Service Mesh Interface (SMI), a standard interface
<!-- LINKS - internal --> [istio-about]: ./servicemesh-istio-about.md [linkerd-about]: ./servicemesh-linkerd-about.md
-[consul-about]: ./servicemesh-consul-about.md
+[consul-about]: ./servicemesh-consul-about.md
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-advanced-policies.md
Title: Azure API Management advanced policies | Microsoft Docs description: Learn about the advanced policies available for use in Azure API Management. See examples and view additional available resources. Previously updated : 11/13/2020 Last updated : 07/19/2021
This topic provides a reference for the following API Management policies. For i
- [Forward request](#ForwardRequest) - Forwards the request to the backend service. - [Limit concurrency](#LimitConcurrency) - Prevents enclosed policies from executing by more than the specified number of requests at a time. - [Log to Event Hub](#log-to-eventhub) - Sends messages in the specified format to an Event Hub defined by a Logger entity.
+- [Emit metrics](#emit-metrics) - Sends custom metrics to Application Insights at execution.
- [Mock response](#mock-response) - Aborts pipeline execution and returns a mocked response directly to the caller. - [Retry](#Retry) - Retries execution of the enclosed policy statements, if and until the condition is met. Execution will repeat at the specified time intervals and up to the specified retry count. - [Return response](#ReturnResponse) - Aborts pipeline execution and returns the specified response directly to the caller.
This policy can be used in the following policy [sections](./api-management-howt
- **Policy scopes:** all scopes
+## Emit metrics
+
+The `emit-metric` policy sends custom metrics in the specified format to Application Insights.
+
+> [!NOTE]
+> * Custom metrics are a [preview feature](../azure-monitor/essentials/metrics-custom-overview.md) of Azure Monitor and subject to [limitations](../azure-monitor/essentials/metrics-custom-overview.md#design-limitations-and-considerations).
+> * For more information about the API Management data added to Application Insights, see [How to integrate Azure API Management with Azure Application Insights](./api-management-howto-app-insights.md#what-data-is-added-to-application-insights).
+
+### Policy statement
+
+```xml
+<emit-metric name="name of custom metric" value="value of custom metric" namespace="metric namespace">
+ <dimension name="dimension name" value="dimension value" />
+</emit-metric>
+```
+
+### Example
+
+The following example sends a custom metric to count the number of API requests along with user ID, client IP, and API ID as custom dimensions.
+
+```xml
+<policies>
+ <inbound>
+ <emit-metric name="Request" value="1" namespace="my-metrics">
+ <dimension name="User ID" />
+ <dimension name="Client IP" value="@(context.Request.IpAddress)" />
+ <dimension name="API ID" />
+ </emit-metric>
+ </inbound>
+ <outbound>
+ </outbound>
+</policies>
+```
+
+### Elements
+
+| Element | Description | Required |
+| --- | --- | --- |
+| emit-metric | Root element. Emits a custom metric with the specified name, value, and namespace. | Yes |
+| dimension | Subelement. Add one of these elements for each dimension to include in the custom metric. | Yes |
+
+### Attributes
+
+#### emit-metric
+| Attribute | Description | Required | Type | Default value |
+| --- | --- | --- | --- | --- |
+| name | Name of custom metric. | Yes | string, expression | N/A |
+| namespace | Namespace of custom metric. | No | string, expression | API Management |
+| value | Value of custom metric. | No | int, expression | 1 |
+
+#### dimension
+| Attribute | Description | Required | Type | Default value |
+| --- | --- | --- | --- | --- |
+| name | Name of dimension. | Yes | string, expression | N/A |
+| value | Value of dimension. Can be omitted only if `name` matches one of the default dimensions; in that case, the value is supplied based on the dimension name. | No | string, expression | N/A |
+
+**Default dimension names that may be used without a value:**
+
+* API ID
+* Operation ID
+* Product ID
+* User ID
+* Subscription ID
+* Location ID
+* Gateway ID
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** inbound, outbound, backend, on-error
+
+- **Policy scopes:** all scopes
+ ## <a name="mock-response"></a> Mock response The `mock-response`, as the name implies, is used to mock APIs and operations. It aborts normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of highest fidelity. It prefers response content examples, whenever available. It generates sample responses from schemas, when schemas are provided and examples are not. If neither examples or schemas are found, responses with no content are returned.
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-app-insights.md
na Previously updated : 02/25/2021 Last updated : 07/19/2021
Before you can use Application Insights, you first need to create an instance of
Application Insights receives:
-+ *Request* telemetry item, for every incoming request (*frontend request*, *frontend response*),
-+ *Dependency* telemetry item, for every request forwarded to a backend service (*backend request*, *backend response*),
++ *Request* telemetry item, for every incoming request:
+ + *frontend request*, *frontend response*
++ *Dependency* telemetry item, for every request forwarded to a backend service:
+ + *backend request*, *backend response*
+ *Exception* telemetry item, for every failed request: + failed because of a closed client connection + triggered an *on-error* section of the API policies
- + has a response HTTP status code matching 4xx or 5xx.
-+ *Trace* telemetry item, if you configure a [trace](api-management-advanced-policies.md#Trace) policy. The `severity` setting in the `trace` policy must be equal to or greater than the `verbosity` setting in the Application Insights logging.
+ + has a response HTTP status code matching 4xx or 5xx
++ *Trace* telemetry item, if you configure a [trace](api-management-advanced-policies.md#Trace) policy.
+ + The `severity` setting in the `trace` policy must be equal to or greater than the `verbosity` setting in the Application Insights logging.
+
+You can also emit custom metrics by configuring the [`emit-metric`](api-management-advanced-policies.md#emit-metrics) policy.
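A minimal sketch of such a policy in the inbound section (the metric name and namespace are illustrative, not from this article):

```xml
<inbound>
    <!-- Count each incoming request as a custom metric, split by API. -->
    <emit-metric name="Request" value="1" namespace="my-metrics">
        <dimension name="API ID" />
    </emit-metric>
</inbound>
```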
> [!NOTE] > See [Application Insights limits](../azure-monitor/service-limits.md#application-insights) for information about the maximum size and number of metrics and events per Application Insights instance.
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-policies.md
description: Learn about the policies available for use in Azure API Management.
documentationcenter: '' - - Previously updated : 02/17/2021 Last updated : 07/19/2021 # API Management policies
This section provides a reference for the following API Management policies. For
- [Forward request](api-management-advanced-policies.md#ForwardRequest) - Forwards the request to the backend service. - [Limit concurrency](api-management-advanced-policies.md#LimitConcurrency) - Prevents enclosed policies from executing by more than the specified number of requests at a time. - [Log to Event Hub](api-management-advanced-policies.md#log-to-eventhub) - Sends messages in the specified format to a message target defined by a Logger entity.
+ - [Emit metrics](api-management-advanced-policies.md#emit-metrics) - Sends custom metrics to Application Insights at execution.
- [Mock response](api-management-advanced-policies.md#mock-response) - Aborts pipeline execution and returns a mocked response directly to the caller. - [Retry](api-management-advanced-policies.md#Retry) - Retries execution of the enclosed policy statements, if and until the condition is met. Execution will repeat at the specified time intervals and up to the specified retry count. - [Return response](api-management-advanced-policies.md#ReturnResponse) - Aborts pipeline execution and returns the specified response directly to the caller.
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-policy-expressions.md
description: Learn about policy expressions in Azure API Management. See example
documentationcenter: '' - - Previously updated : 03/22/2019 Last updated : 07/07/2021 # API Management policy expressions
For more information:
- See how to supply context information to your backend service. Use the [Set query string parameter](api-management-transformation-policies.md#SetQueryStringParameter) and [Set HTTP header](api-management-transformation-policies.md#SetHTTPheader) policies to supply this information. - See how to use the [Validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy to pre-authorize access to operations based on token claims.-- See how to use an [API Inspector](./api-management-howto-api-inspector.md) trace to see how policies are evaluated and the results of those evaluations.
+- See how to use an [API Inspector](./api-management-howto-api-inspector.md) trace to detect how policies are evaluated and the results of those evaluations.
- See how to use expressions with the [Get from cache](api-management-caching-policies.md#GetFromCache) and [Store to cache](api-management-caching-policies.md#StoreToCache) policies to configure API Management response caching. Set a duration that matches the response caching of the backend service as specified by the backed service's `Cache-Control` directive. - See how to perform content filtering. Remove data elements from the response received from the backend using the [Control flow](api-management-advanced-policies.md#choose) and [Set body](api-management-transformation-policies.md#SetBody) policies. - To download the policy statements, see the [api-management-samples/policies](https://github.com/Azure/api-management-samples/tree/master/policies) GitHub repo.
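For example, a minimal sketch (the header name is an assumption for illustration) that uses an expression in the Set HTTP header policy to supply context information to the backend service:

```xml
<inbound>
    <!-- Pass the API and operation names to the backend in a single header. -->
    <set-header name="x-api-context" exists-action="override">
        <value>@(context.Api.Name + ";" + context.Operation.Name)</value>
    </set-header>
</inbound>
```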
The following table lists the .NET Framework types and their members that are al
|System.Linq.Enumerable|All| |System.Math|All| |System.MidpointRounding|All|
+|System.Net.IPAddress|All|
|System.Net.WebUtility|All| |System.Nullable|All| |System.Random|All|
A variable named `context` is implicitly available in every policy [expression](
|-|-| |context|[Api](#ref-context-api): [IApi](#ref-iapi)<br /><br /> [Deployment](#ref-context-deployment)<br /><br /> Elapsed: TimeSpan - time interval between the value of Timestamp and current time<br /><br /> [LastError](#ref-context-lasterror)<br /><br /> [Operation](#ref-context-operation)<br /><br /> [Product](#ref-context-product)<br /><br /> [Request](#ref-context-request)<br /><br /> RequestId: Guid - unique request identifier<br /><br /> [Response](#ref-context-response)<br /><br /> [Subscription](#ref-context-subscription)<br /><br /> Timestamp: DateTime - point in time when request was received<br /><br /> Tracing: bool - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [Variables](#ref-context-variables): IReadOnlyDictionary<string, object><br /><br /> void Trace(message: string)| |<a id="ref-context-api"></a>context.Api|Id: string<br /><br /> IsCurrentRevision: bool<br /><br /> Name: string<br /><br /> Path: string<br /><br /> Revision: string<br /><br /> ServiceUrl: [IUrl](#ref-iurl)<br /><br /> Version: string |
-|<a id="ref-context-deployment"></a>context.Deployment|Region: string<br /><br /> ServiceName: string<br /><br /> Certificates: IReadOnlyDictionary<string, X509Certificate2>|
+|<a id="ref-context-deployment"></a>context.Deployment|GatewayId: string (returns 'managed' for managed gateways)<br /><br /> Region: string<br /><br /> ServiceName: string<br /><br /> Certificates: IReadOnlyDictionary<string, X509Certificate2>|
|<a id="ref-context-lasterror"></a>context.LastError|Source: string<br /><br /> Reason: string<br /><br /> Message: string<br /><br /> Scope: string<br /><br /> Section: string<br /><br /> Path: string<br /><br /> PolicyId: string<br /><br /> For more information about context.LastError, see [Error handling](api-management-error-handling-policies.md).| |<a id="ref-context-operation"></a>context.Operation|Id: string<br /><br /> Method: string<br /><br /> Name: string<br /><br /> UrlTemplate: string| |<a id="ref-context-product"></a>context.Product|Apis: IEnumerable<[IApi](#ref-iapi)\><br /><br /> ApprovalRequired: bool<br /><br /> Groups: IEnumerable<[IGroup](#ref-igroup)\><br /><br /> Id: string<br /><br /> Name: string<br /><br /> State: enum ProductState {NotPublished, Published}<br /><br /> SubscriptionLimit: int?<br /><br /> SubscriptionRequired: bool|
A variable named `context` is implicitly available in every policy [expression](
|<a id="ref-context-user"></a>context.User|Email: string<br /><br /> FirstName: string<br /><br /> Groups: IEnumerable<[IGroup](#ref-igroup)\><br /><br /> Id: string<br /><br /> Identities: IEnumerable<[IUserIdentity](#ref-iuseridentity)\><br /><br /> LastName: string<br /><br /> Note: string<br /><br /> RegistrationDate: DateTime| |<a id="ref-iapi"></a>IApi|Id: string<br /><br /> Name: string<br /><br /> Path: string<br /><br /> Protocols: IEnumerable<string\><br /><br /> ServiceUrl: [IUrl](#ref-iurl)<br /><br /> SubscriptionKeyParameterNames: [ISubscriptionKeyParameterNames](#ref-isubscriptionkeyparameternames)| |<a id="ref-igroup"></a>IGroup|Id: string<br /><br /> Name: string|
-|<a id="ref-imessagebody"></a>IMessageBody|As<T\>(preserveContent: bool = false): Where T: string, byte[],JObject, JToken, JArray, XNode, XElement, XDocument<br /><br /> The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods are used to read a request and response message bodies in a specified type `T`. By default the method uses the original message body stream and renders it unavailable after it returns. To avoid that by having the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`. Go [here](api-management-transformation-policies.md#SetBody) to see an example.|
+|<a id="ref-imessagebody"></a>IMessageBody|As<T\>(preserveContent: bool = false): Where T: string, byte[],JObject, JToken, JArray, XNode, XElement, XDocument<br /><br /> The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods are used to read a request and response message bodies in a specified type `T`. By default the method uses the original message body stream and renders it unavailable after it returns. To avoid that by having the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as in [this example](api-management-transformation-policies.md#SetBody).|
|<a id="ref-iurl"></a>IUrl|Host: string<br /><br /> Path: string<br /><br /> Port: int<br /><br /> [Query](#ref-iurl-query): IReadOnlyDictionary<string, string[]><br /><br /> QueryString: string<br /><br /> Scheme: string| |<a id="ref-iuseridentity"></a>IUserIdentity|Id: string<br /><br /> Provider: string| |<a id="ref-isubscriptionkeyparameternames"></a>ISubscriptionKeyParameterNames|Header: string<br /><br /> Query: string|
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/validation-policies.md
Previously updated : 03/12/2021 Last updated : 07/12/2021
Available actions:
| Action | Description | | | | | ignore | Skip validation. |
-| prevent | Block the request or response processing, log the verbose validation error, and return an error. Processing is interrupted when the first set of errors is detected. |
-| detect | Log validation errors, without interrupting request or response processing. |
+| prevent | Block the request or response processing, log the verbose [validation error](#validation-errors), and return an error. Processing is interrupted when the first set of errors is detected. |
+| detect | Log [validation errors](#validation-errors), without interrupting request or response processing. |
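For example, a minimal sketch (the variable name and size limit are illustrative) that combines these actions in a `validate-content` policy: oversized bodies and unknown content types are blocked, while JSON schema mismatches are only logged:

```xml
<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
    <!-- JSON bodies are checked against the API schema; mismatches are logged (detect) but not blocked. -->
    <content type="application/json" validate-as="json" action="detect" />
</validate-content>
```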
## Logs
In the following example, the JSON payload in requests and responses is validate
| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isnΓÇÖt specified in the API schema. | Yes | N/A | | max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). | Yes | N/A | | size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
| type | Content type to execute body validation for, checked against the `Content-Type` header. This value is case insensitive. If empty, it applies to every content type specified in the API schema. | No | N/A | | validate-as | Validation engine to use for validation of the body of a request or response with a matching content type. Currently, the only supported value is "json". | Yes | N/A | | action | [Action](#actions) to perform for requests or responses whose body doesn't match the specified content type. | Yes | N/A |
In this example, all query and path parameters are validated in the prevention m
| -- | - | -- | - | | specified-parameter-action | [Action](#actions) to perform for request parameters specified in the API schema. <br/><br/> When provided in a `headers`, `query`, or `path` element, the value overrides the value of `specified-parameter-action` in the `validate-parameters` element. | Yes | N/A | | unspecified-parameter-action | [Action](#actions) to perform for request parameters that are not specified in the API schema. <br/><br/>When provided in a `headers`or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
| name | Name of the parameter to override validation action for. This value is case insensitive. | Yes | N/A | | action | [Action](#actions) to perform for the parameter with the matching name. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration. | Yes | N/A |
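A minimal sketch (the variable name and the overridden header are illustrative) of a `validate-parameters` policy that blocks schema-defined parameters that fail validation, only logs unknown parameters, and ignores one specific header:

```xml
<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="detect" errors-variable-name="paramValidation">
    <!-- Unknown request headers are only logged; one well-known header is ignored entirely. -->
    <headers specified-parameter-action="prevent" unspecified-parameter-action="detect">
        <parameter name="User-Agent" action="ignore" />
    </headers>
</validate-parameters>
```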
The `validate-headers` policy validates the response headers against the API sch
| -- | - | -- | - | | specified-header-action | [Action](#actions) to perform for response headers specified in the API schema. | Yes | N/A | | unspecified-header-action | [Action](#actions) to perform for response headers that are not specified in the API schema. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
| name | Name of the header to override validation action for. This value is case insensitive. | Yes | N/A | | action | [Action](#actions) to perform for header with the matching name. If the header is specified in the API schema, this value overrides value of `specified-header-action` in the `validate-headers` element. Otherwise, it overrides value of `unspecified-header-action` in the validate-headers element. | Yes | N/A |
The `validate-status-code` policy validates the HTTP status codes in responses a
| Name | Description | Required | Default | | -- | - | -- | - | | unspecified-status-code-action | [Action](#actions) to perform for HTTP status codes in responses that are not specified in the API schema. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
| code | HTTP status code to override validation action for. | Yes | N/A | | action | [Action](#actions) to perform for the matching status code, which is not specified in the API schema. If the status code is specified in the API schema, this override does not take effect. | Yes | N/A |
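A minimal sketch (the overridden code and variable name are illustrative) of a `validate-status-code` policy that blocks undocumented status codes but lets 404 responses through:

```xml
<validate-status-code unspecified-status-code-action="prevent" errors-variable-name="statusCodeValidation">
    <!-- 404 is allowed even though it isn't declared in the API schema. -->
    <status-code code="404" action="ignore" />
</validate-status-code>
```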
The following table lists all possible errors of the validation policies.
* **Details** - Can be used to investigate errors. Not meant to be shared publicly. * **Public response** - Error returned to the client. Does not leak implementation details.
-| **Name** | **Type** | **Validation rule** | **Details** | **Public response** | **Action** |
+When a validation policy specifies the `prevent` action and produces an error, the response from API Management includes an HTTP status code: 400 when the policy is applied in the inbound section, and 502 when the policy is applied in the outbound section.
++
+| **Name** | **Type** | **Validation rule** | **Details** | **Public response** | **Action** |
|-|-||||-| | **validate-content** | | | | | | | |RequestBody | SizeLimit | Request's body is {size} bytes long and it exceeds the configured limit of {maxSize} bytes. | Request's body is {size} bytes long and it exceeds the limit of {maxSize} bytes. | detect / prevent |
The following table lists all possible errors of the validation policies.
| {messageContentType} | ResponseBody | IncorrectMessage | Body of the response does not conform to the definition {definitionName}, which is associated with the content type {messageContentType}.<br/><br/>{valError.Message} Line: {valError.LineNumber}, Position: {valError.LinePosition} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent | | | RequestBody | ValidationException | Body of the request cannot be validated for the content type {messageContentType}.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent | | | ResponseBody | ValidationException | Body of the response cannot be validated for the content type {messageContentType}.<br/><br/>{exception details} | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
-| **validate-parameter / validate-headers** | | | | | |
+| **validate-parameters / validate-headers** | | | | | |
| {paramName} / {headerName} | QueryParameter / PathParameter / RequestHeader | Unspecified | Unspecified {path parameter / query parameter / header} {paramName} is not allowed. | Unspecified {path parameter / query parameter / header} {paramName} is not allowed. | detect / prevent | | {headerName} | ResponseHeader | Unspecified | Unspecified header {headerName} is not allowed. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent | | |ApiSchema | | API's schema doesn't exist or it couldn't be resolved. | The request could not be processed due to an internal error. Contact the API owner. | detect / prevent |
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/private-link-security.md
You can start runbooks by doing a POST on the webhook URL. For example, the URL
The user Hybrid Runbook Worker feature of Azure Automation enables you to run runbooks directly on the Azure or non-Azure machine, including servers registered with Azure Arc enabled servers. From the machine or server that's hosting the role, you can run runbooks directly on it and against resources in the environment to manage those local resources.
-A JRDS endpoint is used by the hybrid worker to start/stop runbooks, download the runbooks to the worker, and to send the job log stream back to the Automation service. After enabling JRDS endpoint, the URL would look like this: `https://<automationaccountID>.jobruntimedata.<region>.azure-automation.net`. This would ensure runbook execution on the hybrid worker connected to Azure Virtual Network is able to execute jobs without the need to open an outbound connection to the Internet.
+The hybrid worker uses a JRDS endpoint to start and stop runbooks, download runbooks to the worker, and send the job log stream back to the Automation service. After you enable the JRDS endpoint, the URL looks like this: `https://<automationaccountID>.jrds.<region>.privatelink.azure-automation.net`. This ensures that a hybrid worker connected to an Azure virtual network can run jobs without opening an outbound connection to the Internet.
> [!NOTE] >With the current implementation of Private Links for Azure Automation, it only supports running jobs on the Hybrid Runbook Worker connected to an Azure virtual network and does not support cloud jobs.
automation Automation Tutorial Runbook Graphical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/learn/automation-tutorial-runbook-graphical.md
Title: Create a graphical runbook in Azure Automation
-description: This article teaches you to create, test, and publish a simple graphical runbook in Azure Automation.
-keywords: runbook, runbook template, runbook automation, azure runbook
+description: This article teaches you to create, test, and publish a graphical runbook in Azure Automation.
Previously updated : 09/15/2020 Last updated : 07/16/2021
+# Customer intent: As an administrator, I want to utilize Runbooks to automate certain aspects of my environment.
+ # Tutorial: Create a graphical runbook
-This tutorial walks you through the creation of a [graphical runbook](../automation-runbook-types.md#graphical-runbooks) in Azure Automation. You can create and edit graphical and graphical PowerShell Workflow runbooks using the graphical editor in the Azure portal.
+This tutorial walks you through the creation of a [graphical runbook](../automation-runbook-types.md#graphical-runbooks) in Azure Automation. You can create and edit graphical PowerShell Workflow runbooks using the graphical editor in the Azure portal.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Run and track the status of the runbook job > * Update the runbook to start an Azure virtual machine, with runbook parameters and conditional links
-## Prerequisites
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-To complete this tutorial, you need the following:
+## Prerequisites
-* Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free).
-* [Automation account](../index.yml) to hold the runbook and authenticate to Azure resources. This account must have permission to start and stop the virtual machine.
-* An Azure virtual machine. Since you stop and start this machine, it shouldn't be a production VM.
-* If necessary, [import Azure modules](../shared-resources/modules.md) or [update modules](../automation-update-azure-modules.md) based on the cmdlets that you use.
+* [Automation account](../index.yml) with an [Azure Run as account](../create-run-as-account.md) to hold the runbook and authenticate to Azure resources. This account must have permission to start and stop the virtual machine.
+* PowerShell modules **Az.Accounts** and **Az.Compute** for the Automation Account. For more information, see [Manage modules in Azure Automation](../shared-resources/modules.md).
+* An [Azure virtual machine](../../virtual-machines/windows/quick-create-portal.md) (VM). Since you stop and start this machine, it shouldn't be a production VM. Begin with the VM **stopped**.
-## Step 1 - Create runbook
+## Create runbook
Start by creating a simple runbook that outputs the text `Hello World`.
-1. In the Azure portal, open your Automation account.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
- The Automation account page gives you a quick view of the resources in this account. You should already have some assets. Most of those assets are the modules automatically included in a new Automation account. You should also have the Credential asset associated with your subscription.
+1. In the Azure portal, navigate to your Automation account.
-2. Select **Runbooks** under **Process Automation** to open the list of runbooks.
+1. Under **Process Automation**, select **Runbooks** to open the **Runbooks** page.
-3. Create a new runbook by selecting **Create a runbook**.
+1. Select **Create a runbook** to open the **Create a runbook** page.
-4. Give the runbook the name **MyFirstRunbook-Graphical**.
+1. Name the runbook `MyFirstRunbook-Graphical`.
-5. In this case, you're going to create a [graphical runbook](../automation-graphical-authoring-intro.md). Select **Graphical** for **Runbook type**.
+1. From the **Runbook type** drop-down menu, select **Graphical**.
- ![New runbook](../media/automation-tutorial-runbook-graphical/create-new-runbook.png)
+ :::image type="content" source="../media/automation-tutorial-runbook-graphical/create-graphical-runbook.png" alt-text="Create a runbook input page.":::
-6. Click **Create** to create the runbook and open the graphical editor.
+1. Select **Create** to create the runbook and open the graphical editor, the **Edit Graphical Runbook** page.
-## Step 2 - Add activities
+## Add activities
-The Library control on the left side of the editor allows you to select activities to add to your runbook. You're going to add a `Write-Output` cmdlet to output text from the runbook.
+The left side of the editor is the **Library control**, the center is the **Canvas**, and the right side is the **Configuration control**. The **Library control** allows you to select activities to add to your runbook. You're going to add a `Write-Output` cmdlet to output text from the runbook.
-1. In the Library control, click in the search field and type `write-output`. Search results are shown in the following image.
+1. In the **Library control** search field, enter `Write-Output`.
![Microsoft.PowerShell.Utility](../medilet-writeoutput.png)
-2. Scroll down to the bottom of the list. Right-click **Write-Output** and select **Add to canvas**. Alternatively, you can click the ellipsis (...) next to the cmdlet name and then select **Add to canvas**.
+1. Scroll down to the bottom of the list. Right-click **Write-Output** and select **Add to canvas**. You could also select the ellipsis (...) next to the cmdlet name and then select **Add to canvas**.
-3. Click the **Write-Output** activity on the canvas. This action opens the Configuration control page, which allows you to configure the activity.
+1. From **Canvas**, select the **Write-Output** activity. This action populates the **Configuration control** page, which allows you to configure the activity.
-4. The **Label** field defaults to the name of the cmdlet, but you can change it to something more friendly. Change it to `Write Hello World to output`.
+1. From **Configuration control**, the **Label** field defaults to the name of the cmdlet, but you can change it to something more friendly. Change it to `Write Hello World to output`.
-5. Click **Parameters** to provide values for the cmdlet's parameters.
+1. Select **Parameters** to provide values for the cmdlet's parameters.
Some cmdlets have multiple parameter sets, and you need to select which one to use. In this case, `Write-Output` has only one parameter set.
-6. Select the `InputObject` parameter. This is the parameter that you use to specify the text to send to the output stream.
+1. From the **Activity Parameter Configuration** page, select the `INPUTOBJECT` parameter to open the **Parameter Value** page. You use this parameter to specify the text to send to the output stream.
-7. The **Data source** dropdown menu provides sources that you can use to populate a parameter value. In this menu, select **PowerShell expression**.
+1. The **Data source** drop-down menu provides sources that you can use to populate a parameter value. In this menu, select **PowerShell expression**.
You can use output from such sources as another activity, an Automation asset, or a PowerShell expression. In this case, the output is just `Hello World`. You can use a PowerShell expression and specify a string.
-8. In the **Expression** field, type `"Hello World"` and then click **OK** twice to return to the canvas.
+1. In the **Expression** text box, enter `"Hello World"` and then select **OK** twice to return to the graphical editor.
-9. Save the runbook by clicking **Save**.
+1. Select **Save** to save the runbook.
-## Step 3 - Test the runbook
+## Test the runbook
Before you publish the runbook to make it available in production, you should test it to make sure that it works properly. Testing a runbook runs its Draft version and allows you to view its output interactively.
-1. Select **Test pane** to open the Test pane.
+1. From the graphical editor, select **Test pane** to open the **Test** pane.
-2. Click **Start** to start the test. This should be the only enabled option.
+1. Select **Start** to start the test.
-3. Note that a [runbook job](../automation-runbook-execution.md) is created and its status is displayed in the pane.
+ A [runbook job](../automation-runbook-execution.md) is created and its status is displayed in the pane. The job status starts as `Queued`, indicating that the job is waiting for a runbook worker in the cloud to become available. The status changes to `Starting` when a worker claims the job. Finally, the status becomes `Running` when the runbook actually starts to run.
- The job status starts as `Queued`, indicating that the job is waiting for a runbook worker in the cloud to become available. The status changes to `Starting` when a worker claims the job. Finally, the status becomes `Running` when the runbook actually starts to run.
+ When the runbook job completes, the Test pane displays its output. In this case, you see `Hello World`.
-4. When the runbook job completes, the Test pane displays its output. In this case, you see `Hello World`.
+ :::image type="content" source="../media/automation-tutorial-runbook-graphical/runbook-test-results.png" alt-text="Hello World runbook output.":::
- ![Hello World runbook output](../media/automation-tutorial-runbook-graphical/runbook-test-results.png)
+1. Select **X** in the top-right corner to close the **Test** pane and return to the graphical editor.
-5. Close the Test pane to return to the canvas.
+## Publish and start the runbook
-## Step 4 - Publish and start the runbook
+The runbook that you've created is still in Draft mode and must be published before you can run it in production. When you publish a runbook, you overwrite the existing Published version with the Draft version.
-The runbook that you have created is still in Draft mode. It needs to be published before you can run it in production. When you publish a runbook, you overwrite the existing Published version with the Draft version. In this case, you don't have a Published version yet because you just created the runbook.
+1. From the graphical editor, select **Publish** to publish the runbook and then **Yes** when prompted. You're returned to the **Runbook** Overview page.
-1. Select **Publish** to publish the runbook and then **Yes** when prompted.
+1. From the **Runbook** Overview page, the **Status** value is **Published**.
-2. Scroll left to view the runbook on the Runbooks page, and note that the **Authoring Status** value is set to **Published**.
+1. Select **Start** and then **Yes** when prompted to start the runbook and open the **Job** page.
-3. Scroll back to the right to view the page for **MyFirstRunbook-Graphical**.
+ The options across the top allow you to: start the runbook now, schedule a future start time, or create a [webhook](../automation-webhooks.md) so that the runbook can be started through an HTTP call.
- The options across the top allow you to start the runbook now, schedule a future start time, or create a [webhook](../automation-webhooks.md) so that the runbook can be started through an HTTP call.
+ :::image type="content" source="../media/automation-tutorial-runbook-graphical/published-status.png" alt-text="Overview page and published status.":::
-4. Select **Start** and then **Yes** when prompted to start the runbook.
+1. From the **Job** page, verify that the **Status** field shows **Completed**.
-5. A Job pane is opened for the runbook job that has been created. Verify that the **Job status** field shows **Completed**.
+1. Select **Output** to see `Hello World` displayed.
-6. Click **Output** to open the Output page, where you can see `Hello World` displayed.
+1. Select **All Logs** to view the streams for the runbook job and select the only entry to open the **Job stream details** page. You should only see `Hello World`.
-7. Close the Output page.
+ **All Logs** can show other streams for a runbook job, such as Verbose and Error streams, if the runbook writes to them.
-8. Click **All Logs** to open the Streams pane for the runbook job. You should only see `Hello World` in the output stream.
+1. Close the **Job stream details** page and then the **Job** page to return to the **Runbook** Overview page.
- Note that the Streams pane can show other streams for a runbook job, such as Verbose and Error streams, if the runbook writes to them.
+1. Under **Resources**, select **Jobs** to view all jobs for the runbook. The Jobs page lists all the jobs created by your runbook. You should see only one job listed, since you have only run the job once.
-9. Close the Streams pane and the Job pane to return to the MyFirstRunbook-Graphical page.
+1. Select the job to open the same **Job** page that you viewed when you started the runbook.
-10. To view all the jobs for the runbook, select **Jobs** under **Resources**. The Jobs page lists all the jobs created by your runbook. You should see only one job listed, since you have only run the job once.
+1. Close the **Job** page, and then from the left menu, select **Overview**.
-11. Click the job name to open the same Job pane that you viewed when you started the runbook. Use this pane to view the details of any job created for the runbook.
-
-## Step 5 - Create variable assets
+## Create variable assets
You've tested and published your runbook, but so far it doesn't do anything useful to manage Azure resources. Before configuring the runbook to authenticate, you must create a variable to hold the subscription ID, set up an activity to authenticate, and then reference the variable. Including a reference to the subscription context allows you to easily work with multiple subscriptions.
-1. Copy your subscription ID from the **Subscriptions** option on the Navigation pane.
+1. From **Overview**, select the **Copy to clipboard** icon next to **Subscription ID**.
+
+1. Close the **Runbook** page to return to the **Automation Account** page.
+
+1. Under **Shared Resources**, select **Variables**.
-2. In the Automation Accounts page, select **Variables** under **Shared Resources**.
+1. Select **Add a variable** to open the **New Variable** page.
-3. Select **Add a variable**.
+1. On the **New Variable** page, set the following values:
-4. On the New variable page, make the following settings in the fields provided.
+ | Field| Value|
+ |||
+ |Value|Press <kbd>CTRL+V</kbd> to paste in your subscription ID.|
+ |Name |Enter `AzureSubscriptionId`.|
+ |Type|Keep the default value, **String**.|
+ |Encrypted|Keep the default value, **No**.|
- * **Name** -- enter `AzureSubscriptionId`.
- * **Value** -- enter your subscription ID.
- * **Type** -- keep string selected.
- * **Encryption** -- use the default value.
+1. Select **Create** to create the variable and return to the **Variables** page.
-5. Click **Create** to create the variable.
+1. Under **Process Automation**, select **Runbooks** and then your runbook, **MyFirstRunbook-Graphical**.
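If you prefer scripting over the portal, a rough sketch of creating the same variable with Azure PowerShell (the resource group and Automation account names are placeholders):

```powershell
# Creates the AzureSubscriptionId variable used later by the runbook.
New-AzAutomationVariable -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "AzureSubscriptionId" `
    -Value "<your-subscription-id>" `
    -Encrypted $false
```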
-## Step 6 - Add authentication
+## Add authentication
-Now that you have a variable to hold the subscription ID, you can configure the runbook to authenticate with the Run As credentials for your subscription. Do this by adding the Azure Run As connection as an asset. You also must add the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet and the [Set-AzContext](/powershell/module/az.accounts/Set-AzContext) cmdlet to the canvas.
+Now that you have a variable to hold the subscription ID, you can configure the runbook to authenticate with the Run As credentials for your subscription. Configure authentication by adding the Azure Run As connection as an asset. You also must add the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet and the [Set-AzContext](/powershell/module/az.accounts/Set-AzContext) cmdlet to the canvas.
->[!NOTE]
->For PowerShell runbooks, `Add-AzAccount` and `Add-AzureRMAccount` are aliases for `Connect-AzAccount`. Note that these aliases are not available for your graphical runbooks. A graphical runbook can only use `Connect-AzAccount`itself.
+> [!NOTE]
+> For PowerShell runbooks, `Add-AzAccount` and `Add-AzureRMAccount` are aliases for `Connect-AzAccount`. These aliases are not available for your graphical runbooks. A graphical runbook can only use `Connect-AzAccount`.
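As a rough sketch, the two activities you add below perform the equivalent of the following PowerShell. This is illustrative only, not part of the graphical runbook; the connection and variable names match the assets created earlier:

```powershell
# Retrieve the Run As connection asset and sign in with its service principal certificate.
$conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
Connect-AzAccount -ServicePrincipal `
    -Tenant $conn.TenantId `
    -ApplicationId $conn.ApplicationId `
    -CertificateThumbprint $conn.CertificateThumbprint

# Select the subscription stored in the AzureSubscriptionId variable asset.
Set-AzContext -Subscription (Get-AutomationVariable -Name 'AzureSubscriptionId')
```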
-1. Navigate to your runbook and select **Edit** on the MyFirstRunbook-Graphical page.
+1. From your **Runbook** page, select **Edit** to return to the graphical editor.
-2. You don't need the `Write Hello World to output` entry any more. Just click the ellipsis and select **Delete**.
+1. You don't need the `Write Hello World to output` activity anymore. Select the activity and an ellipsis will appear in the top-right corner of the activity. The ellipsis may be difficult to see. Select the ellipsis and then select **Delete**.
-3. In the Library control, expand **ASSETS**, then **Connections**. Add `AzureRunAsConnection` to the canvas by selecting **Add to canvas**.
+1. From **Library control**, navigate to **ASSETS** > **Connections** > **AzureRunAsConnection**. Select the ellipsis and then select **Add to canvas**.
-4. Rename `AzureRunAsConnection` to `Get Run As Connection`.
+1. From **Configuration control**, change the **Label** value from `Get-AutomationConnection` to `Get Run As Connection`.
-5. In the Library control, type `Connect-AzAccount` in the search field.
+1. From the **Library control** search field, enter `Connect-AzAccount`.
-6. Add `Connect-AzAccount` to the canvas.
+1. Add `Connect-AzAccount` to the canvas, and drag the activity below `Get Run As Connection`.
-7. Hover over `Get Run As Connection` until a circle appears on the bottom of the shape. Click the circle and drag the arrow to `Connect-AzAccount` to form a link. The runbook starts with `Get Run As Connection` and then runs `Connect-AzAccount`.
+1. Hover over `Get Run As Connection` until a circle appears on the bottom of the shape. Select and hold the circle and an arrow will appear. Drag the arrow to `Connect-AzAccount` to form a link. The runbook starts with `Get Run As Connection` and then runs `Connect-AzAccount`.
![Create link between activities](../media/automation-tutorial-runbook-graphical/runbook-link-auth-activities.png)
-8. On the canvas, select `Connect-AzAccount`. In the Configuration control pane, type **Login to Azure** in the **Label** field.
+1. From **Canvas**, select `Connect-AzAccount`.
-9. Click **Parameters**, and the Activity Parameter Configuration page appears.
+1. From **Configuration control**, change **Label** from `Connect-AzAccount` to `Login to Azure`.
-10. The `Connect-AzAccount` cmdlet has multiple parameter sets, and you need to select one before providing parameter values. Click **Parameter Set** and then select **ServicePrincipalCertificateWithSubscriptionId**.
+1. Select **Parameters** to open the **Activity Parameter Configuration** page.
-11. The parameters for this parameter set are displayed on the Activity Parameter Configuration page. Click **APPLICATIONID**.
+1. The `Connect-AzAccount` cmdlet has multiple parameter sets, and you need to select one before providing parameter values. Select **Parameter Set** and then select **ServicePrincipalCertificateWithSubscriptionId**. Be careful not to select **ServicePrincipalCertificateFileWithSubscriptionId**, as the names are similar.
+
+ The parameters for this parameter set are displayed on the **Activity Parameter Configuration** page.
![Add Azure account parameters](../media/automation-tutorial-runbook-graphical/Add-AzureRmAccount-params.png)
-12. On the Parameter Value page, make the following settings and then click **OK**.
+1. Select **CERTIFICATETHUMBPRINT** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **Activity output**.
+ 1. From **Select data**, select **Get Run As Connection**.
+ 1. In the **Field path** text box, enter `CertificateThumbprint`.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
- * **Data source** -- select **Activity output**.
- * Data source list -- select **Get Automation Connection**.
- * **Field path** -- type `ApplicationId`. You're specifying the name of the property for the field path because the activity outputs an object with multiple properties.
+1. Select **SERVICEPRINCIPAL** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **Constant value**.
+ 1. Select the option **True**.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
-13. Click **CERTIFICATETHUMBPRINT**, and on the Parameter Value page, make the following settings and then click **OK**.
+1. Select **TENANT** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **Activity output**.
+ 1. From **Select data**, select **Get Run As Connection**.
+ 1. In the **Field path** text box, enter `TenantId`.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
- * **Data source** -- select **Activity output**.
- * Data source list -- select **Get Automation Connection**.
- * **Field path** -- type `CertificateThumbprint`.
+1. Select **APPLICATIONID** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **Activity output**.
+ 1. From **Select data**, select **Get Run As Connection**.
+ 1. In the **Field path** text box, enter `ApplicationId`.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
-14. Click **SERVICEPRINCIPAL**, and on the Parameter Value page, select **ConstantValue** for the **Data source** field. Click the option **True**, and then click **OK**.
+1. Select **OK** to return to the graphical editor.
-15. Click **TENANTID**, and make the following settings on the Parameter Value page. When finished, click **OK** twice.
+1. From the **Library control** search field, enter `Set-AzContext`.
- * **Data source** -- select **Activity output**.
- * Data source list -- select **Get Automation Connection**.
- * **Field path** -- type `TenantId`.
+1. Add `Set-AzContext` to the canvas, and drag the activity below `Login to Azure`.
-16. In the Library control, type `Set-AzContext` in the search field.
+1. From **Configuration control**, change **Label** from `Set-AzContext` to `Specify Subscription Id`.
-17. Add `Set-AzContext` to the canvas.
+1. Select **Parameters** to open the **Activity Parameter Configuration** page.
-18. Select `Set-AzContext` on the canvas. In the Configuration control pane, enter `Specify Subscription Id` in the **Label** field.
+1. The `Set-AzContext` cmdlet has multiple parameter sets, and you need to select one before providing parameter values. Select **Parameter Set** and then select **Subscription**. The parameters for this parameter set are displayed on the **Activity Parameter Configuration** page.
-19. Click **Parameters** and the Activity Parameter Configuration page appears.
+1. Select **SUBSCRIPTION** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **Variable asset**.
+ 1. From the list of variables, select **AzureSubscriptionId**.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
-20. The `Set-AzContext` cmdlet has multiple parameter sets, and you need to select one before providing parameter values. Click **Parameter Set** and then select **SubscriptionId**.
+1. Select **OK** to return to the graphical editor.
-21. The parameters for this parameter set are displayed on the Activity Parameter Configuration page. Click **SubscriptionID**.
+1. Form a link between `Login to Azure` and `Specify Subscription Id`. Your runbook should look like the following at this point.
-22. On the Parameter Value page, select **Variable Asset** for the **Data source** field and select **AzureSubscriptionId** from the source list. When finished, click **OK** twice.
+ :::image type="content" source="../media/automation-tutorial-runbook-graphical/runbook-auth-config.png" alt-text="Screenshot of the runbook after dragging the arrow to 'Specify Subscription ID'.":::
-23. Hover over `Login to Azure` until a circle appears on the bottom of the shape. Click the circle and drag the arrow to `Specify Subscription Id`. Your runbook should look like the following at this point.
+## Add activity to start a virtual machine
- :::image type="content" source="../media/automation-tutorial-runbook-graphical/runbook-auth-config.png" alt-text="Screenshot of the runbook after dragging the arrow to 'Specify Subscription ID'.":::
+Now you must add a `Start-AzVM` activity to start a virtual machine. You can pick any VM in your Azure subscription, and for now you're hard-coding its name into the [Start-AzVM](/powershell/module/az.compute/start-azvm) cmdlet.
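The activity runs roughly the equivalent of this cmdlet call (the resource group and VM names are placeholders):

```powershell
# Start the target virtual machine.
Start-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyVM"
```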
-## Step 7 - Add activity to start a virtual machine
+1. From the **Library control** search field, enter `Start-AzVM`.
-Now you must add a `Start-AzVM` activity to start a virtual machine. You can pick any VM in your Azure subscription, and for now you are hardcoding its name into the [Start-AzVM](/powershell/module/az.compute/start-azvm) cmdlet.
+1. Add `Start-AzVM` to the canvas, and drag the activity below `Specify Subscription Id`.
-1. In the Library control, type `Start-Az` in the search field.
+1. From **Configuration control**, select **Parameters** to open the **Activity Parameter Configuration** page.
-2. Add `Start-AzVM` to the canvas and then click and drag it underneath `Specify Subscription Id`.
+1. Select **Parameter Set** and then select **ResourceGroupNameParameterSetName**. The parameters for this parameter set are displayed on the **Activity Parameter Configuration** page. The parameters **RESOURCEGROUPNAME** and **NAME** have exclamation marks next to them to indicate that they're required parameters. Both fields expect string values.
-3. Hover over `Specify Subscription Id` until a circle appears on the bottom of the shape. Click the circle and drag the arrow to `Start-AzVM`.
+1. Select **RESOURCEGROUPNAME** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **PowerShell expression**.
+ 1. In the **Expression** text box, enter the name of your resource group in double quotes.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
-4. Select `Start-AzVM`. Click **Parameters** and then **Parameter Set** to view the sets for the activity.
+1. Select **NAME** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **PowerShell expression**.
+ 1. In the **Expression** text box, enter the name of your virtual machine in double quotes.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
-5. Select **ResourceGroupNameParameterSetName** for the parameter set. The fields **ResourceGroupName** and **Name** have exclamation marks next to them to indicate that they are required parameters. Note that both fields expect string values.
+1. Select **OK** to return to the graphical editor.
-6. Select **Name**. Choose **PowerShell expression** for the **Data source** field. For the VM that you use to start this runbook, type in the machine name surrounded with double quotes. Click **OK**.
+1. Form a link between `Specify Subscription Id` and `Start-AzVM`. Your runbook should look like the following at this point.
-7. Select **ResourceGroupName**. Use the value **PowerShell expression** for the **Data source** field, and type in the name of the resource group surrounded with double quotes. Click **OK**.
+ ![Runbook Start-AzVM output](../media/automation-tutorial-runbook-graphical/runbook-startvm.png)
-8. Click **Test pane** so that you can test the runbook.
+1. Select **Test pane** so that you can test the runbook.
-9. Click **Start** to begin the test. Once it completes, make sure that the VM has started. Your runbook should look like the following at this point.
+1. Select **Start** to begin the test.
- ![Runbook Start-AzVM output](../media/automation-tutorial-runbook-graphical/runbook-startvm.png)
+1. Once it completes, make sure that the VM has started. Then stop the VM for later steps.
+
+1. Return to the graphical editor for your runbook.
-## Step 8 - Add additional input parameters
+## Add input parameters
Your runbook currently starts the VM in the resource group that you specified for the `Start-AzVM` cmdlet. The runbook will be more useful if you specify both name and resource group when the runbook is started. Let's add input parameters to the runbook to provide that functionality.
-1. Open the graphical editor by clicking **Edit** on the MyFirstRunbook-Graphical page.
+1. From the graphical editor top menu bar, select **Input and output**.
-2. Select **Input and output** and then **Add input** to open the Runbook Input Parameter pane.
+1. Select **Add input** to open the **Runbook Input Parameter** page.
-3. Make the following settings in the provided fields and then click **OK**.
- * **Name** -- specify `VMName`.
- * **Type** -- keep the string setting.
- * **Mandatory** -- change the value to **Yes**.
+1. On the **Runbook Input Parameter** page, set the following values:
-4. Create a second mandatory input parameter called `ResourceGroupName` and then click **OK** to close the Input and Output pane.
+ | Field| Value|
+ |||
+ |Name| Enter `VMName`.|
+ |Type|Keep the default value, **String**.|
+ |Mandatory|Change the value to **Yes**.|
+
+1. Select **OK** to return to the **Input and Output** page.
+
+1. Select **Add input** to re-open the **Runbook Input Parameter** page.
+
+1. On the **Runbook Input Parameter** page, set the following values:
+
+ | Field| Value|
+ |||
+ |Name| Enter `ResourceGroupName`.|
+ |Type|Keep the default value, **String**.|
+ |Mandatory|Change the value to **Yes**.|
+
+1. Select **OK** to return to the **Input and Output** page. The page may look similar to the following:
![Runbook Input Parameters](../media/automation-tutorial-runbook-graphical/start-azurermvm-params-outputs.png)
-5. Select the `Start-AzVM` activity and then click **Parameters**.
+1. Select **OK** to return to the graphical editor.
+
+1. The new inputs may not be immediately available. Select **Save**, close the graphical editor, and then re-open the graphical editor. The new inputs should now be available.
-6. Change the **Data source** field for **Name** to **Runbook input**. Then select **VMName**.
+1. Select the `Start-AzVM` activity and then select **Parameters** to open the **Activity Parameter Configuration** page.
-7. Change the **Data source** field for **ResourceGroupName** to **Runbook input** and then select **ResourceGroupName**.
+1. For the previously configured parameter, **RESOURCEGROUPNAME**, change the **Data source** to **Runbook input**, and then select **ResourceGroupName**. Select **OK**.
+
+1. For the previously configured parameter, **NAME**, change the **Data source** to **Runbook input**, and then select **VMName**. Select **OK**. The page may look similar to the following:
![Start-AzVM Parameters](../media/automation-tutorial-runbook-graphical/start-azurermvm-params-runbookinput.png)
-8. Save the runbook and open the Test pane. You can now provide values for the two input variables that you use in the test.
+1. Select **OK** to return to the graphical editor.
+
+1. Select **Save** and then **Test pane**. Observe that you can now provide values for the two input variables you created.
-9. Close the Test pane.
+1. Close the **Test** page to return to the graphical editor.
-10. Click **Publish** to publish the new version of the runbook.
+1. Select **Publish** and then **Yes** when you're prompted to publish the new version of the runbook. You're returned to the **Runbook** Overview page.
-11. Stop the VM that you started previously.
+1. Select **Start** to open the **Start Runbook** page.
-12. Click **Start** to start the runbook. Type in the values for `VMName` and `ResourceGroupName` for the VM that you're going to start.
+1. Enter appropriate values for the parameters `VMNAME` and `RESOURCEGROUPNAME`. Then select **OK**. The **Job** page then opens.
-13. When the runbook completes, ensure that the VM has been started.
+1. Monitor the job. After the **Status** shows **Completed**, verify that the VM started. Then stop the VM for later steps.
-## Step 9 - Create a conditional link
+1. Return to the graphical editor for your runbook.
-You can now modify the runbook so that it only attempts to start the VM if it is not already started. Do this by adding a [Get-AzVM](/powershell/module/Az.Compute/Get-AzVM) cmdlet that retrieves the instance-level status of the VM. Then you can add a PowerShell Workflow code module called `Get Status` with a snippet of PowerShell code to determine if the VM state is running or stopped. A conditional link from the `Get Status` module only runs `Start-AzVM` if the current running state is stopped. At the end of this procedure, your runbook uses the `Write-Output` cmdlet to output a message to inform you if the VM was successfully started.
+## Create a conditional link
-1. Open **MyFirstRunbook-Graphical** in the graphical editor.
+You can now modify the runbook so that it only attempts to start the VM if it's not already started. Do this by adding a [Get-AzVM](/powershell/module/Az.Compute/Get-AzVM) cmdlet that retrieves the instance-level status of the VM. Then you can add a PowerShell Workflow code module called `Get Status` with a snippet of PowerShell code to determine if the VM state is running or stopped. A conditional link from the `Get Status` module only runs `Start-AzVM` if the current running state is stopped. At the end of this procedure, your runbook uses the `Write-Output` cmdlet to output a message to inform you if the VM was successfully started.
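As a rough sketch (the resource group and VM names are placeholders), the logic you build graphically in this section corresponds to the following PowerShell:

```powershell
# Get the instance-level view of the VM, including its power state.
$vm = Get-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyVM" -Status
$powerState = ($vm.Statuses | Where-Object { $_.Code -like 'PowerState/*' }).Code

# Only start the VM if it is currently deallocated (stopped).
if ($powerState -eq 'PowerState/deallocated') {
    Start-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyVM"
    Write-Output "The VM was stopped and has been started."
}
else {
    Write-Output "The VM is already running."
}
```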
-2. Remove the link between `Specify Subscription Id` and `Start-AzVM` by clicking on it and then pressing **Delete**.
+1. From the graphical editor, right-click the link between `Specify Subscription Id` and `Start-AzVM` and select **Delete**.
-3. In the Library control, type `Get-Az` in the search field.
+1. From the **Library control** search field, enter `Get-AzVM`.
-4. Add `Get-AzVM` to the canvas.
+1. Add `Get-AzVM` to the canvas, and drag the activity below `Specify Subscription Id`.
-5. Select `Get-AzVM` and then click **Parameter Set** to view the sets for the cmdlet.
+1. From **Configuration control**, select **Parameters** to open the **Activity Parameter Configuration** page.
-6. Select the **GetVirtualMachineInResourceGroupNameParamSet** parameter set. The **ResourceGroupName** and **Name** fields have exclamation marks next to them, indicating that they specify required parameters. Note that both fields expect string values.
+ Select **Parameter Set** and then select **GetVirtualMachineInResourceGroupParamSet**. The parameters for this parameter set are displayed on the **Activity Parameter Configuration** page. The parameters **RESOURCEGROUPNAME** and **NAME** have exclamation marks next to them to indicate that they're required parameters. Both fields expect string values.
-7. Under **Data source** for **Name**, select **Runbook input**, then **VMName**. Click **OK**.
+1. Select **RESOURCEGROUPNAME** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **Runbook input**.
+ 1. Select the parameter **ResourceGroupName**.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
-8. Under **Data source** for **ResourceGroupName**, select **Runbook input**, then **ResourceGroupName**. Click **OK**.
+1. Select **NAME** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **Runbook input**.
+ 1. Select the parameter **VMName**.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
-9. Under **Data source** for **Status**, select **Constant value**, then **True**. Click **OK**.
+1. Select **STATUS** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **Constant value**.
+ 1. Select the option **True**.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
-10. Create a link from `Specify Subscription Id` to `Get-AzVM`.
+1. Select **OK** to return to the graphical editor.
-11. In the Library control, expand **Runbook Control** and add **Code** to the canvas.
+1. Form a link between `Specify Subscription Id` and `Get-AzVM`.
-12. Create a link from `Get-AzVM` to `Code`.
+1. Clear the **Library control** search field, and then navigate to **RUNBOOK CONTROL** > **Code**. Select the ellipsis and then **Add to canvas**. Drag the activity below `Get-AzVM`.
-13. Click `Code` and, in the Configuration pane, change the label to **Get Status**.
+1. From **Configuration control**, change **Label** from `Code` to `Get Status`.
-14. Select `Code` and the Code Editor page appears.
+1. From **Configuration control**, select **Code** to open the **Code Editor** page.
-15. Paste the following code snippet into the editor page.
+1. Paste the following code snippet into the **PowerShell code** text box.
```powershell $Statuses = $ActivityOutput['Get-AzVM'].Statuses
You can now modify the runbook so that it only attempts to start the VM if it is
$StatusOut ```
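The snippet is truncated in this change listing. As a rough sketch only (not the article's exact code), logic of this kind maps the instance-view power state returned by `Get-AzVM` to the `Stopped`/`Running` strings that the conditional links test:

```powershell
# Sketch only: derive a simple power-state string from the Get-AzVM activity output.
$Statuses = $ActivityOutput['Get-AzVM'].Statuses
$StatusOut = ""
foreach ($Status in $Statuses) {
    if ($Status.Code -eq 'PowerState/running') {
        $StatusOut = 'Running'
    }
    elseif ($Status.Code -in @('PowerState/stopped', 'PowerState/deallocated')) {
        $StatusOut = 'Stopped'
    }
}
$StatusOut
```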
-16. Create a link from `Get Status` to `Start-AzVM`.
+1. Select **OK** to return to the graphical editor.
+
+1. Form a link between `Get-AzVM` and `Get Status`.
+
+1. Form a link between `Get Status` and `Start-AzVM`. Your runbook should look like the following at this point.
![Runbook with Code Module](../media/automation-tutorial-runbook-graphical/runbook-startvm-get-status.png)
-17. Select the link and, in the Configuration pane, change **Apply condition** to **Yes**. Note that the link becomes a dashed line, indicating that the target activity only runs if the condition resolves to true.
+1. Select the link between `Get Status` and `Start-AzVM`.
+
+1. From **Configuration control**, change **Apply condition** to **Yes**. The link becomes a dashed line, indicating that the target activity only runs if the condition resolves to true.
+
+1. For **Condition expression**, enter `$ActivityOutput['Get Status'] -eq "Stopped"`. `Start-AzVM` now only runs if the VM is stopped.
-18. For **Condition expression**, type `$ActivityOutput['Get Status'] -eq "Stopped"`. `Start-AzVM` now only runs if the VM is stopped.
+1. From the **Library control** search field, enter `Write-Output`.
-19. In the Library control, expand **Cmdlets** and then **Microsoft.PowerShell.Utility**.
+1. Add `Write-Output` to the canvas, and drag the activity below `Start-AzVM`.
-20. Add `Write-Output` to the canvas twice.
+1. Select the activity ellipsis and select **Duplicate**. Drag the duplicate activity to the right of the first activity.
-21. For the first `Write-Output` control, click **Parameters** and change the **Label** value to **Notify VM Started**.
+1. Select the first `Write-Output` activity.
+ 1. From **Configuration control**, change **Label** from `Write-Output` to `Notify VM Started`.
+ 1. Select **Parameters** to open the **Activity Parameter Configuration** page.
+ 1. Select **INPUTOBJECT** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **PowerShell expression**.
+ 1. In the **Expression** text box, enter `"$VMName successfully started."`.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
+ 1. Select **OK** to return to the graphical editor.
-22. For **InputObject**, change **Data source** to **PowerShell expression**, and type in the expression `$VMName successfully started.`.
+1. Select the duplicate `Write-Output1` activity.
+ 1. From **Configuration control**, change **Label** from `Write-Output1` to `Notify VM Start Failed`.
+ 1. Select **Parameters** to open the **Activity Parameter Configuration** page.
+ 1. Select **INPUTOBJECT** to open the **Parameter Value** page.
+ 1. From the **Data source** drop-down menu, select **PowerShell expression**.
+ 1. In the **Expression** text box, enter `"$VMName could not start."`.
+ 1. Select **OK** to return to the **Activity Parameter Configuration** page.
+ 1. Select **OK** to return to the graphical editor.
-23. On the second `Write-Output` control, click **Parameters** and change the **Label** value to **Notify VM Start Failed**.
+1. Form a link between `Start-AzVM` and `Notify VM Started`.
-24. For **InputObject**, change **Data source** to **PowerShell expression**, and type in the expression `$VMName could not start`.
+1. Form a link between `Start-AzVM` and `Notify VM Start Failed`.
-25. Create links from `Start-AzVM` to `Notify VM Started` and `Notify VM Start Failed`.
+1. Select the link to `Notify VM Started` and change **Apply condition** to **Yes**.
-26. Select the link to `Notify VM Started` and change **Apply condition** to true.
+1. For the **Condition expression**, type `$ActivityOutput['Start-AzVM'].IsSuccessStatusCode -eq $true`. This `Write-Output` control now only runs if the VM starts successfully.
-27. For the **Condition expression**, type `$ActivityOutput['Start-AzVM'].IsSuccessStatusCode -eq $true`. This `Write-Output` control now only runs if the VM starts successfully.
+1. Select the link to `Notify VM Start Failed`.
-28. Select the link to `Notify VM Start Failed` and change **Apply condition** to true.
+1. From **Configuration control**, change **Apply condition** to **Yes**.
-29. For the **Condition expression** field, type `$ActivityOutput['Start-AzVM'].IsSuccessStatusCode -ne $true`. This `Write-Output` control now only runs if the VM is not successfully started. Your runbook should look like the following image.
+1. In the **Condition expression** text box, enter `$ActivityOutput['Start-AzVM'].IsSuccessStatusCode -ne $true`. This `Write-Output` control now only runs if the VM isn't successfully started. Your runbook should look like the following image.
![Runbook with Write-Output](../media/automation-tutorial-runbook-graphical/runbook-startazurermvm-complete.png)
-30. Save the runbook and open the Test pane.
+1. Save the runbook and open the Test pane.
-31. Start the runbook with the VM stopped, and the machine should start.
+1. Start the runbook with the VM stopped, and the machine should start.
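After the runbook is published, you can also start it outside the Test pane. The following is a minimal sketch, assuming the Az.Automation module and placeholder account, resource group, and VM names:

```powershell
# Sketch only: start the published runbook with its two input parameters.
# Account, resource group, and VM names are placeholders.
$params = @{
    VMName            = 'MyVM'
    ResourceGroupName = 'MyVmResourceGroup'
}
Start-AzAutomationRunbook `
    -AutomationAccountName 'MyAutomationAccount' `
    -ResourceGroupName 'MyAutomationResourceGroup' `
    -Name 'MyFirstRunbook-Graphical' `
    -Parameters $params
```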
## Next steps
automation Pre Post Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/pre-post-scripts.md
Title: Manage pre-scripts and post-scripts in your Update Management deployment
description: This article tells how to configure and manage pre-scripts and post-scripts for update deployments. Previously updated : 03/08/2021 Last updated : 07/20/2021
In addition to your standard runbook parameters, the `SoftwareUpdateConfiguratio
### SoftwareUpdateConfigurationRunContext properties
-|Property |Description |
-|||
-|SoftwareUpdateConfigurationName | The name of the software update configuration. |
-|SoftwareUpdateConfigurationRunId | The unique ID for the run. |
-|SoftwareUpdateConfigurationSettings | A collection of properties related to the software update configuration. |
-|SoftwareUpdateConfigurationSettings.operatingSystem | The operating systems targeted for the update deployment. |
-|SoftwareUpdateConfigurationSettings.duration | The maximum duration of the update deployment run as `PT[n]H[n]M[n]S` as per ISO8601; also called the maintenance window. |
-|SoftwareUpdateConfigurationSettings.Windows | A collection of properties related to Windows computers. |
-|SoftwareUpdateConfigurationSettings.Windows.excludedKbNumbers | A list of KBs that are excluded from the update deployment. |
-|SoftwareUpdateConfigurationSettings.Windows.includedUpdateClassifications | Update classifications selected for the update deployment. |
-|SoftwareUpdateConfigurationSettings.Windows.rebootSetting | Reboot settings for the update deployment. |
-|azureVirtualMachines | A list of resourceIds for the Azure VMs in the update deployment. |
-|nonAzureComputerNames|A list of the non-Azure computers FQDNs in the update deployment.|
-
-The following example is a JSON string passed in to the **SoftwareUpdateConfigurationRunContext** parameter:
+|Property |Type |Description |
+||||
+|SoftwareUpdateConfigurationName |String | The name of the software update configuration. |
+|SoftwareUpdateConfigurationRunId |GUID | The unique ID for the run. |
+|SoftwareUpdateConfigurationSettings || A collection of properties related to the software update configuration. |
+|SoftwareUpdateConfigurationSettings.OperatingSystem |Int | The operating systems targeted for the update deployment. `1` = Windows and `2` = Linux |
+|SoftwareUpdateConfigurationSettings.Duration |Timespan (HH:MM:SS) | The maximum duration of the update deployment run as `PT[n]H[n]M[n]S` as per ISO8601; also called the maintenance window.<br> Example: 02:00:00 |
+|SoftwareUpdateConfigurationSettings.WindowsConfiguration || A collection of properties related to Windows computers. |
+|SoftwareUpdateConfigurationSettings.WindowsConfiguration.excludedKbNumbers |String | A space-separated list of KBs that are excluded from the update deployment. |
+|SoftwareUpdateConfigurationSettings.WindowsConfiguration.includedKbNumbers |String | A space-separated list of KBs that are included with the update deployment. |
+|SoftwareUpdateConfigurationSettings.WindowsConfiguration.UpdateCategories |Integer | 1 = "Critical"<br> 2 = "Security"<br> 4 = "UpdateRollUp"<br> 8 = "FeaturePack"<br> 16 = "ServicePack"<br> 32 = "Definition"<br> 64 = "Tools"<br> 128 = "Updates" |
+|SoftwareUpdateConfigurationSettings.WindowsConfiguration.rebootSetting |String | Reboot settings for the update deployment. Values are `IfRequired`, `Never`, `Always` |
+|SoftwareUpdateConfigurationSettings.LinuxConfiguration || A collection of properties related to Linux computers. |
+|SoftwareUpdateConfigurationSettings.LinuxConfiguration.IncludedPackageClassifications |Integer |0 = "Unclassified"<br> 1 = "Critical"<br> 2 = "Security"<br> 4 = "Other"|
+|SoftwareUpdateConfigurationSettings.LinuxConfiguration.IncludedPackageNameMasks |String | A space-separated list of package names that are included with the update deployment. |
+|SoftwareUpdateConfigurationSettings.LinuxConfiguration.ExcludedPackageNameMasks |String | A space-separated list of package names that are excluded from the update deployment. |
+|SoftwareUpdateConfigurationSettings.LinuxConfiguration.RebootSetting |String |Reboot settings for the update deployment. Values are `IfRequired`, `Never`, `Always` |
+|SoftwareUpdateConfigurationSettings.AzureVirtualMachines |String array | A list of resourceIds for the Azure VMs in the update deployment. |
+|SoftwareUpdateConfigurationSettings.NonAzureComputerNames|String array |A list of the non-Azure computers FQDNs in the update deployment.|
+
+The following example is a JSON string passed to the **SoftwareUpdateConfigurationSettings** properties for a Linux computer:
+
+```json
+"SoftwareUpdateConfigurationSettings": {
+ "OperatingSystem": 2,
+ "WindowsConfiguration": null,
+ "LinuxConfiguration": {
+ "IncludedPackageClassifications": 7,
+ "ExcludedPackageNameMasks": "fgh xyz",
+ "IncludedPackageNameMasks": "abc bin*",
+ "RebootSetting": "IfRequired"
+ },
+ "Targets": {
+ "azureQueries": null,
+ "nonAzureQueries": ""
+ },
+ "NonAzureComputerNames": [
+ "box1.contoso.com",
+ "box2.contoso.com"
+ ],
+ "AzureVirtualMachines": [
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resourceGroupName/providers/Microsoft.Compute/virtualMachines/vm-01"
+ ],
+ "Duration": "02:00:00",
+ "PSComputerName": "localhost",
+ "PSShowComputerName": true,
+ "PSSourceJobInstanceId": "2477a37b-5262-4f4f-b636-3a70152901e9"
+ }
+```
+
+The following example is a JSON string passed to the **SoftwareUpdateConfigurationSettings** properties for a Windows computer:
```json "SoftwareUpdateConfigurationRunContext": {
The following example is a JSON string passed in to the **SoftwareUpdateConfigur
"SoftwareUpdateConfigurationRunId": "00000000-0000-0000-0000-000000000000", "SoftwareUpdateConfigurationSettings": { "operatingSystem": "Windows",
- "duration": "PT2H0M",
+ "duration": "02:00:00",
"windows": { "excludedKbNumbers": [ "168934",
The following example is a JSON string passed in to the **SoftwareUpdateConfigur
"rebootSetting": "IfRequired" }, "azureVirtualMachines": [
- "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresources/providers/Microsoft.Compute/virtualMachines/vm-01",
- "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresources/providers/Microsoft.Compute/virtualMachines/vm-02",
- "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresources/providers/Microsoft.Compute/virtualMachines/vm-03"
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/vm-01",
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/vm-02",
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/vm-03"
], "nonAzureComputerNames": [ "box1.contoso.com",
azure-app-configuration Cli Work With Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/scripts/cli-work-with-keys.md
az appconfig kv set --name $appConfigName --key $newKey --value "Value 2"
az appconfig kv list --name $appConfigName # Create a new key-value referencing a value stored in Azure Key Vault
-az appconfig kv set --name $appConfigName --key $refKey --content-type "application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8" --value "{\"uri\":\"$uri\"}"
+az appconfig kv set-keyvault --name $appConfigName --key $refKey --secret-identifier $uri
# List current key-values az appconfig kv list --name $appConfigName # Update Key Vault reference
-az appconfig kv set --name $appConfigName --key $refKey --value "{\"uri\":\"$uri2\"}"
+az appconfig kv set-keyvault --name $appConfigName --key $refKey --secret-identifier $uri2
# List current key-values az appconfig kv list --name $appConfigName
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
The RDB persistence backup frequency interval doesn't start until the previous b
### What happens to the old RDB backups when a new backup is made?
-All RDB persistence backups, except for the most recent one, are automatically deleted. This deletion might not happen immediately, but older backups aren't persisted indefinitely.
+All RDB persistence backups, except for the most recent one, are automatically deleted. This deletion might not happen immediately, but older backups aren't persisted indefinitely. If soft delete is turned on for your storage account, the soft delete setting applies, and removed backups remain in the soft-deleted state.
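To check whether blob soft delete is enabled on the storage account that holds your persistence backups, a quick check with the Az.Storage module might look like the following sketch (resource names are placeholders):

```powershell
# Sketch only: check whether blob soft delete is enabled on the persistence storage account.
# Resource names are placeholders.
Get-AzStorageBlobServiceProperty `
    -ResourceGroupName 'MyResourceGroup' `
    -StorageAccountName 'mycachestorage' |
    Select-Object -ExpandProperty DeleteRetentionPolicy
```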
### When should I use a second storage account?
Data stored in AOF files is divided into multiple page blobs per node to increas
When clustering is enabled, each shard in the cache has its own set of page blobs, as indicated in the previous table. For example, a P2 cache with three shards distributes its AOF file across 24 page blobs (eight blobs per shard, with three shards).
-After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background and append to the first set of files. Set operations, sent to the cache during the rewrite, append to the second set. A backup is temporarily stored during rewrites if there's a failure. The backup is promptly deleted after a rewrite finishes.
+After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background and append to the first set of files. Set operations, sent to the cache during the rewrite, append to the second set. A backup is temporarily stored during rewrites in case there's a failure. The backup is promptly deleted after a rewrite finishes. If soft delete is turned on for your storage account, the soft delete setting applies, and removed backups remain in the soft-deleted state.
### Will I be charged for the storage being used in Data Persistence?
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
Title: Alert schema definitions in Azure Monitor
description: Understanding the common alert schema definitions for Azure Monitor Previously updated : 04/12/2021 Last updated : 07/20/2021 # Common alert schema definitions
Any alert instance describes the resource that was affected and the cause of the
"alertTargetIDs": [ "/subscriptions/<subscription ID>/resourcegroups/pipelinealertrg/providers/microsoft.compute/virtualmachines/wcus-r2-gen2" ],
+ "configurationItems": [
+ "wcus-r2-gen2"
+ ],
"originAlertId": "3f2d4487-b0fc-4125-8bd5-7ad17384221e_PipeLineAlertRG_microsoft.insights_metricAlerts_WCUS-R2-Gen2_-117781227", "firedDateTime": "2019-03-22T13:58:24.3713213Z", "resolvedDateTime": "2019-03-22T14:03:16.2246313Z",
Any alert instance describes the resource that was affected and the cause of the
| monitorCondition | When an alert fires, the alert's monitor condition is set to **Fired**. When the underlying condition that caused the alert to fire clears, the monitor condition is set to **Resolved**. | | monitoringService | The monitoring service or solution that generated the alert. The fields for the alert context are dictated by the monitoring service. | | alertTargetIds | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. |
+| configurationItems | The list of affected resources of an alert. The configuration items can be different from the alert targets in some cases. For example, in metric-for-log or log alerts defined on a Log Analytics workspace, the configuration items are the actual resources sending the telemetry, not the workspace itself. This field is used by ITSM systems to correlate alerts to resources in a CMDB. |
| originAlertId | The ID of the alert instance, as generated by the monitoring service generating it. | | firedDateTime | The date and time when the alert instance was fired in Coordinated Universal Time (UTC). | | resolvedDateTime | The date and time when the monitor condition for the alert instance is set to **Resolved** in UTC. Currently only applicable for metric alerts.|
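As a sketch of how an automated consumer, such as a runbook behind an action group webhook, might read the new `configurationItems` field, assuming `$requestBody` holds the raw common-schema JSON delivered to the webhook:

```powershell
# Sketch only: read the configuration items from a common alert schema payload.
# $requestBody is assumed to hold the raw JSON delivered by the action group webhook.
$alert      = $requestBody | ConvertFrom-Json
$essentials = $alert.data.essentials

Write-Output "Alert rule '$($essentials.alertRule)' (severity $($essentials.severity)) affects:"
$essentials.configurationItems | ForEach-Object { Write-Output " - $_" }
```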
Any alert instance describes the resource that was affected and the cause of the
## Alert context
-### Metric alerts (excluding availability tests)
+### Metric alerts - Static threshold
#### `monitoringService` = `Platform`
Any alert instance describes the resource that was affected and the cause of the
} ```
-### Metric alerts (availability tests)
+### Metric alerts - Dynamic threshold
+
+#### `monitoringService` = `Platform`
+
+**Sample values**
+```json
+{
+ "alertContext": {
+ "properties": null,
+ "conditionType": "DynamicThresholdCriteria",
+ "condition": {
+ "windowSize": "PT5M",
+ "allOf": [
+ {
+ "alertSensitivity": "High",
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": 1,
+ "minFailingPeriodsToAlert": 1
+ },
+ "ignoreDataBefore": null,
+ "metricName": "Egress",
+ "metricNamespace": "microsoft.storage/storageaccounts",
+ "operator": "GreaterThan",
+ "threshold": "47658",
+ "timeAggregation": "Total",
+ "dimensions": [],
+ "metricValue": 50101
+ }
+ ],
+ "windowStartTime": "2021-07-20T05:07:26.363Z",
+ "windowEndTime": "2021-07-20T05:12:26.363Z"
+ }
+ }
+}
+```
+
+### Metric alerts - Availability tests
#### `monitoringService` = `Platform`
azure-monitor Itsmc Secure Webhook Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-servicenow.md
Ensure that you've met the following prerequisites:
1. Use the link https://(instance name).service-now.com/api/sn_em_connector/em/inbound_event?source=azuremonitor the URI for the secure export definition. 2. Follow the instructions according to the version:
+ * [Quebec](https://docs.servicenow.com/bundle/quebec-it-operations-management/page/product/event-management/concept/azure-integration.html)
* [Paris](https://docs.servicenow.com/bundle/paris-it-operations-management/page/product/event-management/concept/azure-integration.html) * [Orlando](https://docs.servicenow.com/bundle/orlando-it-operations-management/page/product/event-management/concept/azure-integration.html) * [New York](https://docs.servicenow.com/bundle/newyork-it-operations-management/page/product/event-management/concept/azure-integration.html)
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-enable-overview.md
VM insights supports the following machines:
- Azure virtual machine scale set - Hybrid virtual machine connected with Azure Arc
+> [!IMPORTANT]
+> If the name of the ethernet device for your virtual machine has more than nine characters, it won't be recognized by VM insights and data won't be sent to the InsightsMetrics table. The agent will collect data from [other sources](../agents/agent-data-sources.md).
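As an illustrative check on a Windows guest (on Linux, inspect the interface names reported by the OS instead), the following sketch lists adapter names that exceed the nine-character limit:

```powershell
# Sketch only: list network adapter names longer than nine characters on a Windows VM.
Get-NetAdapter |
    Where-Object { $_.Name.Length -gt 9 } |
    Select-Object Name, @{ Name = 'NameLength'; Expression = { $_.Name.Length } }
```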
## Supported Azure Arc machines VM insights is available for Azure Arc enabled servers in regions where the Arc extension service is available. You must be running version 0.9 or above of the Arc Agent.
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-resource.md
param builtInRoleType string {
'description': 'Built-in role to assign' } }
-param roleNameGuid string {
- default: newGuid()
- metadata: {
- 'description': 'A new GUID used to identify the role assignment'
- }
-}
var roleDefinitionId = { Owner: {
var roleDefinitionId = {
} resource myRoleAssignment 'Microsoft.Authorization/roleAssignments@2018-09-01-preview' = {
- name: roleNameGuid
+ name: guid(resourceGroup().id, principalId, roleDefinitionId[builtInRoleType].id)
properties: { roleDefinitionId: roleDefinitionId[builtInRoleType].id principalId: principalId
azure-resource-manager Loop Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/loop-resources.md
description: Use loops and arrays in a Bicep file to deploy multiple instances o
Previously updated : 06/01/2021 Last updated : 07/19/2021 # Resource iteration in Bicep
resource storageAcct 'Microsoft.Storage/storageAccounts@2021-02-01' = [for name
If you want to return values from the deployed resources, you can use a loop in the [output section](loop-outputs.md).
+## Resource iteration with condition
+
+The following example shows a nested loop combined with a filtered resource loop. Filters must be expressions that evaluate to a boolean value.
+
+```bicep
+resource parentResources 'Microsoft.Example/examples@2020-06-06' = [for parent in parents: if(parent.enabled) {
+ name: parent.name
+ properties: {
+ children: [for child in parent.children: {
+ name: child.name
+ setting: child.settingValue
+ }]
+ }
+}]
+```
+
+Filters are also supported with module loops.
+ ## Deploy in batches By default, Resource Manager creates resources in parallel. When you use a loop to create multiple instances of a resource type, those instances are all deployed at the same time. The order in which they're created isn't guaranteed. There's no limit to the number of resources deployed in parallel, other than the total limit of 800 resources in the Bicep file.
azure-resource-manager Tutorial Custom Providers Function Authoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring.md
You also need to create a new class to model your custom resource. In this tutor
```csharp // Custom Resource Table Entity
-public class CustomResource : TableEntity
+public class CustomResource : ITableEntity
{ public string Data { get; set; }+
+ public string PartitionKey { get; set; }
+
+ public string RowKey { get; set; }
+
+ public DateTimeOffset? Timestamp { get; set; }
+
+ public ETag ETag { get; set; }
} ```
-**CustomResource** is a simple, generic class that accepts any input data. It's based on **TableEntity**, which is used to store data. The **CustomResource** class inherits two properties from **TableEntity**: **partitionKey** and **rowKey**.
+**CustomResource** is a simple, generic class that accepts any input data. It's based on **ITableEntity**, which is used to store data. The **CustomResource** class implements all of the properties defined by the **ITableEntity** interface: **Timestamp**, **ETag**, **PartitionKey**, and **RowKey**.
## Support custom provider RESTful methods
Add the following **CreateCustomResource** method to create new resources:
/// Creates a custom resource and saves it to table storage. /// </summary> /// <param name="requestMessage">The HTTP request message.</param>
-/// <param name="tableStorage">The Azure Table storage account.</param>
+/// <param name="tableClient">The client that allows you to interact with Azure Tables hosted in either Azure storage accounts or Azure Cosmos DB table API.</param>
/// <param name="azureResourceId">The parsed Azure resource ID.</param> /// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param> /// <param name="rowKey">The row key for storage. This is '{resourceType}:{customResourceName}'.</param> /// <returns>The HTTP response containing the created custom resource.</returns>
-public static async Task<HttpResponseMessage> CreateCustomResource(HttpRequestMessage requestMessage, CloudTable tableStorage, ResourceId azureResourceId, string partitionKey, string rowKey)
+public static async Task<HttpResponseMessage> CreateCustomResource(HttpRequestMessage requestMessage, TableClient tableClient, ResourceId azureResourceId, string partitionKey, string rowKey)
{ // Adds the Azure top-level properties. var myCustomResource = JObject.Parse(await requestMessage.Content.ReadAsStringAsync());
public static async Task<HttpResponseMessage> CreateCustomResource(HttpRequestMe
myCustomResource["id"] = azureResourceId.Id; // Save the resource into storage.
- var insertOperation = TableOperation.InsertOrReplace(
- new CustomResource
- {
- PartitionKey = partitionKey,
- RowKey = rowKey,
- Data = myCustomResource.ToString(),
- });
- await tableStorage.ExecuteAsync(insertOperation);
+ var customEntity = new CustomResource
+ {
+ PartitionKey = partitionKey,
+ RowKey = rowKey,
+ Data = myCustomResource.ToString(),
+ };
+ await tableClient.AddEntityAsync(customEntity);
var createResponse = requestMessage.CreateResponse(HttpStatusCode.OK); createResponse.Content = new StringContent(myCustomResource.ToString(), System.Text.Encoding.UTF8, "application/json");
Add the following **RetrieveCustomResource** method to retrieve existing resourc
/// Retrieves a custom resource. /// </summary> /// <param name="requestMessage">The HTTP request message.</param>
-/// <param name="tableStorage">The Azure Table storage account.</param>
+/// <param name="tableClient">The client that allows you to interact with Azure Tables hosted in either Azure storage accounts or Azure Cosmos DB table API.</param>
/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param> /// <param name="rowKey">The row key for storage. This is '{resourceType}:{customResourceName}'.</param> /// <returns>The HTTP response containing the existing custom resource.</returns>
-public static async Task<HttpResponseMessage> RetrieveCustomResource(HttpRequestMessage requestMessage, CloudTable tableStorage, string partitionKey, string rowKey)
+public static async Task<HttpResponseMessage> RetrieveCustomResource(HttpRequestMessage requestMessage, TableClient tableClient, string partitionKey, string rowKey)
{ // Attempt to retrieve the Existing Stored Value
- var tableQuery = TableOperation.Retrieve<CustomResource>(partitionKey, rowKey);
- var existingCustomResource = (CustomResource)(await tableStorage.ExecuteAsync(tableQuery)).Result;
+ var queryResult = await tableClient.GetEntityAsync<CustomResource>(partitionKey, rowKey);
+ var existingCustomResource = queryResult.Value;
var retrieveResponse = requestMessage.CreateResponse( existingCustomResource != null ? HttpStatusCode.OK : HttpStatusCode.NotFound);
Add the following **RemoveCustomResource** method to remove existing resources:
/// Removes an existing custom resource. /// </summary> /// <param name="requestMessage">The HTTP request message.</param>
-/// <param name="tableStorage">The Azure storage account table.</param>
+/// <param name="tableClient">The client that allows you to interact with Azure Tables hosted in either Azure storage accounts or Azure Cosmos DB table API.</param>
/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param> /// <param name="rowKey">The row key for storage. This is '{resourceType}:{customResourceName}'.</param> /// <returns>The HTTP response containing the result of the deletion.</returns>
-public static async Task<HttpResponseMessage> RemoveCustomResource(HttpRequestMessage requestMessage, CloudTable tableStorage, string partitionKey, string rowKey)
+public static async Task<HttpResponseMessage> RemoveCustomResource(HttpRequestMessage requestMessage, TableClient tableClient, string partitionKey, string rowKey)
{ // Attempt to retrieve the Existing Stored Value
- var tableQuery = TableOperation.Retrieve<CustomResource>(partitionKey, rowKey);
- var existingCustomResource = (CustomResource)(await tableStorage.ExecuteAsync(tableQuery)).Result;
+ var queryResult = await tableClient.GetEntityAsync<CustomResource>(partitionKey, rowKey);
+ var existingCustomResource = queryResult.Value;
if (existingCustomResource != null) {
- var deleteOperation = TableOperation.Delete(existingCustomResource);
- await tableStorage.ExecuteAsync(deleteOperation);
+ await tableClient.DeleteEntityAsync(existingCustomResource.PartitionKey, existingCustomResource.RowKey);
} return requestMessage.CreateResponse(
Add the following **EnumerateAllCustomResources** method to enumerate the existi
/// Enumerates all the stored custom resources for a given type. /// </summary> /// <param name="requestMessage">The HTTP request message.</param>
-/// <param name="tableStorage">The Azure Table storage account.</param>
+/// <param name="tableClient">The client that allows you to interact with Azure Tables hosted in either Azure storage accounts or Azure Cosmos DB table API.</param>
/// <param name="partitionKey">The partition key for storage. This is the custom provider ID.</param> /// <param name="resourceType">The resource type of the enumeration.</param> /// <returns>The HTTP response containing a list of resources stored under 'value'.</returns>
-public static async Task<HttpResponseMessage> EnumerateAllCustomResources(HttpRequestMessage requestMessage, CloudTable tableStorage, string partitionKey, string resourceType)
+public static async Task<HttpResponseMessage> EnumerateAllCustomResources(HttpRequestMessage requestMessage, TableClient tableClient, string partitionKey, string resourceType)
{ // Generate upper bound of the query. var rowKeyUpperBound = new StringBuilder(resourceType); rowKeyUpperBound[rowKeyUpperBound.Length - 1]++; // Create the enumeration query.
- var enumerationQuery = new TableQuery<CustomResource>().Where(
- TableQuery.CombineFilters(
- TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, partitionKey),
- TableOperators.And,
- TableQuery.CombineFilters(
- TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThan, resourceType),
- TableOperators.And,
- TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThan, rowKeyUpperBound.ToString()))));
+ var queryResultsFilter = tableClient.Query<CustomResource>(filter: $"PartitionKey eq '{partitionKey}' and RowKey lt '{rowKeyUpperBound.ToString()}' and RowKey ge '{resourceType}'");
- var customResources = (await tableStorage.ExecuteQuerySegmentedAsync(enumerationQuery, null))
- .ToList().Select(customResource => JToken.Parse(customResource.Data));
+ var customResources = queryResultsFilter.ToList().Select(customResource => JToken.Parse(customResource.Data));
var enumerationResponse = requestMessage.CreateResponse(HttpStatusCode.OK); enumerationResponse.Content = new StringContent(new JObject(new JProperty("value", customResources)).ToString(), System.Text.Encoding.UTF8, "application/json");
After all the RESTful methods are added to the function app, update the main **R
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="log">The logger.</param>
-/// <param name="tableStorage">The Azure Table storage account.</param>
+/// <param name="tableClient">The client that allows you to interact with Azure Tables hosted in either Azure storage accounts or Azure Cosmos DB table API.</param>
/// <returns>The HTTP response for the custom Azure API.</returns>
-public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogger log, CloudTable tableStorage)
+public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogger log, TableClient tableClient)
{ // Get the unique Azure request path from request headers. var requestPath = req.Headers.GetValues("x-ms-customproviders-requestpath").FirstOrDefault();
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogge
case HttpMethod m when m == HttpMethod.Get && !isResourceRequest: return await EnumerateAllCustomResources( requestMessage: req,
- tableStorage: tableStorage,
+ tableClient: tableClient,
partitionKey: partitionKey, resourceType: rowKey);
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogge
case HttpMethod m when m == HttpMethod.Get && isResourceRequest: return await RetrieveCustomResource( requestMessage: req,
- tableStorage: tableStorage,
+ tableClient: tableClient,
partitionKey: partitionKey, rowKey: rowKey);
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogge
case HttpMethod m when m == HttpMethod.Put && isResourceRequest: return await CreateCustomResource( requestMessage: req,
- tableStorage: tableStorage,
+ tableClient: tableClient,
azureResourceId: azureResourceId, partitionKey: partitionKey, rowKey: rowKey);
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogge
case HttpMethod m when m == HttpMethod.Delete && isResourceRequest: return await RemoveCustomResource( requestMessage: req,
- tableStorage: tableStorage,
+ tableClient: tableClient,
partitionKey: partitionKey, rowKey: rowKey);
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogge
} ```
-The updated **Run** method now includes the *tableStorage* input binding that you added for Azure Table storage. The first part of the method reads the `x-ms-customproviders-requestpath` header and uses the `Microsoft.Azure.Management.ResourceManager.Fluent` library to parse the value as a resource ID. The `x-ms-customproviders-requestpath` header is sent by the custom provider and specifies the path of the incoming request.
+The updated **Run** method now includes the *tableClient* input binding that you added for Azure Table storage. The first part of the method reads the `x-ms-customproviders-requestpath` header and uses the `Microsoft.Azure.Management.ResourceManager.Fluent` library to parse the value as a resource ID. The `x-ms-customproviders-requestpath` header is sent by the custom provider and specifies the path of the incoming request.
By using the parsed resource ID, you can generate the **partitionKey** and **rowKey** values for the data to look up or to store custom resources.
using System.Globalization;
using System.Collections.Generic; using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Host;
-using Microsoft.WindowsAzure.Storage.Table;
+using Azure;
+using Azure.Data.Tables;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core; using Newtonsoft.Json; using Newtonsoft.Json.Linq;
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 04/20/2021 Last updated : 07/20/2021 # Tag support for Azure resources
Jump to a resource provider namespace:
> | servers / restorableDroppedDatabases | No | No | > | servers / serviceobjectives | No | No | > | servers / tdeCertificates | No | No |
-> | virtualClusters | Yes | Yes |
+> | virtualClusters | No | No |
<a id="sqlnote"></a>
azure-sql-edge Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/release-notes.md
SQL engine build 15.0.2000.1559
## Azure SQL Edge 1.0.3
-SQL engine build 15.0.2000.1554
+SQL engine build 15.0.2000.1557
### Fixes
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
--- Previously updated : 03/10/2021+++ Last updated : 07/20/2021
-# Automated backups - Azure SQL Database & SQL Managed Instance
+# Automated backups - Azure SQL Database & Azure SQL Managed Instance
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
When you restore a database, the service determines which full, differential, an
### Backup storage redundancy
-By default, SQL Database and SQL Managed Instance store data in geo-redundant [storage blobs](../../storage/common/storage-redundancy.md) that are replicated to a [paired region](../../best-practices-availability-paired-regions.md). This helps to protect against outages impacting backup storage in the primary region and allow you to restore your server to a different region in the event of a disaster.
+By default, SQL Database and SQL Managed Instance store data in geo-redundant [storage blobs](../../storage/common/storage-redundancy.md) that are replicated to a [paired region](../../best-practices-availability-paired-regions.md). Geo-redundancy helps to protect against outages impacting backup storage in the primary region and allows you to restore your server to a different region in the event of a disaster.
-The option to configure backup storage redundancy provides the flexibility to choose between locally-redundant, zone-redundant, or geo-redundant storage blobs for a SQL Managed Instance or a SQL Database. To ensure that your data stays within the same region where your managed instance or SQL database is deployed, you can change the default geo-redundant backup storage redundancy and configure either locally-redundant or zone-redundant storage blobs for backups. Storage redundancy mechanisms store multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failure, network or power outages, or massive natural disasters. The configured backup storage redundancy is applied to both short-term backup retention settings that are used for point in time restore (PITR) and long-term retention backups used for long-term backups (LTR).
+The option to configure backup storage redundancy provides the flexibility to choose between locally redundant, zone-redundant, or geo-redundant storage blobs. To ensure that your data stays within the same region where your managed instance or SQL database is deployed, you can change the default geo-redundant backup storage redundancy and configure either locally redundant or zone-redundant storage blobs for backups. Storage redundancy mechanisms store multiple copies of your data so that it is protected from planned and unplanned events, including transient hardware failure, network or power outages, or massive natural disasters. The configured backup storage redundancy is applied to both short-term backup retention settings that are used for point in time restore (PITR) and long-term retention backups used for long-term backups (LTR).
-For a SQL Database the backup storage redundancy can be configured at the time of database creation or can be updated for an existing database; the changes made to an existing database apply to future backups only. After the backup storage redundancy of an existing database is updated, it may take up to 48 hours for the changes to be applied. Note that, geo restore is disabled as soon as a database is updated to use local or zone redundant storage.
+For SQL Database, the backup storage redundancy can be configured at the time of database creation or can be updated for an existing database; the changes made to an existing database apply to future backups only. After the backup storage redundancy of an existing database is updated, it may take up to 48 hours for the changes to be applied. Geo-restore is disabled as soon as a database is updated to use locally redundant or zone-redundant storage.
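As an illustrative sketch, configuring backup storage redundancy from PowerShell might look like the following, assuming a recent Az.Sql module; server, database, and resource group names are placeholders:

```powershell
# Sketch only: pick backup storage redundancy at creation time, or change it later.
# Accepted values are Local, Zone, and Geo; resource names are placeholders.
New-AzSqlDatabase `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myserver' `
    -DatabaseName 'MyDatabase' `
    -BackupStorageRedundancy 'Zone'

Set-AzSqlDatabase `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myserver' `
    -DatabaseName 'MyDatabase' `
    -BackupStorageRedundancy 'Local'
```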
> [!IMPORTANT]
You can use these backups to:
> Geo-restore is available only for SQL databases or managed instances configured with geo-redundant backup storage. - **Restore from long-term backup** - [Restore a database from a specific long-term backup](long-term-retention-overview.md) of a single database or pooled database, if the database has been configured with a long-term retention policy (LTR). LTR allows you to restore an old version of the database by using [the Azure portal](long-term-backup-retention-configure.md#using-the-azure-portal) or [Azure PowerShell](long-term-backup-retention-configure.md#using-powershell) to satisfy a compliance request or to run an old version of the application. For more information, see [Long-term retention](long-term-retention-overview.md).
-To perform a restore, see [Restore database from backups](recovery-using-backups.md).
- > [!NOTE] > In Azure Storage, the term *replication* refers to copying blobs from one location to another. In SQL, *database replication* refers to various technologies used to keep multiple secondary databases synchronized with a primary database.
-You can try backup configuration and restore operations using the following examples:
+### <a id="restore-capabilities"></a>Restore capabilities and features of Azure SQL Database and Azure SQL Managed Instance
+
+This table summarizes the capabilities and features of [point in time restore (PITR)](recovery-using-backups.md#point-in-time-restore), [geo-restore](recovery-using-backups.md#geo-restore), and [long-term retention backups](long-term-retention-overview.md).
+
+| **Backup Properties** | Point-in-time restore (PITR) | Geo-restore | Long-term backup restore |
+|-|--|--|--|
+| **Types of SQL backup** | Full, differential, log | Replicated copies of PITR backups | Only the full backups |
+| **Recovery Point Objective (RPO)** | 5-10 minutes, based on compute size and amount of database activity. | Up to 1 hour, based on geo-replication.\* | One week (or user's policy). |
+| **Recovery Time Objective (RTO)** | Restore usually takes <12 hours, but could take longer dependent on size and activity. See [Recovery](recovery-using-backups.md#recovery-time). | Restore usually takes <12 hours, but could take longer dependent on size and activity. See [Recovery](recovery-using-backups.md#recovery-time). | Restore usually takes <12 hours, but could take longer dependent on size and activity. See [Recovery](recovery-using-backups.md#recovery-time). |
+| **Retention** | 7 days by default; up to 35 days. | Enabled by default, same as source.\*\* | Not enabled by default; retention up to 10 years. |
+| **Azure storage** | Geo-redundant by default. Can optionally configure zone-redundant or locally redundant storage. | Available when PITR backup storage redundancy is set to geo-redundant. Not available when PITR backup storage is zone-redundant or locally redundant. | Geo-redundant by default. Can configure zone-redundant or locally redundant storage. |
+| **Use to create new database in same region** | Supported | Supported | Supported |
+| **Use to create new database in another region** | Not supported | Supported in any Azure region | Supported in any Azure region |
+| **Use to create new database in another subscription** | Not supported | Not supported\*\*\* | Not supported\*\*\* |
+| **Restore via Azure portal** | Yes | Yes | Yes |
+| **Restore via PowerShell** | Yes | Yes | Yes |
+| **Restore via Azure CLI** | Yes | Yes | Yes |
+| | | | |
+
+\* For business-critical applications that require large databases and must ensure business continuity, use [Auto-failover groups](auto-failover-group-overview.md).
+
+\*\* All PITR backups are stored on geo-redundant storage by default. Hence, geo-restore is enabled by default.
+
+\*\*\* The workaround is to restore to a new server and use Resource Move to move the server to another subscription.
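An illustrative sketch of that workaround with the Az.Resources module (IDs and names are placeholders):

```powershell
# Sketch only: move a restored server to another subscription. IDs and names are placeholders.
$serverId = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup/providers/Microsoft.Sql/servers/myrestoredserver'

Move-AzResource `
    -ResourceId $serverId `
    -DestinationSubscriptionId '11111111-1111-1111-1111-111111111111' `
    -DestinationResourceGroupName 'TargetResourceGroup'
```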
+
+### Restoring a database from backups
+
+To perform a restore, see [Restore database from backups](recovery-using-backups.md). You can try backup configuration and restore operations using the following examples:
| Operation | Azure portal | Azure PowerShell | ||||
The first full backup is scheduled immediately after a new database is created o
## Backup storage consumption
-With SQL Server backup and restore technology, restoring a database to a point in time requires an uninterrupted backup chain consisting of one full backup, optionally one differential backup, and one or more transaction log backups. SQL Database and SQL Managed Instance backup schedule includes one full backup every week. Therefore, to enable PITR within the entire retention period, the system must store additional full, differential, and transaction log backups for up to a week longer than the configured retention period.
+With SQL Server backup and restore technology, restoring a database to a point in time requires an uninterrupted backup chain consisting of one full backup, optionally one differential backup, and one or more transaction log backups. SQL Database and SQL Managed Instance backup schedule includes one full backup every week. Therefore, to provide PITR within the entire retention period, the system must store additional full, differential, and transaction log backups for up to a week longer than the configured retention period.
In other words, for any point in time during the retention period, there must be a full backup that is older than the oldest time of the retention period, as well as an uninterrupted chain of differential and transaction log backups from that full backup until the next full backup. > [!NOTE]
-> To enable PITR, additional backups are stored for up to a week longer than the configured retention period. Backup storage is charged at the same rate for all backups.
+> To provide PITR, additional backups are stored for up to a week longer than the configured retention period. Backup storage is charged at the same rate for all backups.
Backups that are no longer needed to provide PITR functionality are automatically deleted. Because differential backups and log backups require an earlier full backup to be restorable, all three backup types are purged together in weekly sets.
For all databases including [TDE encrypted](transparent-data-encryption-tde-over
SQL Database and SQL Managed Instance compute your total used backup storage as a cumulative value. Every hour, this value is reported to the Azure billing pipeline, which is responsible for aggregating this hourly usage to calculate your consumption at the end of each month. After the database is deleted, consumption decreases as backups age out and are deleted. Once all backups are deleted and PITR is no longer possible, billing stops. > [!IMPORTANT]
-> Backups of a database are retained to enable PITR even if the database has been deleted. While deleting and re-creating a database may save storage and compute costs, it may increase backup storage costs, because the service retains backups for each deleted database, every time it is deleted.
+> Backups of a database are retained to provide PITR even if the database has been deleted. While deleting and re-creating a database may save storage and compute costs, it may increase backup storage costs, because the service retains backups for each deleted database, every time it is deleted.
### Monitor consumption
-For vCore databases, the storage consumed by each type of backup (full, differential, and log) is reported on the database monitoring blade as a separate metric. The following diagram shows how to monitor the backup storage consumption for a single database. This feature is currently not available for managed instances.
+For vCore databases, the storage consumed by each type of backup (full, differential, and log) is reported on the database monitoring pane as a separate metric. The following diagram shows how to monitor the backup storage consumption for a single database. This feature is currently not available for managed instances.
![Monitor database backup consumption in the Azure portal](./media/automated-backups-overview/backup-metrics.png)
Backup storage consumption up to the maximum data size for a database is not cha
- For large data load operations, consider using [clustered columnstore indexes](/sql/relational-databases/indexes/columnstore-indexes-overview) and following related [best practices](/sql/relational-databases/indexes/columnstore-indexes-data-loading-guidance), and/or reduce the number of non-clustered indexes. - In the General Purpose service tier, the provisioned data storage is less expensive than the price of the backup storage. If you have continually high excess backup storage costs, you might consider increasing data storage to save on the backup storage. - Use TempDB instead of permanent tables in your application logic for storing temporary results and/or transient data.-- Use locally-redundant backup storage whenever possible (for example dev/test environments)
+- Use locally redundant backup storage whenever possible (for example, dev/test environments).
## Backup retention
-For all new, restored, and copied databases, Azure SQL Database and Azure SQL Managed Instance retain sufficient backups to allow PITR within the last 7 days by default. With the exception of Hyperscale and Basic tier databases, you can [change backup retention period](#change-the-pitr-backup-retention-period) per each active database in the 1-35 day range. As described in [Backup storage consumption](#backup-storage-consumption), backups stored to enable PITR may be older than the retention period. For Azure SQL Managed Instance only, it is possible to set the PITR backup retention rate once a database has been deleted in the 0-35 days range.
+For all new, restored, and copied databases, Azure SQL Database and Azure SQL Managed Instance retain sufficient backups to allow PITR within the last seven days by default. With the exception of Hyperscale and Basic tier databases, you can [change backup retention period](#change-the-pitr-backup-retention-period) per each active database in the 1-35 day range. As described in [Backup storage consumption](#backup-storage-consumption), backups stored to enable PITR may be older than the retention period. For Azure SQL Managed Instance only, it is possible to set the PITR backup retention rate once a database has been deleted in the 0-35 days range.
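As a sketch, changing the PITR retention for a single database with the Az.Sql module might look like the following; resource names are placeholders:

```powershell
# Sketch only: set the PITR backup retention for a single database to 28 days.
# Resource names are placeholders.
Set-AzSqlDatabaseBackupShortTermRetentionPolicy `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myserver' `
    -DatabaseName 'MyDatabase' `
    -RetentionDays 28
```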
If you delete a database, the system keeps backups in the same way it would for an online database with its specific retention period. You cannot change backup retention period for a deleted database.
For more information about LTR, see [Long-term backup retention](long-term-reten
## Backup storage costs
-The price for backup storage varies and depends on your purchasing model (DTU or vCore), chosen backup storage redundancy option, and also on your region. The backup storage is charged per GB/month consumed, for pricing see [Azure SQL Database pricing](https://azure.microsoft.com/pricing/details/sql-database/single/) page and [Azure SQL Managed Instance pricing](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/) page.
+The price for backup storage varies and depends on your purchasing model (DTU or vCore), chosen backup storage redundancy option, and also on your region. The backup storage is charged per GB/month consumed, for pricing see [Azure SQL Database pricing](https://azure.microsoft.com/pricing/details/sql-database/single/) page and [Azure SQL Managed Instance pricing](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/) page.
+
+For more on purchasing models, see [Choose between the vCore and DTU purchasing models](purchasing-models.md).
> [!NOTE] > Azure invoice will show only the excess backup storage consumed, not the entire backup storage consumption. For example, in a hypothetical scenario, if you have provisioned 4TB of data storage, you will get 4TB of free backup storage space. In case that you have used the total of 5.8TB of backup storage space, Azure invoice will show only 1.8TB, as only excess backup storage used is charged.
For managed instances, the total billable backup storage size is aggregated at t
`Total billable backup storage size = (total size of full backups + total size of differential backups + total size of log backups) ΓÇô maximum instance data storage`
-Total billable backup storage, if any, will be charged in GB/month as per the rate of the backup storage redundancy used. This backup storage consumption will depend on the workload and size of individual databases, elastic pools, and managed instances. Heavily modified databases have larger differential and log backups, because the size of these backups is proportional to the amount of data changes. Therefore, such databases will have higher backup charges.
+Total billable backup storage, if any, will be charged in GB/month as per the rate of the backup storage redundancy used. This backup storage consumption will depend on the workload and size of individual databases, elastic pools, and managed instances. Heavily modified databases have larger differential and log backups, because the size of these backups is proportional to the amount of changed data. Therefore, such databases will have higher backup charges.
SQL Database and SQL Managed Instance computes your total billable backup storage as a cumulative value across all backup files. Every hour, this value is reported to the Azure billing pipeline, which aggregates this hourly usage to get your backup storage consumption at the end of each month. If a database is deleted, backup storage consumption will gradually decrease as older backups age out and are deleted. Because differential backups and log backups require an earlier full backup to be restorable, all three backup types are purged together in weekly sets. Once all backups are deleted, billing stops. As a simplified example, assume a database has accumulated 744 GB of backup storage and that this amount stays constant throughout an entire month because the database is completely idle. To convert this cumulative storage consumption to hourly usage, divide it by 744.0 (31 days per month * 24 hours per day). SQL Database will report to Azure billing pipeline that the database consumed 1 GB of PITR backup each hour, at a constant rate. Azure billing will aggregate this consumption and show a usage of 744 GB for the entire month. The cost will be based on the amount/GB/month rate in your region.
-Now, a more complex example. Suppose the same idle database has its retention increased from 7 days to 14 days in the middle of the month. This increase results in the total backup storage doubling to 1,488 GB. SQL Database would report 1 GB of usage for hours 1 through 372 (the first half of the month). It would report the usage as 2 GB for hours 373 through 744 (the second half of the month). This usage would be aggregated to a final bill of 1,116 GB/month.
+Now consider a more complex example. Suppose the same idle database has its retention increased from seven days to 14 days in the middle of the month. This increase results in the total backup storage doubling to 1,488 GB. SQL Database would report 1 GB of usage for hours 1 through 372 (the first half of the month). It would report the usage as 2 GB for hours 373 through 744 (the second half of the month). This usage would be aggregated to a final bill of 1,116 GB/month.
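The following PowerShell sketch reproduces this arithmetic. The 744 GB figure and the hour at which retention doubles are the hypothetical values from the example above, and the divisor assumes a 31-day month.

```powershell
# A minimal sketch of the billing arithmetic from the example above.
# The GB figure and the retention-change hour are the hypothetical values used in the text.
$hoursInMonth        = 31 * 24   # 744 hours in a 31-day month
$backupStorageGB     = 744       # cumulative backup storage before the retention change
$retentionChangeHour = 372       # hour at which retention is doubled from 7 to 14 days

$hourlyUsageGB = for ($hour = 1; $hour -le $hoursInMonth; $hour++) {
    if ($hour -le $retentionChangeHour) {
        $backupStorageGB / $hoursInMonth            # 1 GB reported per hour
    }
    else {
        (2 * $backupStorageGB) / $hoursInMonth      # 2 GB reported per hour after the change
    }
}

# Azure billing aggregates the hourly values into the monthly GB/month figure.
$monthlyUsageGB = ($hourlyUsageGB | Measure-Object -Sum).Sum
Write-Output "Billable backup storage for the month: $monthlyUsageGB GB"   # 1116 GB
```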
Actual backup billing scenarios are more complex. Because the rate of change in the database depends on the workload and varies over time, the size of each differential and log backup varies as well, causing the hourly backup storage consumption to fluctuate accordingly. Furthermore, each differential backup contains all changes made in the database since the last full backup. The total size of all differential backups therefore gradually increases over the course of a week, and then drops sharply once an older set of full, differential, and log backups ages out. For example, if heavy write activity such as an index rebuild runs just after a full backup completes, the modifications made by the index rebuild are included in the transaction log backups taken over the duration of the rebuild, in the next differential backup, and in every differential backup taken until the next full backup occurs. For the latter scenario in larger databases, an optimization in the service creates a full backup instead of a differential backup if a differential backup would otherwise be excessively large. This reduces the size of all differential backups until the following full backup.
You can monitor total backup storage consumption for each backup type (full, dif
### Backup storage redundancy Backup storage redundancy impacts backup costs in the following way:-- locally-redundant price = x
+- locally redundant price = x
- zone-redundant price = 1.25x - geo-redundant price = 2x
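To illustrate how these multipliers affect the bill, here is a minimal sketch with placeholder figures; the rate and the excess-storage amount below are hypothetical, so check the pricing pages for actual rates.

```powershell
# Hypothetical figures only: estimate the monthly charge for each backup storage
# redundancy option from a base locally redundant rate (x) per GB/month.
$locallyRedundantRatePerGB = 0.20    # hypothetical rate (x); see the pricing page for real values
$excessBackupStorageGB     = 1800    # excess backup storage consumed this month, in GB

[pscustomobject]@{
    LocallyRedundant = $excessBackupStorageGB * $locallyRedundantRatePerGB          # x
    ZoneRedundant    = $excessBackupStorageGB * $locallyRedundantRatePerGB * 1.25   # 1.25x
    GeoRedundant     = $excessBackupStorageGB * $locallyRedundantRatePerGB * 2      # 2x
}
```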
If your database is encrypted with TDE, backups are automatically encrypted at r
## Backup integrity
-On an ongoing basis, the Azure SQL engineering team automatically tests the restore of automated database backups. (This testing is not currently available in SQL Managed Instance.) Upon point-in-time restore, databases also receive DBCC CHECKDB integrity checks.
+On an ongoing basis, the Azure SQL engineering team automatically tests the restore of automated database backups. (This testing is not currently available in SQL Managed Instance. For SQL Managed Instance, you should schedule DBCC CHECKDB on your databases yourself, scheduled around your workload.)
+
+Upon point-in-time restore, databases also receive DBCC CHECKDB integrity checks.
Any issues found during the integrity check will result in an alert to the engineering team. For more information, see [Data Integrity in SQL Database](https://azure.microsoft.com/blog/data-integrity-in-azure-sql-database/).
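Because this automated testing doesn't cover SQL Managed Instance, the following is a minimal sketch of running your own DBCC CHECKDB with the SqlServer PowerShell module. The instance, database, and credential values are placeholders; schedule the check around your workload (for example, with SQL Agent) during a low-activity window.

```powershell
# A minimal sketch (placeholder names and credentials): run an integrity check
# against a database on a SQL Managed Instance by using the SqlServer module.
Install-Module -Name SqlServer -Scope CurrentUser   # if not already installed

Invoke-Sqlcmd `
    -ServerInstance "<managed-instance-name>.<dns-zone>.database.windows.net" `
    -Database "<database-name>" `
    -Username "<admin-login>" `
    -Password "<password>" `
    -Query "DBCC CHECKDB WITH NO_INFOMSGS;" `
    -QueryTimeout 65535                             # allow the check to run long
```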
You can change the default PITR backup retention period by using the Azure porta
To change the PITR backup retention period for active databases by using the Azure portal, go to the server or managed instance with the databases whose retention period you want to change. Select **Backups** in the left pane, then select the **Retention policies** tab. Select the database(s) for which you want to change the PITR backup retention. Then select **Configure retention** from the action bar. -- #### [SQL Database](#tab/single-database) ![Change PITR retention, server level](./media/automated-backups-overview/configure-backup-retention-sqldb.png)
Set-AzSqlDatabaseBackupShortTermRetentionPolicy -ResourceGroupName resourceGroup
#### [SQL Managed Instance](#tab/managed-instance)
-To change the PITR backup retention for an **individual active** SQL Managed Instance databases, use the following PowerShell example.
+To change the PITR backup retention for a **single active** database in a SQL Managed Instance, use the following PowerShell example.
```powershell # SET new PITR backup retention period on an active individual database
To change the PITR backup retention for an **individual active** SQL Managed Ins
Set-AzSqlInstanceDatabaseBackupShortTermRetentionPolicy -ResourceGroupName resourceGroup -InstanceName testserver -DatabaseName testDatabase -RetentionDays 1 ```
-To change the PITR backup retention for **all active** SQL Managed Instance databases, use the following PowerShell example.
+To change the PITR backup retention for **all active** databases in a SQL Managed Instance, use the following PowerShell example.
```powershell # SET new PITR backup retention period for ALL active databases
To change the PITR backup retention for **all active** SQL Managed Instance data
Get-AzSqlInstanceDatabase -ResourceGroupName resourceGroup -InstanceName testserver | Set-AzSqlInstanceDatabaseBackupShortTermRetentionPolicy -RetentionDays 1 ```
-To change the PITR backup retention for an **individual deleted** SQL Managed Instance database, use the following PowerShell example.
+To change the PITR backup retention for a **single deleted** database in a SQL Managed Instance, use the following PowerShell example.
```powershell # SET new PITR backup retention on an individual deleted database
To change the PITR backup retention for an **individual deleted** SQL Managed In
Get-AzSqlDeletedInstanceDatabaseBackup -ResourceGroupName resourceGroup -InstanceName testserver -DatabaseName testDatabase | Set-AzSqlInstanceDatabaseBackupShortTermRetentionPolicy -RetentionDays 0 ```
-To change the PITR backup retention for **all deleted** SQL Managed Instance databases, use the following PowerShell example.
+To change the PITR backup retention for **all deleted** databases in a SQL Managed Instance, use the following PowerShell example.
```powershell # SET new PITR backup retention for ALL deleted databases
For more information, see [Backup Retention REST API](/rest/api/sql/backupshortt
> [!NOTE] > Configurable backup storage redundancy for SQL Managed Instance can be specified only during the managed instance creation process. Once the resource is provisioned, you can't change the backup storage redundancy option. For SQL Database, the public preview of this feature is currently available in all Azure regions, and it is generally available in the Southeast Asia Azure region.
-A backup storage redundancy of a managed instance can be set during instance creation only. For a SQL Database it can be set when creating the database or can be updated for an existing database. The default value is geo-redundant storage. For differences in pricing between locally-redundant, zone-redundant and geo-redundant backup storage visit [managed instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/).
+The backup storage redundancy of a managed instance can be set only during instance creation. For a database in SQL Database, it can be set when you create the database or updated for an existing database. The default value is geo-redundant storage. For differences in pricing between locally redundant, zone-redundant, and geo-redundant backup storage, see the [managed instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/).
### Configure backup storage redundancy by using the Azure portal #### [SQL Database](#tab/single-database)
-In Azure portal, you can configure the backup storage redundancy on the **Create SQL Database** blade. The option is available under the Backup Storage Redundancy section.
-![Open Create SQL Database blade](./media/automated-backups-overview/sql-database-backup-storage-redundancy.png)
+In the Azure portal, you can configure the backup storage redundancy on the **Create SQL Database** pane. The option is available under the Backup Storage Redundancy section.
+![Open Create SQL Database pane](./media/automated-backups-overview/sql-database-backup-storage-redundancy.png)
#### [SQL Managed Instance](#tab/managed-instance)
-In the Azure portal, the option to change backup storage redundancy is located on the **Compute + storage** blade accessible from the **Configure Managed Instance** option on the **Basics** tab when you are creating your SQL Managed Instance.
-![Open Compute+Storage configuration-blade](./media/automated-backups-overview/open-configuration-blade-managedinstance.png)
+In the Azure portal, the option to change backup storage redundancy is located on the **Compute + storage** pane, which is accessible from the **Configure Managed Instance** option on the **Basics** tab when you create your SQL Managed Instance.
+![Open Compute+Storage configuration-pane](./media/automated-backups-overview/open-configuration-blade-managedinstance.png)
-Find the option to select backup storage redundancy on the **Compute + storage** blade.
+Find the option to select backup storage redundancy on the **Compute + storage** pane.
![Configure backup storage redundancy](./media/automated-backups-overview/select-backup-storage-redundancy-managedinstance.png)
Find the option to select backup storage redundancy on the **Compute + storage**
#### [SQL Database](#tab/single-database)
-To configure backup storage redundancy when creating a new database you can specify the -BackupStoageRedundancy parameter. Possible values are Geo, Zone and Local. By default, all SQL Databases use geo-redundant storage for backups. Geo Restore is disabled if a database is created with local or zone redundant backup storage.
+To configure backup storage redundancy when creating a new database, you can specify the -BackupStorageRedundancy parameter. Possible values are Geo, Zone, and Local. By default, all databases in SQL Database use geo-redundant storage for backups. Geo-restore is disabled if a database is created with locally redundant or zone-redundant backup storage.
```powershell # Create a new database with geo-redundant backup storage.
New-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -D
For details visit [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase).
-To update backup storage redundancy of an existing database, you can use the -BackupStorageRedundancy parameter. Possible values are Geo, Zone and Local.
-Note that, it may take up to 48 hours for the changes to be applied on the database. Switching from geo-redundant backup storage to local or zone redundant storage disables geo restore.
+To update the backup storage redundancy of an existing database, you can use the -BackupStorageRedundancy parameter. Possible values are Geo, Zone, and Local.
+It may take up to 48 hours for the changes to be applied to the database. Switching from geo-redundant backup storage to locally redundant or zone-redundant storage disables geo-restore.
```powershell # Change the backup storage redundancy for Database01 to zone-redundant.
For details visit [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabas
#### [SQL Managed Instance](#tab/managed-instance)
-For configuring backup storage redundancy during managed instance creation you can specify -BackupStoageRedundancy parameter. Possible values are Geo, Zone and Local.
+To configure backup storage redundancy during managed instance creation, you can specify the -BackupStorageRedundancy parameter. Possible values are Geo, Zone, and Local.
```powershell New-AzSqlInstance -Name managedInstance2 -ResourceGroupName ResourceGroup01 -Location westcentralus -AdministratorCredential (Get-Credential) -SubnetId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/resourcegroup01/providers/Microsoft.Network/virtualNetworks/vnet_name/subnets/subnet_name" -LicenseType LicenseIncluded -StorageSizeInGB 1024 -VCore 16 -Edition "GeneralPurpose" -ComputeGeneration Gen4 -BackupStorageRedundancy Geo ```
-For more details visit [New-AzSqlInstance](/powershell/module/az.sql/new-azsqlinstance).
+For more information, see [New-AzSqlInstance](/powershell/module/az.sql/new-azsqlinstance).
## Use Azure Policy to enforce backup storage redundancy
-If you have data residency requirements that require you to keep all your data in a single Azure region, you may want to enforce zone-redundant or locally-redundant backups for your SQL Database or Managed Instance using Azure Policy.
+If you have data residency requirements that require you to keep all your data in a single Azure region, you may want to enforce zone-redundant or locally redundant backups for your SQL Database or SQL Managed Instance by using Azure Policy.
Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep these resources compliant with your corporate standards and service level agreements. For more information, see [Overview of Azure Policy](../../governance/policy/overview.md). ### Built-in backup storage redundancy policies
Following new built-in policies are added, which can be assigned at the subscrip
[SQL Database should avoid using GRS backup redundancy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb219b9cf-f672-4f96-9ab0-f5a3ac5e1c13) + [SQL Managed Instances should avoid using GRS backup redundancy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa9934fd7-29f2-4e6d-ab3d-607ea38e9079) A full list of built-in policy definitions for SQL Database and Managed Instance can be found [here](./policy-reference.md).
-To enforce data residency requirements at an organizational level, these policies can be assigned to a subscription. After these are assigned at a subscription level, users in the given subscription will not be able to create a database or a managed instance with geo-redundant backup storage via Azure portal or Azure PowerShell.
+To enforce data residency requirements at an organizational level, these policies can be assigned to a subscription. After these policies are assigned at the subscription level, users in that subscription won't be able to create a database or a managed instance with geo-redundant backup storage via the Azure portal or Azure PowerShell.
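For example, the following sketch assigns the built-in *SQL Database should avoid using GRS backup redundancy* definition linked above to a subscription by using Azure PowerShell; the subscription ID and assignment name are placeholders.

```powershell
# A minimal sketch (placeholder subscription ID and assignment name): assign the
# built-in "SQL Database should avoid using GRS backup redundancy" policy
# definition at subscription scope.
$subscriptionScope = "/subscriptions/00000000-0000-0000-0000-000000000000"

# Look up the built-in definition by its definition ID (linked above).
$definition = Get-AzPolicyDefinition -Id "/providers/Microsoft.Authorization/policyDefinitions/b219b9cf-f672-4f96-9ab0-f5a3ac5e1c13"

New-AzPolicyAssignment `
    -Name "deny-grs-backup-sqldb" `
    -DisplayName "SQL Database should avoid using GRS backup redundancy" `
    -Scope $subscriptionScope `
    -PolicyDefinition $definition
```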
> [!IMPORTANT] > Azure policies are not enforced when creating a database via T-SQL. To enforce data residency when creating a database using T-SQL, [use 'LOCAL' or 'ZONE' as input to BACKUP_STORAGE_REDUNDANCY parameter in CREATE DATABASE statement](/sql/t-sql/statements/create-database-transact-sql#create-database-using-zone-redundancy-for-backups). Learn how to assign policies using the [Azure portal](../../governance/policy/assign-policy-portal.md) or [Azure PowerShell](../../governance/policy/assign-policy-powershell.md) - ## Next steps - Database backups are an essential part of any business continuity and disaster recovery strategy because they protect your data from accidental corruption or deletion. To learn about the other SQL Database business continuity solutions, see [Business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md).
+- For information about how to configure, manage, and restore from long-term retention of automated backups in Azure Blob storage by using the Azure portal, see [Manage long-term backup retention by using the Azure portal](long-term-backup-retention-configure.md).
+- For information about how to configure, manage, and restore from long-term retention of automated backups in Azure Blob storage by using PowerShell, see [Manage long-term backup retention by using PowerShell](long-term-backup-retention-configure.md#using-powershell).
- Get more information about how to [restore a database to a point in time by using the Azure portal](recovery-using-backups.md). - Get more information about how to [restore a database to a point in time by using PowerShell](scripts/restore-database-powershell.md).-- For information about how to configure, manage, and restore from long-term retention of automated backups in Azure Blob storage by using the Azure portal, see [Manage long-term backup retention by using the Azure portal](long-term-backup-retention-configure.md).-- For information about how to configure, manage, and restore from long-term retention of automated backups in Azure Blob storage by using PowerShell, see [Manage long-term backup retention by using PowerShell](long-term-backup-retention-configure.md). - To learn all about backup storage consumption on Azure SQL Managed Instance, see [Backup storage consumption on Managed Instance explained](https://aka.ms/mi-backup-explained). - To learn how to fine-tune backup storage retention and costs for Azure SQL Managed Instance, see [Fine tuning backup storage costs on Managed Instance](https://aka.ms/mi-backup-tuning).
azure-sql Business Continuity High Availability Disaster Recover Hadr Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview.md
Last updated 06/25/2019
-# Overview of business continuity with Azure SQL Database
+# Overview of business continuity with Azure SQL Database & Azure SQL Managed Instance
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)] **Business continuity** in Azure SQL Database and SQL Managed Instance refers to the mechanisms, policies, and procedures that enable your business to continue operating in the face of disruption, particularly to its computing infrastructure. In most cases, SQL Database and SQL Managed Instance handle the disruptive events that might happen in the cloud environment and keep your applications and business processes running. However, there are some disruptive events that cannot be handled by SQL Database automatically, such as:
azure-sql Gateway Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/gateway-migration.md
You will not be impacted if you have:
## What to do you do if you're affected
-We recommend that you allow outbound traffic to IP addresses for all the [gateway IP addresses](connectivity-architecture.md#gateway-ip-addresses) in the region on TCP port 1433, and port range 11000-11999. This recommendation is applicable to clients connecting from on-premises and also those connecting via Service Endpoints. For more information on port ranges, see [Connection policy](connectivity-architecture.md#connection-policy).
+We recommend that you allow outbound traffic to all the [gateway IP addresses](connectivity-architecture.md#gateway-ip-addresses) in the region on TCP port 1433. Also, allow the port range 11000 through 11999 when connecting from a client located within Azure (for example, an Azure VM) or when your connection policy is set to Redirect. This recommendation applies to clients connecting from on-premises and to those connecting via service endpoints. For more information on port ranges, see [Connection policy](connectivity-architecture.md#connection-policy).
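For example, if your client runs on an Azure VM behind a network security group, a sketch like the following opens the recommended outbound ports; the NSG and resource group names are hypothetical, and the Sql service tag stands in for the individual gateway IP addresses.

```powershell
# A minimal sketch (hypothetical NSG and resource group names): allow outbound
# TCP 1433 and 11000-11999 from an Azure subnet to the Sql service tag.
Get-AzNetworkSecurityGroup -Name "app-subnet-nsg" -ResourceGroupName "app-rg" |
    Add-AzNetworkSecurityRuleConfig -Name "Allow-Sql-Gateway-1433" `
        -Direction Outbound -Access Allow -Protocol Tcp -Priority 200 `
        -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
        -DestinationAddressPrefix "Sql" -DestinationPortRange "1433" |
    Add-AzNetworkSecurityRuleConfig -Name "Allow-Sql-Redirect-11000-11999" `
        -Direction Outbound -Access Allow -Protocol Tcp -Priority 210 `
        -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
        -DestinationAddressPrefix "Sql" -DestinationPortRange "11000-11999" |
    Set-AzNetworkSecurityGroup   # persist both rules
```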
Connections made from applications using Microsoft JDBC Driver below version 4.0 might fail certificate validation. Lower versions of Microsoft JDBC rely on Common Name (CN) in the Subject field of the certificate. The mitigation is to ensure that the hostNameInCertificate property is set to *.database.windows.net. For more information on how to set the hostNameInCertificate property, see [Connecting with Encryption](/sql/connect/jdbc/connecting-with-ssl-encryption).
azure-sql Long Term Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-retention-overview.md
Long-term retention can be enabled for Azure SQL Database, and is available in p
## How long-term retention works
-Long-term backup retention (LTR) leverages the full database backups that are [automatically created](automated-backups-overview.md) to enable point-time restore (PITR). If an LTR policy is configured, these backups are copied to different blobs for long-term storage. The copy is a background job that has no performance impact on the database workload. The LTR policy for each database in SQL Database can also specify how frequently the LTR backups are created.
+Long-term backup retention (LTR) uses the full database backups that are [automatically created](automated-backups-overview.md) to enable point-in-time restore (PITR). If an LTR policy is configured, these backups are copied to different blobs for long-term storage. The copy is a background job that has no performance impact on the database workload. The LTR policy for each database in SQL Database can also specify how frequently the LTR backups are created.
To enable LTR, you can define a policy using a combination of four parameters: weekly backup retention (W), monthly backup retention (M), yearly backup retention (Y), and week of year (WeekOfYear). If you specify W, one backup every week will be copied to the long-term storage. If you specify M, the first backup of each month will be copied to the long-term storage. If you specify Y, one backup during the week specified by WeekOfYear will be copied to the long-term storage. If the specified WeekOfYear is in the past when the policy is configured, the first LTR backup will be created in the following year. Each backup will be kept in the long-term storage according to the policy parameters that are configured when the LTR backup is created.
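For instance, a minimal sketch of configuring such a policy with Azure PowerShell (placeholder resource names) keeps weekly backups for 12 weeks, monthly backups for 12 months, and the yearly backup taken in week 16 for five years:

```powershell
# A minimal sketch (placeholder resource names): keep weekly backups for 12 weeks,
# monthly backups for 12 months, and the backup taken in week 16 of each year for 5 years.
Set-AzSqlDatabaseBackupLongTermRetentionPolicy `
    -ResourceGroupName "resourceGroup" `
    -ServerName "testserver" `
    -DatabaseName "testDatabase" `
    -WeeklyRetention "P12W" `
    -MonthlyRetention "P12M" `
    -YearlyRetention "P5Y" `
    -WeekOfYear 16
```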
azure-video-analyzer Faq Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/faq-edge.md
- Title: FAQ about Azure Video Analyzer - Azure
-description: This article answers frequently asked questions about Azure Video Analyzer.
-- Previously updated : 03/26/2021--
-# FAQ about Azure Video Analyzer
-
-This article answers commonly asked questions about Azure Video Analyzer.
-
-## General
-
-**What system variables can I use in the pipeline topology definition?**
-
-| Variable | Description |
-| | |
-| System.Runtime.DateTime | Represents an instant in UTC time, typically expressed as a date and time of day in the following format:<br>*yyyyMMddTHHmmssZ* |
-| System.Runtime.PreciseDateTime | Represents a Coordinated Universal Time (UTC) date-time instance in an ISO8601 file-compliant format with milliseconds, in the following format:<br>*yyyyMMddTHHmmss.fffZ* |
-| System.TopologyName | Represents the name of the pipeline topology. |
-| System.PipelineName | Represents the name of the live pipeline. |
-
-> [!Note]
-> System.Runtime.DateTime and System.Runtime.PreciseDateTime cannot be used as part of the name of an Azure Video Analyzer video resource, in a video sink node. These variables can be used in a FileSink node, for naming the file.
-
-**What is the privacy policy for Video Analyzer?**
-
-Video Analyzer is covered by the [Microsoft Privacy Statement](https://privacy.microsoft.com/privacystatement). The privacy statement explains the personal data Microsoft processes, how Microsoft processes it, and for what purposes Microsoft processes it. To learn more about privacy, visit the [Microsoft Trust Center](https://www.microsoft.com/trustcenter).
-
-## Configuration and deployment
-
-**Can I deploy the edge module to a Windows 10 device?**
-
-Yes. For more information, see [Linux containers on Windows 10](/virtualization/windowscontainers/deploy-containers/linux-containers).
-
-## Capture from IP camera and RTSP settings
-
-**Do I need to use a special SDK on my device to send in a video stream?**
-
-No, Video Analyzer supports capturing media by using RTSP (Real-Time Streaming Protocol) for video streaming, which is supported on most IP cameras.
-
-**Can I push media to Video Analyzer by using protocols other than RTSP?**
-
-No, Video Analyzer supports only RTSP for capturing video from IP cameras. Any camera that supports RTSP streaming over TCP/HTTP should work.
-
-**Can I reset or update the RTSP source URL in a live pipeline?**
-
-Yes, when the live pipeline is in *inactive* state.
-
-**Is an RTSP simulator available to use during testing and development?**
-
-Yes, an [RTSP simulator]()<!--add-valid-link.md)--><!-- https://github.com/Azure/video-analyzer/tree/main/utilities/rtspsim-live555 --> edge module is available for use in the quickstarts and tutorials to support the learning process. This module is provided as best-effort and might not always be available. We recommend strongly that you *not* use the simulator for more than a few hours. You should invest in testing with your actual RTSP source before you plan a production deployment.
-
-## Design your AI model
-
-**I have multiple AI models wrapped in a Docker container. How should I use them with Azure Video Analyzer?**
-
-Solutions vary depending on the communication protocol that's used by the inferencing server to communicate with Azure Video Analyzer. The following sections describe how each protocol works.
-
-*Use the HTTP protocol*:
-
-* Single container (module named as *avaextension*):
-
- In your inferencing server, you can use a single port but different endpoints for different AI models. For example, for a Python sample you can use different `routes` per model as shown here:
-
- ```
- @app.route('/score/face_detection', methods=['POST'])
- …
- Your code specific to face detection model
-
- @app.route('/score/vehicle_detection', methods=['POST'])
- …
- Your code specific to vehicle detection model
- …
- ```
-
- And then in your Video Analyzer deployment, when you activate live pipelines set the inference server URL for each one as shown here:
-
- 1st live pipeline: inference server URL=`http://avaextension:44000/score/face_detection`<br/>
- 2nd live pipeline: inference server URL=`http://avaextension:44000/score/vehicle_detection`
-
- > [!NOTE]
- > Alternatively, you can expose your AI models on different ports and call them when you activate live pipelines.
-
-* Multiple containers:
-
- Each container is deployed with a different name. In the quickstarts and tutorials, we showed you how to deploy an extension named *avaextension*. Now you can develop two different containers, each with the same HTTP interface, which means they have the same `/score` endpoint. Deploy these two containers with different names, and ensure that both are listening on *different ports*.
-
- For example, one container named `avaextension1` is listening for the port `44000`, and a second container named `avaextension2` is listening for the port `44001`.
-
- In your Video Analyzer topology, you instantiate two live pipelines with different inference URLs, as shown here:
-
- 1st live pipeline: inference server URL = `http://avaextension1:44000/score`
- 2nd live pipeline: inference server URL = `http://avaextension2:44001/score`
-
-*Use the gRPC protocol*:
-
-* The gRPC extension node has a property `extensionConfiguration`, an optional string that can be used as a part of the gRPC contract. When you have multiple AI models packaged in a single inference server, you don't need to expose a node for every AI model. Instead, for a live pipeline, you, as the extension provider, can define how to select the different AI models by using the `extensionConfiguration` property. During execution Video Analyzer passes this string to your inferencing server, which can use it to invoke the desired AI model.
-
-**I'm building a gRPC server around an AI model, and I want to be able to support its use by multiple cameras or live pipelines. How should I build my server?**
-
- First, be sure that your server can either handle more than one request at a time or work in parallel threads.
-
-For example, a default number of parallel channels has been set in the following [Azure Video Analyzer gRPC sample]()<!--add-valid-link.md)--><!-- https://github.com/Azure/video-analyzer/tree/main/utilities/video-analysis/notebooks/Yolo/yolov3/yolov3-grpc-icpu-onnx/avaextension/server/server.py -->:
-
-```
-server = grpc.server(futures.ThreadPoolExecutor(max_workers=3))
-```
-
-In the preceding gRPC server instantiation, the server can open only three channels at a time per camera, or per live pipeline. Don't try to connect more than three instances to the server. If you do try to open more than three channels, requests will remain pending until an existing channel drops.
-
-The preceding gRPC server implementation is used in our Python samples. As a developer, you can implement your own server or use the preceding default implementation to increase the worker number, which you set to the number of cameras to use for video feeds.
-
-To set up and use multiple cameras, you can instantiate multiple live pipelines, each pointing to the same or a different inference server (for example, the server mentioned in the preceding paragraph).
-
-**I want to be able to receive multiple frames before I make an inferencing decision. How can I enable that?**
-
-Our current [default samples]()<!--add-valid-link.md)--><!--https://github.com/Azure/video-analyzer/tree/main/utilities/video-analysis--> work in a *stateless* mode. They don't keep the state of the previous calls, or the ID of the caller. This means that multiple live pipelines might call the same inference server, but the server can't distinguish who is calling or the state per caller.
-
-*Use the HTTP protocol*:
-
-To keep the state, each caller, or live pipeline, calls the inferencing server by using the HTTP query parameter that's unique to caller. For example, the inference server URL addresses for each live pipeline are shown here:
-
-1st live pipeline: `http://avaextension:44000/score?id=1`<br/>
-2nd live pipeline: `http://avaextension:44000/score?id=2`
-
-…
-
-On the server side, the `id` helps identify the caller. If `id`=1, then the server can keep the state separately for that live pipeline. It can then keep the received video frames in a buffer. For example, use an array, or a dictionary with a DateTime key, and the value is the frame. You can then define the server to process (infer) after *x* number of frames are received.
-
-*Use the gRPC protocol*:
-
-With a gRPC extension, each session is for a single camera feed, so there's no need to provide an identifier. With the extensionConfiguration property, you can store the video frames in a buffer and define the server to process (infer) after *x* number of frames are received.
-
-**Do all ProcessMediaStreams on a particular container run the same AI model?**
-
-No. Start or stop calls from the end user in a live pipeline constitute a session, or perhaps there's a camera disconnect or reconnect. The goal is to persist one session if the camera is streaming video.
-
-* Two cameras sending video for processing (to two separate live pipelines) creates two sessions.
-* One camera going to a live pipeline that has two gRPC extension nodes creates two sessions.
-
-Each session is a full duplex connection between Video Analyzer and the gRPC server, and each session can have a different model.
-
-> [!NOTE]
-> In case of a camera disconnect or reconnect, with the camera going offline for a period beyond tolerance limits, Video Analyzer will open a new session with the gRPC server. There's no requirement for the server to track the state across these sessions.
-
-Video Analyzer also adds support for multiple gRPC extensions for a single camera in a live pipeline. You can use these gRPC extensions to carry out AI processing sequentially, in parallel, or as a combination of both.
-
-> [!NOTE]
-> Having multiple extensions run in parallel will affect your hardware resources. Keep this in mind as you're choosing the hardware that suits your computational needs.
-
-**What is the maximum number of simultaneous ProcessMediaStreams?**
-
-Video Analyzer applies no limits to this number.
-
-**How can I decide whether my inferencing server should use CPU or GPU or any other hardware accelerator?**
-
-Your decision depends on the complexity of the developed AI model and how you want to use the CPU and hardware accelerators. As you're developing the AI model, you can specify what resources the model should use and what actions it should perform.
-
-**How do I view the bounding boxes generated by my inference server?**
-
-You can record the inference results along with the media in your video resource. You can use a [widget]()<!--add-valid-link.md)--><!-- pointer to widget md --> to play back the video with an overlay of the inference data.
-
-## gRPC compatibility
-
-**How will I know what the mandatory fields for the media stream descriptor are?**
-
-Any field that you don't supply a value to is given a [default value, as specified by gRPC](https://developers.google.com/protocol-buffers/docs/proto3#default).
-
-Video Analyzer uses the *proto3* version of the protocol buffer language. All the protocol buffer data that's used by Video Analyzer contracts is available in the [protocol buffer files]()<!--add-valid-link.md)--><!--https://github.com/Azure/azree-video-analyzer/tree/master/contracts/grpc-->.
-
-**How can I ensure that I'm using the latest protocol buffer files?**
-
-You can obtain the latest protocol buffer files on the [contract files site]()<!--add-valid-link.md)--><!--https://github.com/Azure/azree-video-analyzer/tree/master/contracts/grpc-->. Whenever we update the contract files, they'll be in this location. There's no immediate plan to update the protocol files, so look for the package name at the top of the files to know the version. It should read:
-
-```
-microsoft.azure.media.live_video_analytics.extensibility.grpc.v1
-```
-
-Any updates to these files will increment the "v-value" at the end of the name.
-
-> [!NOTE]
-> Because Video Analyzer uses the proto3 version of the language, the fields are optional, and the version is backward and forward compatible.
-
-**What gRPC features are available for me to use with Video Analyzer? Which features are mandatory and which are optional?**
-
-You can use any server-side gRPC features, provided that the Protocol Buffers (Protobuf) contract is fulfilled.
-
-## Monitoring and metrics
-
-**Can I monitor the pipeline on the edge by using Azure Event Grid?**
-
-Yes. You can consume [Prometheus metrics](monitor-log-edge.md#azure-monitor-collection-via-telegraf) and publish them to your event grid.
-
-**Can I use Azure Monitor to view the health, metrics, and performance of my pipelines in the cloud or on the edge?**
-
-Yes, we support this approach. To learn more, see [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md).
-
-**Are there any tools to make it easier to monitor the Azure Video Analyzer IoT Edge module?**
-
-Visual Studio Code supports the Azure IoT Tools extension, with which you can easily monitor the Video Analyzer edge module endpoints. You can use this tool to quickly start monitoring your IoT hub built-in endpoint for "events" and view the inference messages that are routed from the edge device to the cloud.
-
-In addition, you can use this extension to edit the module twin for the Video Analyzer edge module to modify the pipeline settings.
-
-For more information, see the [monitoring and logging](monitor-log-edge.md) article.
-
-## Billing and availability
-
-**How is Azure Video Analyzer billed?**
-
-For billing details, see [Video Analyzer pricing]()<!--add-valid-link.md)--><!--https://azure.microsoft.com/pricing/details/media-services/-->.
-
-## Next steps
-
-[Quickstart: Get started with Azure Video Analyzer](get-started-detect-motion-emit-events.md)
azure-video-analyzer Get Started Detect Motion Emit Events Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/get-started-detect-motion-emit-events-portal.md
Last updated 05/25/2021
# Quickstart: Get started with Azure Video Analyzer in the Azure portal
-This quickstart walks you through the steps to get started with Azure Video Analyzer. You'll create an Azure Video Analyzer account and its accompanying resources by using the Azure portal. You'll then deploy the Video Analyzer edge module and a Real Time Streaming Protocol (RTSP) camera simulator module to your Azure IoT Edge device.
+
+This quickstart walks you through the steps to get started with Azure Video Analyzer. You'll create an Azure Video Analyzer account and its accompanying resources by using the Azure portal. You'll then deploy the Video Analyzer edge module and a Real Time Streaming Protocol (RTSP) camera simulator module to your Azure IoT Edge device.
After you complete the setup steps, you'll be able to run the simulated live video stream through a pipeline that detects and reports any motion in that stream. The following diagram graphically represents that pipeline.
After you complete the setup steps, you'll be able to run the simulated live vid
## Prerequisites
-* An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
[!INCLUDE [the video analyzer account and storage account must be in the same subscription and region](./includes/note-account-storage-same-subscription.md)]
-* An IoT Edge device on which you have admin privileges:
- * [Deploy to an IoT Edge device](deploy-iot-edge-device.md)
- * [Deploy to an IoT Edge for Linux on Windows](deploy-iot-edge-linux-on-windows.md)
-* [Visual Studio Code](https://code.visualstudio.com/), with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) extension.
+
+- An IoT Edge device on which you have admin privileges:
+ - [Deploy to an IoT Edge device](deploy-iot-edge-device.md)
+ - [Deploy to an IoT Edge for Linux on Windows](deploy-iot-edge-linux-on-windows.md)
+- [Visual Studio Code](https://code.visualstudio.com/), with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) extension.
[!INCLUDE [install-docker-prompt](./includes/common-includes/install-docker-prompt.md)] ## Prepare your IoT Edge device+ The Azure Video Analyzer module should be configured to run on the IoT Edge device with a non-privileged local user account. The module needs certain local folders for storing application configuration data. The RTSP camera simulator module needs video files with which it can synthesize a live video feed. Run the following command on your IoT Edge device: `bash -c "$(curl -sL https://aka.ms/ava-edge/prep_device)"`
-The prep-device script in that command automates the tasks of creating input and configuration folders, downloading video input files, and creating user accounts with correct privileges. After the command finishes successfully, you should see the following folders created on your edge device:
+The prep-device script in that command automates the tasks of creating input and configuration folders, downloading video input files, and creating user accounts with correct privileges. After the command finishes successfully, you should see the following folders created on your edge device:
-* */home/localedgeuser/samples*
-* */home/localedgeuser/samples/input*
-* */var/lib/videoanalyzer*
-* */var/media*
+- _/home/localedgeuser/samples_
+- _/home/localedgeuser/samples/input_
+- _/var/lib/videoanalyzer_
+- _/var/media_
-The video (*.mkv) files in the */home/localedgeuser/samples/input* folder are used to simulate live video.
+The video (*.mkv) files in the _/home/localedgeuser/samples/input_ folder are used to simulate live video.
## Create Azure resources+ The next step is to create the required Azure resources (Video Analyzer account, storage account, and user-assigned managed identity). Then you can create an optional container registry and register a Video Analyzer edge module with the Video Analyzer account. When you create an Azure Video Analyzer account, you have to associate an Azure storage account with it. If you use Video Analyzer to record the live video from a camera, that data is stored as blobs in a container in the storage account. You must use a managed identity to grant the Video Analyzer account the appropriate access to the storage account as follows.
When you create an Azure Video Analyzer account, you have to associate an Azure
1. Select **Video Analyzers** under **Services**. 1. Select **Add**. 1. In the **Create Video Analyzer account** section, enter these required values:
- - **Subscription**: Choose the subscription that you're using to create the Video Analyzer account.
- - **Resource group**: Choose a resource group where you're creating the Video Analyzer account, or select **Create new** to create a resource group.
- - **Video Analyzer account name**: Enter a name for your Video Analyzer account. The name must be all lowercase letters or numbers with no spaces, and 3 to 24 characters in length.
- - **Location**: Choose a location to deploy your Video Analyzer account (for example, **West US 2**).
- - **Storage account**: Create a storage account. We recommend that you select a [standard general-purpose v2](../../storage/common/storage-account-overview.md#types-of-storage-accounts) storage account.
- - **User identity**: Create and name a new user-assigned managed identity.
+
+ - **Subscription**: Choose the subscription that you're using to create the Video Analyzer account.
+ - **Resource group**: Choose a resource group where you're creating the Video Analyzer account, or select **Create new** to create a resource group.
+ - **Video Analyzer account name**: Enter a name for your Video Analyzer account. The name must be all lowercase letters or numbers with no spaces, and 3 to 24 characters in length.
+ - **Location**: Choose a location to deploy your Video Analyzer account (for example, **West US 2**).
+ - **Storage account**: Create a storage account. We recommend that you select a [standard general-purpose v2](../../storage/common/storage-account-overview.md#types-of-storage-accounts) storage account.
+ - **User identity**: Create and name a new user-assigned managed identity.
1. Select **Review + create** at the bottom of the form. ### Create a container registry+ 1. Select **Create a resource** > **Containers** > **Container Registry**. 1. On the **Basics** tab, enter values for **Resource group** and **Registry name**. Use the same resource group from the previous sections. The registry name must be unique within Azure and contain 5 to 50 alphanumeric characters. 1. Accept default values for the remaining settings. Then select **Review + create**. After you review the settings, select **Create**.
When you create an Azure Video Analyzer account, you have to associate an Azure
1. Select **Edge Modules** in the **Edge** pane. 1. Select **Add edge modules**, enter **avaedge** as the name for the new edge module, and select **Add**. 1. The **Copy the provisioning token** page appears on the right side of your screen. Copy the following snippet under **Recommended desired properties for IoT module deployment**. You'll need it in a later step.
- ```JSON
- {
- "applicationDataDirectory": "/var/lib/videoanalyzer",
- "ProvisioningToken": "XXXXXXX",
- "diagnosticsEventsOutputName": "diagnostics",
- "operationalEventsOutputName": "operational",
- "logLevel": "information",
- "LogCategories": "Application,Events",
- "allowUnsecuredEndpoints": true,
- "telemetryOptOut": false
- }
- ```
+ ```JSON
+ {
+ "applicationDataDirectory": "/var/lib/videoanalyzer",
+ "ProvisioningToken": "XXXXXXX",
+ "diagnosticsEventsOutputName": "diagnostics",
+ "operationalEventsOutputName": "operational",
+ "logLevel": "information",
+ "LogCategories": "Application,Events",
+ "allowUnsecuredEndpoints": true,
+ "telemetryOptOut": false
+ }
+ ```
1. Go to your Azure IoT Hub account. 1. Select **IoT Edge** under **Automatic Device Management**. 1. Select the **Device ID** value for your IoT Edge device.
When you create an Azure Video Analyzer account, you have to associate an Azure
1. Select **Environment Variables**. 1. Under **NAME**, enter **LOCAL_USER_ID**. Under **VALUE**, enter **1010**. 1. On the second row under **NAME**, enter **LOCAL_GROUP_ID**. Under **VALUE**, enter **1010**.
-1. Select **Container Create Options** and copy and paste the following lines:
- ```json
- {
- "HostConfig": {
- "LogConfig": {
- "Type": "",
- "Config": {
- "max-size": "10m",
- "max-file": "10"
- }
- },
- "Binds": [
- "/var/media/:/var/media/",
- "/var/lib/videoanalyzer/:/var/lib/videoanalyzer"
- ],
- "IpcMode": "host",
- "ShmSize": 1536870912
- }
- }
- ```
+1. Select **Container Create Options** and copy and paste the following lines:
+ ```json
+ {
+ "HostConfig": {
+ "LogConfig": {
+ "Type": "",
+ "Config": {
+ "max-size": "10m",
+ "max-file": "10"
+ }
+ },
+ "Binds": [
+ "/var/media/:/var/media/",
+ "/var/lib/videoanalyzer/:/var/lib/videoanalyzer"
+ ],
+ "IpcMode": "host",
+ "ShmSize": 1536870912
+ }
+ }
+ ```
1. Select **Module Twin Settings** and paste the snippet that you copied earlier from the **Copy the provisioning token** page in the Video Analyzer account.
- ```JSON
- {
- "applicationDataDirectory": "/var/lib/videoanalyzer",
- "ProvisioningToken": "XXXXXXX",
- "diagnosticsEventsOutputName": "diagnostics",
- "operationalEventsOutputName": "operational",
- "logLevel": "information",
- "LogCategories": "Application,Events",
- "allowUnsecuredEndpoints": true,
- "telemetryOptOut": false
- }
- ```
+ ```JSON
+ {
+ "applicationDataDirectory": "/var/lib/videoanalyzer",
+ "ProvisioningToken": "XXXXXXX",
+ "diagnosticsEventsOutputName": "diagnostics",
+ "operationalEventsOutputName": "operational",
+ "logLevel": "information",
+ "LogCategories": "Application,Events",
+ "allowUnsecuredEndpoints": true,
+ "telemetryOptOut": false
+ }
+ ```
1. Select **Add** at the bottom of your screen. 1. Select **Routes**. 1. Under **NAME**, enter **AVAToHub**. Under **VALUE**, enter `FROM /messages/modules/avaedge/outputs/* INTO $upstream`. 1. Select **Review + create**, and then select **Create** to deploy your **avaedge** edge module. ### Deploy the edge module for the RTSP camera simulator+ 1. Go to your IoT Hub account. 1. Select **IoT Edge** under **Automatic Device Management**. 1. Select the **Device ID** value for your IoT Edge device. 1. Select **Set modules**. 1. Select **Add**, and then select **IoT Edge Module** from the dropdown menu. 1. Enter **rtspsim** for **IoT Edge Module Name**.
-1. Copy and paste the following line into the **Image URI** field: `mcr.microsoft.com/lva-utilities/rtspsim-live555:1.2`.
-1. Select **Container Create Options** and copy and paste the following lines:
- ```json
- {
- "HostConfig": {
- "Binds": [
- "/home/localedgeuser/samples/input:/live/mediaServer/media"
- ]
- }
- }
- ```
+1. Copy and paste the following line into the **Image URI** field: `mcr.microsoft.com/ava-utilities/rtspsim-live555:1.2`.
+1. Select **Container Create Options** and copy and paste the following lines:
+ ```json
+ {
+ "HostConfig": {
+ "Binds": ["/home/localedgeuser/samples/input:/live/mediaServer/media"]
+ }
+ }
+ ```
1. Select **Add** at the bottom of your screen. 1. Select **Review + create**, and then select **Create** to deploy your **rtspsim** edge module. ### Verify your deployment
-On the device details page, verify that the **avaedge** and **rtspsim** modules are listed as both **Specified in Deployment** and **Reported by Device**.
+On the device details page, verify that the **avaedge** and **rtspsim** modules are listed as both **Specified in Deployment** and **Reported by Device**.
-It might take a few moments for the modules to be started on the device and then reported back to IoT Hub. Refresh the page to see an updated status. Status code **200 -- OK** means that [the IoT Edge runtime](../../iot-edge/iot-edge-runtime.md) is healthy and is operating fine.
+It might take a few moments for the modules to be started on the device and then reported back to IoT Hub. Refresh the page to see an updated status. Status code **200 -- OK** means that [the IoT Edge runtime](../../iot-edge/iot-edge-runtime.md) is healthy and is operating fine.
![Screenshot that shows a status value for an IoT Edge runtime.](./media/deploy-iot-edge-device/status.png) - ## Set up your development environment ### Obtain your IoT Hub connection string
It might take a few moments for the modules to be started on the device and then
1. Select the **More Options** icon to see the context menu. Then select **Set IoT Hub Connection String**. 1. When an input box appears, enter your IoT Hub connection string. 1. In about 30 seconds, refresh Azure IoT Hub in the lower-left section. You should see your device ID, which should have the following modules deployed:
- * Video Analyzer edge module (module name **avaedge**)
- * RTSP simulator (module name **rtspsim**)
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/get-started-detect-motion-emit-events/modules-node.png" alt-text="Screenshot that shows the expanded Modules node.":::
+ - Video Analyzer edge module (module name **avaedge**)
+ - RTSP simulator (module name **rtspsim**)
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/get-started-detect-motion-emit-events/modules-node.png" alt-text="Screenshot that shows the expanded Modules node.":::
> [!TIP] > If you have [manually deployed Video Analyzer](deploy-iot-edge-device.md) on an edge device (such as an ARM64 device), the module will appear under that device, under Azure IoT Hub. You can select that module and continue with the following steps.
-### Prepare to monitor the modules
+### Prepare to monitor the modules
When you use this quickstart, events will be sent to IoT Hub. To see these events, follow these steps: 1. In Visual Studio Code, open the **Extensions** tab (or select Ctrl+Shift+X) and search for **Azure IoT Hub**. 1. Right-click the IoT Hub extension and select **Extension Settings**.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/get-started-detect-motion-emit-events/extension-settings.png" alt-text="Screenshot that shows the selection of Extension Settings.":::
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/get-started-detect-motion-emit-events/extension-settings.png" alt-text="Screenshot that shows the selection of Extension Settings.":::
+ 1. Search for and enable **Show Verbose Message**.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/get-started-detect-motion-emit-events/verbose-message.png" alt-text="Screenshot of Show Verbose Message enabled.":::
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/get-started-detect-motion-emit-events/verbose-message.png" alt-text="Screenshot of Show Verbose Message enabled.":::
+ 1. Open the **Explorer** pane in Visual Studio Code, and look for **Azure IoT Hub** in the lower-left corner. 1. Expand the **Devices** node. 1. Right-click your device ID, and select **Start Monitoring Built-in Event Endpoint**.
- > [!NOTE]
- > You might be asked to provide built-in endpoint information for IoT Hub. To get that information, in the Azure portal, go to your IoT Hub account and look for **Built-in endpoints** in the left pane. Select it and look for the **Event Hub-compatible endpoint** section. Copy and use the text in the box. The endpoint will look something like this:
- >
- > ```
- > Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
- > ```
+ > [!NOTE]
+ > You might be asked to provide built-in endpoint information for IoT Hub. To get that information, in the Azure portal, go to your IoT Hub account and look for **Built-in endpoints** in the left pane. Select it and look for the **Event Hub-compatible endpoint** section. Copy and use the text in the box. The endpoint will look something like this:
+ >
+ > ```
+ > Endpoint=sb://iothub-ns-xxx.servicebus.windows.net/;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX;EntityPath=<IoT Hub name>
+ > ```
## Use direct method calls
-You can now analyze live video streams by invoking direct methods that the Video Analyzer edge module exposes. Read [Video Analyzer direct methods](direct-methods.md) to examine all the direct methods that the module provides.
+You can now analyze live video streams by invoking direct methods that the Video Analyzer edge module exposes. Read [Video Analyzer direct methods](direct-methods.md) to examine all the direct methods that the module provides.
### Enumerate pipeline topologies
This step enumerates all the [pipeline topologies](pipeline.md) in the module.
1. Right-click the **avaedge** module and select **Invoke Module Direct Method** from the shortcut menu. 1. Type **pipelineTopologyList** in the edit box and select the Enter key. 1. Copy the following JSON payload and paste it in the edit box, and then select the Enter key.
-
- ```json
- {
- "@apiVersion" : "1.0"
- }
- ```
+
+ ```json
+ {
+ "@apiVersion": "1.0"
+ }
+ ```
Within a few seconds, the following response appears in the **OUTPUT** window:
-
+ ``` [DirectMethod] Invoking Direct Method [pipelineTopologyList] to [deviceId/avaedge] ... [DirectMethod] Response from [deviceId/avaedge]:
That response is expected, because no pipeline topologies have been created.
### Set a pipeline topology
-By using the same steps described earlier, you can invoke `pipelineTopologySet` to set a pipeline topology by using the following JSON as the payload. You'll create a pipeline topology named *MotionDetection*.
-
+By using the same steps described earlier, you can invoke `pipelineTopologySet` to set a pipeline topology by using the following JSON as the payload. You'll create a pipeline topology named _MotionDetection_.
```json {
- "@apiVersion": "1.0",
- "name": "MotionDetection",
- "properties": {
- "description": "Analyzing live video to detect motion and emit events",
- "parameters": [
- {
- "name": "rtspUrl",
- "type": "string",
- "description": "rtspUrl"
- },
- {
- "name": "rtspUserName",
- "type": "string",
- "description": "rtspUserName",
- "default": "dummyUserName"
- },
- {
- "name": "rtspPassword",
- "type": "string",
- "description": "rtspPassword",
- "default": "dummypw"
- }
- ],
- "sources": [
- {
- "@type": "#Microsoft.VideoAnalyzer.RtspSource",
- "name": "rtspSource",
- "transport": "tcp",
- "endpoint": {
- "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
- "credentials": {
- "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
- "username": "${rtspUserName}",
- "password": "${rtspPassword}"
- },
- "url": "${rtspUrl}"
- }
- }
- ],
- "processors": [
- {
- "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
- "sensitivity": "medium",
- "name": "motionDetection",
- "inputs": [
- {
- "nodeName": "rtspSource",
- "outputSelectors": []
- }
- ]
- }
- ],
- "sinks": [
- {
- "hubOutputName": "inferenceOutput",
- "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
- "name": "iotHubSink",
- "inputs": [
- {
- "nodeName": "motionDetection"
- }
- ]
- }
+ "@apiVersion": "1.0",
+ "name": "MotionDetection",
+ "properties": {
+ "description": "Analyzing live video to detect motion and emit events",
+ "parameters": [
+ {
+ "name": "rtspUrl",
+ "type": "string",
+ "description": "rtspUrl"
+ },
+ {
+ "name": "rtspUserName",
+ "type": "string",
+ "description": "rtspUserName",
+ "default": "dummyUserName"
+ },
+ {
+ "name": "rtspPassword",
+ "type": "string",
+ "description": "rtspPassword",
+ "default": "dummypw"
+ }
+ ],
+ "sources": [
+ {
+ "@type": "#Microsoft.VideoAnalyzer.RtspSource",
+ "name": "rtspSource",
+ "transport": "tcp",
+ "endpoint": {
+ "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
+ "credentials": {
+ "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
+ "username": "${rtspUserName}",
+ "password": "${rtspPassword}"
+ },
+ "url": "${rtspUrl}"
+ }
+ }
+ ],
+ "processors": [
+ {
+ "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
+ "sensitivity": "medium",
+ "name": "motionDetection",
+ "inputs": [
+ {
+ "nodeName": "rtspSource",
+ "outputSelectors": []
+ }
]
- }
+ }
+ ],
+ "sinks": [
+ {
+ "hubOutputName": "inferenceOutput",
+ "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
+ "name": "iotHubSink",
+ "inputs": [
+ {
+ "nodeName": "motionDetection"
+ }
+ ]
+ }
+ ]
+ }
} ```
The returned status is 201. This status indicates that a new topology was create
Try the following next steps:
-* Invoke `pipelineTopologySet` again. The returned status code is 200. This code indicates that an existing topology was successfully updated.
-* Invoke `pipelineTopologySet` again, but change the description string. The returned status code is 200, and the description is updated to the new value.
-* Invoke `pipelineTopologyList` as outlined in the previous section. Now you can see the *MotionDetection* topology in the returned payload.
+- Invoke `pipelineTopologySet` again. The returned status code is 200. This code indicates that an existing topology was successfully updated.
+- Invoke `pipelineTopologySet` again, but change the description string. The returned status code is 200, and the description is updated to the new value.
+- Invoke `pipelineTopologyList` as outlined in the previous section. Now you can see the _MotionDetection_ topology in the returned payload.
### Read the pipeline topology
Invoke `pipelineTopologyGet` by using the following payload:
```json {
- "@apiVersion" : "1.0",
- "name" : "MotionDetection"
+ "@apiVersion": "1.0",
+ "name": "MotionDetection"
} ```
Within a few seconds, the following response appears in the **OUTPUT** window:
In the response payload, notice these details:
-* The status code is 200, indicating success.
-* The payload includes the `createdAt` time stamp and the `lastModifiedAt` time stamp.
+- The status code is 200, indicating success.
+- The payload includes the `createdAt` time stamp and the `lastModifiedAt` time stamp, as in the sketch below.
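For illustration, a response of that shape might look like the following sketch. The envelope (`status` plus `payload`) mirrors the other direct method responses in this article, but the placement of the time stamps under `systemData` is an assumption, and the `properties` object (which would contain the full topology shown earlier) is shortened here to keep the sketch brief.

```json
{
  "status": 200,
  "payload": {
    "systemData": {
      "createdAt": "2021-07-20T18:00:00.000Z",
      "lastModifiedAt": "2021-07-20T18:00:00.000Z"
    },
    "name": "MotionDetection",
    "properties": {
      "description": "Analyzing live video to detect motion and emit events"
    }
  }
}
```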
### Create a live pipeline by using the topology
Next, create a live pipeline that references the preceding pipeline topology. Invoke the `livePipelineSet` direct method with the following payload:
```json {
- "@apiVersion" : "1.0",
- "name": "mdpipeline1",
- "properties": {
- "topologyName": "MotionDetection",
- "description": "Sample pipeline description",
- "parameters": [
- {
- "name": "rtspUrl",
- "value": "rtsp://rtspsim:554/media/camera-300s.mkv"
- },
- {
- "name": "rtspUserName",
- "value": "testuser"
- },
- {
- "name": "rtspPassword",
- "value": "testpassword"
- }
- ]
- }
+ "@apiVersion": "1.0",
+ "name": "mdpipeline1",
+ "properties": {
+ "topologyName": "MotionDetection",
+ "description": "Sample pipeline description",
+ "parameters": [
+ {
+ "name": "rtspUrl",
+ "value": "rtsp://rtspsim:554/media/camera-300s.mkv"
+ },
+ {
+ "name": "rtspUserName",
+ "value": "testuser"
+ },
+ {
+ "name": "rtspPassword",
+ "value": "testpassword"
+ }
+ ]
+ }
} ``` Notice that this payload:
-* Specifies the topology (*MotionDetection*) that the live pipeline will use.
-* Contains a parameter value for `rtspUrl`, which did not have a default value in the topology payload. This value is a link to the following sample video:
+- Specifies the topology (_MotionDetection_) that the live pipeline will use.
+- Contains a parameter value for `rtspUrl`, which did not have a default value in the topology payload. This value is a link to the following sample video:
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4LTY4]

Within a few seconds, the following response appears in the **OUTPUT** window:
In the response payload, notice that:
-* The status code is 201, indicating a new live pipeline was created.
-* The state is `Inactive`, indicating that the live pipeline was created but not activated. For more information, see [Pipeline states](pipeline.md#pipeline-states).
+- The status code is 201, indicating a new live pipeline was created.
+- The state is `Inactive`, indicating that the live pipeline was created but not activated. For more information, see [Pipeline states](pipeline.md#pipeline-states).
Try the following direct methods as next steps:
-* Invoke `livePipelineSet` again with the same payload. Note that the returned status code is now 200.
-* Invoke `livePipelineSet` again but with a different description. Note the updated description in the response payload, indicating that the live pipeline was successfully updated.
-* Invoke `livePipelineSet`, but change the name to `mdpipeline2` and change `rtspUrl` to `rtsp://rtspsim:554/media/lots_015.mkv`. In the response payload, note the newly created live pipeline (that is, status code 201).
-
+- Invoke `livePipelineSet` again with the same payload. Note that the returned status code is now 200.
+- Invoke `livePipelineSet` again but with a different description. Note the updated description in the response payload, indicating that the live pipeline was successfully updated.
+- Invoke `livePipelineSet`, but change the name to `mdpipeline2` and change `rtspUrl` to `rtsp://rtspsim:554/media/lots_015.mkv`. In the response payload, note the newly created live pipeline (that is, status code 201).
+ > [!NOTE] > As explained in [Pipeline topologies](pipeline.md#pipeline-topologies), you can create multiple live pipelines to analyze live video streams from many cameras by using the same pipeline topology. If you create more live pipelines, take care to delete them during the cleanup step.
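For example, a `livePipelineSet` payload for a second pipeline that reuses the _MotionDetection_ topology might look like the following sketch. The name and RTSP URL come from the last step above; the remaining values mirror _mdpipeline1_.

```json
{
  "@apiVersion": "1.0",
  "name": "mdpipeline2",
  "properties": {
    "topologyName": "MotionDetection",
    "description": "Sample pipeline description",
    "parameters": [
      {
        "name": "rtspUrl",
        "value": "rtsp://rtspsim:554/media/lots_015.mkv"
      },
      {
        "name": "rtspUserName",
        "value": "testuser"
      },
      {
        "name": "rtspPassword",
        "value": "testpassword"
      }
    ]
  }
}
```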
You can activate the live pipeline to start the flow of (simulated) live video through the pipeline. Invoke the `livePipelineActivate` direct method with the following payload:
```json {
- "@apiVersion" : "1.0",
- "name" : "mdpipeline1"
+ "@apiVersion": "1.0",
+ "name": "mdpipeline1"
} ```
Invoke the `livePipelineGet` direct method with the following payload:
```json {
- "@apiVersion" : "1.0",
- "name" : "mdpipeline1"
+ "@apiVersion": "1.0",
+ "name": "mdpipeline1"
} ```
Within a few seconds, the following response appears in the **OUTPUT** window:
In the response payload, notice the following details:
-* The status code is 200, indicating success.
-* The state is `Active`, indicating that the live pipeline is now active.
+- The status code is 200, indicating success.
+- The state is `Active`, indicating that the live pipeline is now active.
## Observe results

The live pipeline that you created and activated uses the motion detection processor node to detect motion in the incoming live video stream and sends events to the IoT Hub sink. These events are then relayed to IoT Hub as messages, which can now be observed. Messages in the **OUTPUT** window will have the following "body":
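A body of that shape might look like the following sketch. The `timestamp` value matches the example above, while the inference field names and bounding-box numbers are illustrative assumptions rather than an exact schema.

```json
{
  "timestamp": 145471641211899,
  "inferences": [
    {
      "type": "motion",
      "motion": {
        "box": {
          "l": 0.48,
          "t": 0.32,
          "w": 0.15,
          "h": 0.24
        }
      }
    }
  ]
}
```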
The `inferences` section indicates that the type is motion. It provides more data about the motion event. It also provides a bounding box for the region of the video frame (at the given time stamp) where motion was detected.
-
## Invoke more direct method calls to clean up Next, you can invoke direct methods to deactivate and delete the live pipeline (in that order).
Invoke the `livePipelineDeactivate` direct method with the following payload:
```json {
- "@apiVersion" : "1.0",
- "name" : "mdpipeline1"
+ "@apiVersion": "1.0",
+ "name": "mdpipeline1"
} ```
Within a few seconds, the following response appears in the **OUTPUT** window:
} ```
-The status code of 200 indicates that the live pipeline was successfully deactivated.
+The status code of 200 indicates that the live pipeline was successfully deactivated.
Next, try to invoke `livePipelineGet` as indicated previously in this article. Observe the state value.
Invoke the direct method `livePipelineDelete` with the following payload:
```json {
- "@apiVersion" : "1.0",
- "name" : "mdpipeline1"
+ "@apiVersion": "1.0",
+ "name": "mdpipeline1"
} ```
Within a few seconds, the following response appears in the **OUTPUT** window:
"payload": null } ```+ A status code of 200 indicates that the live pipeline was successfully deleted.
-If you also created the pipeline called *mdpipeline2*, then you can't delete the pipeline topology without also deleting this additional pipeline. Invoke the direct method `livePipelineDelete` again by using the following payload:
+If you also created the pipeline called _mdpipeline2_, then you can't delete the pipeline topology without also deleting this additional pipeline. Invoke the direct method `livePipelineDelete` again by using the following payload:
``` {
After all live pipelines have been deleted, you can invoke the `pipelineTopologyDelete` direct method with the following payload:
```json {
- "@apiVersion" : "1.0",
- "name" : "MotionDetection"
+ "@apiVersion": "1.0",
+ "name": "MotionDetection"
} ```
You can try to invoke `pipelineTopologyList` and observe that the module contains no pipeline topologies.
## Clean up resources [!INCLUDE [prerequisites](./includes/common-includes/clean-up-resources.md)]
-
+ ## Next steps
-* Try the [quickstart for recording videos to the cloud when motion is detected](detect-motion-record-video-clips-cloud.md).
-* Try the [quickstart for analyzing live video](analyze-live-video-use-your-model-http.md).
-* Learn more about [diagnostic messages](monitor-log-edge.md).
+- Try the [quickstart for recording videos to the cloud when motion is detected](detect-motion-record-video-clips-cloud.md).
+- Try the [quickstart for analyzing live video](analyze-live-video-use-your-model-http.md).
+- Learn more about [diagnostic messages](monitor-log-edge.md).
azure-video-analyzer Monitor Log Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/monitor-log-edge.md
You can stop log collection by setting the value in **Module Identity Twin** to
## FAQ
-If you have questions, see the [monitoring and metrics FAQ](faq-edge.md#monitoring-and-metrics).
+If you have questions, see the [monitoring and metrics FAQ](faq-edge.yml#monitoring-and-metrics).
## Next steps
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-security-integration.md
After connecting data sources to Azure Sentinel, you can create rules to generat
6. On the **Incident settings** tab, enable **Create incidents from alerts triggered by this analytics rule** and select **Next: Automated response**.
- :::image type="content" source="media/azure-security-integration/create-new-analytic-rule-wizard.png" alt-text="Screenshot showing the Analytic rule wizard for creating a new rule in Azure Sentinel.":::
+ :::image type="content" source="../sentinel/media/tutorial-detect-threats-custom/general-tab.png" alt-text="Screenshot showing the Analytic rule wizard for creating a new rule in Azure Sentinel.":::
7. Select **Next: Review**.
You can create queries or use the available pre-defined query in Azure Sentinel
>[!TIP] >You can also create a new query by selecting **New Query**. >
- >:::image type="content" source="media/azure-security-integration/create-new-query.png" alt-text="Screenshot of Azure Sentinel Hunting page with + New Query highlighted.":::
+ >:::image type="content" source="../sentinel/media/hunting/save-query.png" alt-text="Screenshot of Azure Sentinel Hunting page with + New Query highlighted.":::
3. Select a query and then select **Run Query**.
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Title: Platform updates for Azure VMware Solution description: Learn about the platform updates to Azure VMware Solution. Previously updated : 05/26/2021 Last updated : 07/20/2021 # Platform updates for Azure VMware Solution Azure VMware Solution will apply important updates starting in March 2021. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
+## July 20, 2021
+
+All new Azure VMware Solution private clouds are now deployed with NSX-T version 3.1.2. The NSX-T version in existing private clouds will be upgraded to the NSX-T 3.1.2 release through September 2021.
+
+You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
+
+For more information on this NSX-T version, see [VMware NSX-T Data Center 3.1.2 Release Notes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/VMware-NSX-T-Data-Center-312-Release-Notes.html).
++ ## May 25, 2021 Per VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), multiple vulnerabilities in VMware ESXi and vSphere Client (HTML5) have been reported to VMware.
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
Title: Configure DHCP for Azure VMware Solution description: Learn how to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server. -+ Last updated 07/13/2021 # Customer intent: As an Azure service administrator, I want to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server.
azure-vmware Configure L2 Stretched Vmware Hcx Networks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-l2-stretched-vmware-hcx-networks.md
Title: Configure DHCP on L2 stretched VMware HCX networks description: Learn how to send DHCP requests from your Azure VMware Solution VMs to a non-NSX-T DHCP server. + Last updated 05/28/2021 # Customer intent: As an Azure service administrator, I want to configure DHCP on L2 stretched VMware HCX networks to send DHCP requests from my Azure VMware Solution VMs to a non-NSX-T DHCP server.
azure-vmware Configure Site To Site Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-site-to-site-vpn-gateway.md
Title: Configure a site-to-site VPN in vWAN for Azure VMware Solution description: Learn how to establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel into Azure VMware Solutions. + Last updated 06/30/2021
A virtual hub is a virtual network that is created and used by Virtual WAN. It's
* **Connected**: Connectivity is established between Azure VPN gateway and on-premises VPN site. * **Disconnected**: Status is seen if, for any reason (on-premises or in Azure), the connection was disconnected. ++ 1. Download the VPN configuration file and apply it to the on-premises endpoint.
- 1. On the VPN (Site to site) page, near the top, select **Download VPN Config**. Azure creates a storage account in the resource group 'microsoft-network-[location]', where location is the location of the WAN. After you have applied the configuration to your VPN devices, you can delete this storage account.
+ 1. On the VPN (Site to site) page, near the top, select **Download VPN Config**. Azure creates a storage account in the resource group 'microsoft-network-\[location\]', where location is the location of the WAN. After you have applied the configuration to your VPN devices, you can delete this storage account.
1. Once the configuration file is created, select the link to download it.
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-azure-vmware-solution.md
Title: Deploy and configure Azure VMware Solution description: Learn how to use the information gathered in the planning stage to deploy and configure the Azure VMware Solution private cloud. -+ Last updated 07/09/2021
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-configure-networking.md
Title: Tutorial - Configure networking for your VMware private cloud in Azure description: Learn to create and configure the networking needed to deploy your private cloud in Azure -+ Last updated 04/23/2021 #Customer intent: As a < type of user >, I want < what? > so that < why? >.
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Title: Peer on-premises environments to Azure VMware Solution description: Learn how to create ExpressRoute Global Reach peering to a private cloud in Azure VMware Solution. -+ Last updated 06/21/2021
azure-vmware Tutorial Nsx T Network Segment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-nsx-t-network-segment.md
Title: Tutorial - Add an NSX-T network segment in Azure VMware Solution
-description: Learn how to add an NSX-T network segment to use for virtual machines (VMs) in vCenter.
+ Title: Tutorial - Add a network segment in Azure VMware Solution
+description: Learn how to add a network segment to use for virtual machines (VMs) in vCenter.
Last updated 07/16/2021
-# Tutorial: Add an NSX-T network segment in Azure VMware Solution
+# Tutorial: Add a network segment in Azure VMware Solution
After deploying Azure VMware Solution, you can configure an NSX-T network segment either from NSX-T Manager or the Azure portal. Once configured, the segments are visible in Azure VMware Solution, NSX-T Manager, and vCenter. NSX-T comes pre-provisioned by default with an NSX-T Tier-0 gateway in **Active/Active** mode and a default NSX-T Tier-1 gateway in **Active/Standby** mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
backup Backup Azure Dataprotection Use Rest Api Create Update Blob Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-dataprotection-use-rest-api-create-update-blob-policy.md
A backup policy typically governs the retention and schedule of your backups. Si
The steps to create a backup policy for an Azure Recovery Services vault are outlined in the policy [REST API document](/rest/api/dataprotection/backup-policies/create-or-update). Let's use this document as a reference to create a policy for blobs in a storage account.
-## Create or update a policy
+## Create a policy
-To create or update an Azure Backup policy, use the following *PUT* operation
+> [!IMPORTANT]
+> Currently, we do not support updating or modifying an existing policy. An alternative is to create a new policy with the required details and assign it to the relevant backup instance.
+
+To create an Azure Backup policy, use the following *PUT* operation
```http PUT https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/backupVaults/{vaultName}/backupPolicies/{policyName}?api-version=2021-01-01
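As a rough sketch, a request body for an operational-tier blob policy with 30-day retention might look like the following. The rule and option names here (`AzureRetentionRule`, `AbsoluteDeleteOption`, `OperationalStore`) are assumptions for illustration; confirm the exact schema against the REST API reference linked above.

```json
{
  "properties": {
    "objectType": "BackupPolicy",
    "datasourceTypes": [
      "Microsoft.Storage/storageAccounts/blobServices"
    ],
    "policyRules": [
      {
        "name": "Default",
        "objectType": "AzureRetentionRule",
        "isDefault": true,
        "lifecycles": [
          {
            "deleteAfter": {
              "objectType": "AbsoluteDeleteOption",
              "duration": "P30D"
            },
            "sourceDataStore": {
              "dataStoreType": "OperationalStore",
              "objectType": "DataStoreInfoBase"
            }
          }
        ]
      }
    ]
  }
}
```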
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-sql-automation.md
Get-AzRecoveryServicesVault -Name "testvault" | Set-AzRecoveryServicesVaultConte
We plan on deprecating the vault context setting in accordance with Azure PowerShell guidelines. Instead, you can store or fetch the vault ID, and pass it to relevant commands, as follows: ```powershell
-$vaultID = Get-AzRecoveryServicesVault -ResourceGroupName "Contoso-docs-rg" -Name "testvault" | select -ExpandProperty ID
+$testVault = Get-AzRecoveryServicesVault -ResourceGroupName "Contoso-docs-rg" -Name "testvault"
+$testVault.ID
``` ## Configure a backup policy
For Azure VM backups and Azure File shares, Backup service can connect to these
```powershell $myVM = Get-AzVM -ResourceGroupName <VMRG Name> -Name <VMName>
-Register-AzRecoveryServicesBackupContainer -ResourceId $myVM.ID -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $targetVault.ID -Force
+Register-AzRecoveryServicesBackupContainer -ResourceId $myVM.ID -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $testVault.ID -Force
``` The command will return a 'backup container' of this resource and the status will be 'registered'
The command will return a 'backup container' of this resource and the status wil
Once the registration is done, Backup service will be able to list all the available SQL components within the VM. To view all the SQL components yet to be backed up to this vault use [Get-AzRecoveryServicesBackupProtectableItem](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupprotectableitem) PowerShell cmdlet ```powershell
-Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $targetVault.ID
+Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -VaultId $testVault.ID
``` The output will show all unprotected SQL components across all SQL VMs registered to this vault with Item Type and ServerName. You can further filter to a particular SQL VM by passing the '-Container' parameter or use the combination of 'Name' and 'ServerName' along with ItemType flag to arrive at a unique SQL item. ```powershell
-$SQLDB = Get-AzRecoveryServicesBackupProtectableItem -workloadType MSSQL -ItemType SQLDataBase -VaultId $targetVault.ID -Name "<Item Name>" -ServerName "<Server Name>"
+$SQLDB = Get-AzRecoveryServicesBackupProtectableItem -workloadType MSSQL -ItemType SQLDataBase -VaultId $testVault.ID -Name "<Item Name>" -ServerName "<Server Name>"
``` ### Configuring backup
master ConfigureBackup Completed 3/18/2019 6:00:21 PM
Once the machine is registered, Backup service will fetch the details of the DBs available then. If SQL DBs or SQL instances are added to the registered machine later, you need to manually trigger the backup service to perform a fresh 'inquiry' to get **all** the unprotected DBs (including the newly added ones) again. Use the [Initialize-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/initialize-azrecoveryservicesbackupprotectableitem) PowerShell cmdlet on the SQL VM to perform a fresh inquiry. The command waits until the operation is completed. Later use the [Get-AzRecoveryServicesBackupProtectableItem](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupprotectableitem) PowerShell cmdlet to get the list of latest unprotected SQL components. ```powershell
-$SQLContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -FriendlyName <VM name> -VaultId $targetvault.ID
-Initialize-AzRecoveryServicesBackupProtectableItem -Container $SQLContainer -WorkloadType MSSQL -VaultId $targetvault.ID
-Get-AzRecoveryServicesBackupProtectableItem -workloadType MSSQL -ItemType SQLDataBase -VaultId $targetVault.ID
+$SQLContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -FriendlyName <VM name> -VaultId $testVault.ID
+Initialize-AzRecoveryServicesBackupProtectableItem -Container $SQLContainer -WorkloadType MSSQL -VaultId $testVault.ID
+Get-AzRecoveryServicesBackupProtectableItem -workloadType MSSQL -ItemType SQLDataBase -VaultId $testVault.ID
``` Once the relevant protectable items are fetched, enable the backups as instructed in the [above section](#configuring-backup).
You can configure backup so all DBs added in the future are automatically protec
Since the instruction is to back up all future DBs, the operation is done at a SQLInstance level. ```powershell
-$SQLInstance = Get-AzRecoveryServicesBackupProtectableItem -workloadType MSSQL -ItemType SQLInstance -VaultId $targetVault.ID -Name "<Protectable Item name>" -ServerName "<Server Name>"
-Enable-AzRecoveryServicesBackupAutoProtection -InputItem $SQLInstance -BackupManagementType AzureWorkload -WorkloadType MSSQL -Policy $NewSQLPolicy -VaultId $targetvault.ID
+$SQLInstance = Get-AzRecoveryServicesBackupProtectableItem -workloadType MSSQL -ItemType SQLInstance -VaultId $testVault.ID -Name "<Protectable Item name>" -ServerName "<Server Name>"
+Enable-AzRecoveryServicesBackupAutoProtection -InputItem $SQLInstance -BackupManagementType AzureWorkload -WorkloadType MSSQL -Policy $NewSQLPolicy -VaultId $testVault.ID
``` Once the autoprotection intent is given, the inquiry into the machine to fetch newly added DBs takes place as a scheduled background task every 8 hours.
Check the prerequisites mentioned [here](restore-sql-database-azure-vm.md#restor
First fetch the relevant backed up SQL DB using the [Get-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupitem) PowerShell cmdlet. ```powershell
-$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -Name "<backup item name>" -VaultId $targetVault.ID
+$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -Name "<backup item name>" -VaultId $testVault.ID
``` ### Fetch the relevant restore time
Use [Get-AzRecoveryServicesBackupRecoveryPoint](/powershell/module/az.recoveryse
```powershell $startDate = (Get-Date).AddDays(-7).ToUniversalTime() $endDate = (Get-Date).ToUniversalTime()
-Get-AzRecoveryServicesBackupRecoveryPoint -Item $bkpItem -VaultId $targetVault.ID -StartDate $startdate -EndDate $endDate
+Get-AzRecoveryServicesBackupRecoveryPoint -Item $bkpItem -VaultId $testVault.ID -StartDate $startdate -EndDate $endDate
``` The output is similar to the following example
RecoveryPointId RecoveryPointType RecoveryPointTime ItemName
Use the 'RecoveryPointId' filter or an array filter to fetch the relevant recovery point. ```powershell
-$FullRP = Get-AzRecoveryServicesBackupRecoveryPoint -Item $bkpItem -VaultId $targetVault.ID -RecoveryPointId "6660368097802"
+$FullRP = Get-AzRecoveryServicesBackupRecoveryPoint -Item $bkpItem -VaultId $testVault.ID -RecoveryPointId "6660368097802"
``` #### Fetch point-in-time recovery point
$FullRP = Get-AzRecoveryServicesBackupRecoveryPoint -Item $bkpItem -VaultId $tar
If you want to restore the DB to a certain point-in-time, use [Get-AzRecoveryServicesBackupRecoveryLogChain](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackuprecoverylogchain) PowerShell cmdlet. The cmdlet returns a list of dates that represent start and end times of an unbroken, continuous log chain for that SQL backup item. The desired point-in-time should be within this range. ```powershell
-Get-AzRecoveryServicesBackupRecoveryLogChain -Item $bkpItem -VaultId $targetVault.ID
+Get-AzRecoveryServicesBackupRecoveryLogChain -Item $bkpItem -VaultId $testVault.ID
``` The output will be similar to the following example.
To override the backed-up DB with data from the recovery point, just specify the
##### Original restore with distinct Recovery point ```powershell
-$OverwriteWithFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRP -OriginalWorkloadRestore -VaultId $targetVault.ID
+$OverwriteWithFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRP -OriginalWorkloadRestore -VaultId $testVault.ID
``` ##### Original restore with log point-in-time ```powershell
-$OverwriteWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $bkpItem -OriginalWorkloadRestore -VaultId $targetVault.ID
+$OverwriteWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $bkpItem -OriginalWorkloadRestore -VaultId $testVault.ID
``` #### Alternate workload restore
$OverwriteWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -Po
> [!IMPORTANT] > A backed up SQL DB can be restored as a new DB to another SQLInstance only, in an Azure VM 'registered' to this vault.
-As outlined above, if the target SQLInstance lies within another Azure VM, make sure it's [registered to this vault](#registering-the-sql-vm) and the relevant SQLInstance appears as a protectable item.
+As outlined above, if the target SQLInstance lies within another Azure VM, make sure it's [registered to this vault](#registering-the-sql-vm) and the relevant SQLInstance appears as a protectable item. In this document, let's assume that the target SQLInstance is named MSSQLSERVER and lies within another VM named "Contoso2".
```powershell
-$TargetContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $targetVault.ID
-$TargetInstance = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -Name "<SQLInstance Name>" -ServerName "<SQL VM name>" -VaultId $targetVault.ID
+$TargetContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $testVault.ID -FriendlyName "Contoso2"
+$TargetInstance = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -Name "MSSQLSERVER" -ServerName "Contoso2" -VaultId $testVault.ID
``` Then just pass the relevant recovery point, target SQL instance with the right flag as shown below and the target container under which the target SQL instance exists.
Then just pass the relevant recovery point, target SQL instance with the right f
##### Alternate restore with distinct Recovery point ```powershell
-$AnotherInstanceWithFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRP -TargetItem $TargetInstance -AlternateWorkloadRestore -VaultId $targetVault.ID -TargetContainer $TargetContainer[1]
+$AnotherInstanceWithFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRP -TargetItem $TargetInstance -AlternateWorkloadRestore -VaultId $testVault.ID -TargetContainer $TargetContainer
``` ##### Alternate restore with log point-in-time ```powershell
-$AnotherInstanceWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $bkpItem -TargetItem $TargetInstance -AlternateWorkloadRestore -VaultId $targetVault.ID -TargetContainer $TargetContainer[1]
+$AnotherInstanceWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $bkpItem -TargetItem $TargetInstance -AlternateWorkloadRestore -VaultId $testVault.ID -TargetContainer $TargetContainer
``` ##### Restore as Files
$TargetContainer= Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAp
##### Restore as files with distinct Recovery point ```powershell
-$FileRestoreWithFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRP -TargetContainer $TargetContainer -RestoreAsFiles -FilePath "<>" -VaultId $targetVault.ID
+$FileRestoreWithFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRP -TargetContainer $TargetContainer -RestoreAsFiles -FilePath "<>" -VaultId $testVault.ID
``` ##### Restore as files with log point-in-time from latest full ```powershell
-$FileRestoreWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -TargetContainer $TargetContainer -RestoreAsFiles -FilePath "<>" -VaultId $targetVault.ID
+$FileRestoreWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -TargetContainer $TargetContainer -RestoreAsFiles -FilePath "<>" -VaultId $testVault.ID
``` ##### Restore as files with log point-in-time from a specified full
$FileRestoreWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -
If you want to give a specific full that should be used for restore, use the following command: ```powershell
-$FileRestoreWithLogAndSpecificFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -FromFull $FullRP -TargetContainer $TargetContainer -RestoreAsFiles -FilePath "<>" -VaultId $targetVault.ID
+$FileRestoreWithLogAndSpecificFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -FromFull $FullRP -TargetContainer $TargetContainer -RestoreAsFiles -FilePath "<>" -VaultId $testVault.ID
``` The final recovery point configuration object obtained from [Get-AzRecoveryServicesBackupWorkloadRecoveryConfig](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupworkloadrecoveryconfig) PowerShell cmdlet has all the relevant information for restore and is as shown below.
If you have enabled cross region restore, then the recovery points will be repli
Fetch all the SQL backup items from the secondary region with the usual command but with an extra parameter to indicate that these items should be fetched from secondary region. ```powershell
-$secondaryBkpItems = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $targetVault.ID -UseSecondaryRegion
+$secondaryBkpItems = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $testVault.ID -UseSecondaryRegion
``` ##### Fetch distinct recovery points from secondary region
Use [Get-AzRecoveryServicesBackupRecoveryPoint](/powershell/module/az.recoveryse
```powershell $startDate = (Get-Date).AddDays(-7).ToUniversalTime() $endDate = (Get-Date).ToUniversalTime()
-Get-AzRecoveryServicesBackupRecoveryPoint -Item $secondaryBkpItems[0] -VaultId $targetVault.ID -StartDate $startdate -EndDate $endDate -UseSecondaryRegion
+Get-AzRecoveryServicesBackupRecoveryPoint -Item $secondaryBkpItems[0] -VaultId $testVault.ID -StartDate $startdate -EndDate $endDate -UseSecondaryRegion
``` The output is similar to the following example
RecoveryPointId RecoveryPointType RecoveryPointTime ItemName
Use the 'RecoveryPointId' filter or an array filter to fetch the relevant recovery point. ```powershell
-$FullRPFromSec = Get-AzRecoveryServicesBackupRecoveryPoint -Item $secondaryBkpItems[0] -VaultId $targetVault.ID -RecoveryPointId "6660368097802" -UseSecondaryRegion
+$FullRPFromSec = Get-AzRecoveryServicesBackupRecoveryPoint -Item $secondaryBkpItems[0] -VaultId $testVault.ID -RecoveryPointId "6660368097802" -UseSecondaryRegion
``` ##### Fetch log recovery points from secondary region
$FullRPFromSec = Get-AzRecoveryServicesBackupRecoveryPoint -Item $secondaryBkpIt
Use [Get-AzRecoveryServicesBackupRecoveryLogChain](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackuprecoverylogchain) PowerShell cmdlet with the parameter '*-UseSecondaryRegion*' which will return start and end times of an unbroken, continuous log chain for that SQL backup item from the secondary region. The desired point-in-time should be within this range. ```powershell
-Get-AzRecoveryServicesBackupRecoveryLogChain -Item $secondaryBkpItems[0] -VaultId $targetVault.ID -UseSecondaryRegion
+Get-AzRecoveryServicesBackupRecoveryLogChain -Item $secondaryBkpItems[0] -VaultId $testVault.ID -UseSecondaryRegion
``` The output will be similar to the following example.
The above output means that you can restore to any point-in-time between the dis
#### Fetch target server from secondary region
-From the secondary region, we need a vault and a target server registered to that vault. Once we have the secondary region target container and the SQL instance, we can re-use the existing cmdlets to generate a restore workload configuration.
+From the secondary region, we need a vault and a target server registered to that vault. Once we have the secondary region target container and the SQL instance, we can re-use the existing cmdlets to generate a restore workload configuration. In this document, let's assume that the VM name is "secondaryVM" and the instance name within that VM is "MSSQLInstance".
First, we fetch the relevant vault present in the secondary region and then get the registered containers within that vault. ```powershell $PairedRegionVault = Get-AzRecoveryServicesVault -ResourceGroupName SecondaryRG -Name PairedVault
-$seccontainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $PairedRegionVault.ID
+$secContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $PairedRegionVault.ID -FriendlyName "secondaryVM"
```
-Once the registered container is chosen, then we fetch the SQL instances within the container to which the DB should be restored to.
+Once the registered container is chosen, we fetch the SQL instances within the container to which the DB should be restored. Here we assume that there is one SQL instance within "secondaryVM", and we fetch that instance.
```powershell
-Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -VaultId $PairedRegionVault.ID -Container $seccontainer
-```
-
-From the output, choose the SQL server name and assign the output to a variable which will be used later for restore.
-
-```powershell
-$secSQLInstance = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -VaultId $PairedRegionVault.ID -Container $seccontainer -ServerName "sqlserver-0.corp.contoso.com"
+$secSQLInstance = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -VaultId $PairedRegionVault.ID -Container $secContainer
``` #### Prepare the recovery configuration
As documented [above](#determine-recovery-configuration) for the normal SQL rest
##### For full restores from secondary region ```powershell
-Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRPFromSec[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID -TargetContainer $seccontainer[1]
+Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRPFromSec[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID -TargetContainer $secContainer
``` ##### For log point in time restores from secondary region ```powershell
-Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $secondaryBkpItems[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID -TargetContainer $seccontainer[1]
+Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $secondaryBkpItems[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID -TargetContainer $secContainer
``` Once the relevant configuration is obtained for primary region restore or secondary region restore, the same restore command can be used to trigger restores and later tracked using the jobIDs.
Once the relevant configuration is obtained for primary region restore or second
Once the relevant recovery Config object is obtained and verified, use the [Restore-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/restore-azrecoveryservicesbackupitem) PowerShell cmdlet to start the restore process. ```powershell
-Restore-AzRecoveryServicesBackupItem -WLRecoveryConfig $AnotherInstanceWithLogConfig -VaultId $targetVault.ID
+Restore-AzRecoveryServicesBackupItem -WLRecoveryConfig $AnotherInstanceWithLogConfig -VaultId $testVault.ID
``` The restore operation returns a job to be tracked.
MSSQLSERVER/m... Restore InProgress 3/17/2019 10:02:45 AM
Once backup has been enabled for a DB, you can also trigger an on-demand backup for the DB using [Backup-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/backup-azrecoveryservicesbackupitem) PowerShell cmdlet. The following example triggers a full backup on a SQL DB with compression enabled and the full backup should be retained for 60 days. ```powershell
-$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -Name "<backup item name>" -VaultId $targetVault.ID
+$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -Name "<backup item name>" -VaultId $testVault.ID
$endDate = (Get-Date).AddDays(60).ToUniversalTime()
-Backup-AzRecoveryServicesBackupItem -Item $bkpItem -BackupType Full -EnableCompression -VaultId $targetVault.ID -ExpiryDateTimeUTC $endDate
+Backup-AzRecoveryServicesBackupItem -Item $bkpItem -BackupType Full -EnableCompression -VaultId $testVault.ID -ExpiryDateTimeUTC $endDate
``` The on-demand backup command returns a job to be tracked.
Set-AzRecoveryServicesBackupProtectionPolicy -Policy $Pol -FixForInconsistentIte
To trigger re-registration of the SQL VM, fetch the relevant backup container and pass it to the register cmdlet. ```powershell
-$SQLContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -FriendlyName <VM name> -VaultId $targetvault.ID
-Register-AzRecoveryServicesBackupContainer -Container $SQLContainer -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $targetVault.ID
+$SQLContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -FriendlyName <VM name> -VaultId $testVault.ID
+Register-AzRecoveryServicesBackupContainer -Container $SQLContainer -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $testVault.ID
``` ### Stop protection
Register-AzRecoveryServicesBackupContainer -Container $SQLContainer -BackupManag
If you wish to stop protection, you can use the [Disable-AzRecoveryServicesBackupProtection](/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackupprotection) PowerShell cmdlet. This will stop the scheduled backups but the data backed up until now is retained forever. ```powershell
-$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -Name "<backup item name>" -VaultId $targetVault.ID
-Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $targetVault.ID
+$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -Name "<backup item name>" -VaultId $testVault.ID
+Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $testVault.ID
``` #### Delete backup data
Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $targetVault.
In order to completely remove the stored backup data in the vault, just add '-RemoveRecoveryPoints' flag/switch to the ['disable' protection command](#retain-data). ```powershell
-Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $targetVault.ID -RemoveRecoveryPoints
+Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $testVault.ID -RemoveRecoveryPoints
``` #### Disable auto protection
Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $targetVault.
If autoprotection was configured on an SQLInstance, you can disable it using the [Disable-AzRecoveryServicesBackupAutoProtection](/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackupautoprotection) PowerShell cmdlet. ```powershell
-$SQLInstance = Get-AzRecoveryServicesBackupProtectableItem -workloadType MSSQL -ItemType SQLInstance -VaultId $targetVault.ID -Name "<Protectable Item name>" -ServerName "<Server Name>"
-Disable-AzRecoveryServicesBackupAutoProtection -InputItem $SQLInstance -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $targetvault.ID
+$SQLInstance = Get-AzRecoveryServicesBackupProtectableItem -workloadType MSSQL -ItemType SQLInstance -VaultId $testVault.ID -Name "<Protectable Item name>" -ServerName "<Server Name>"
+Disable-AzRecoveryServicesBackupAutoProtection -InputItem $SQLInstance -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $testVault.ID
``` #### Unregister SQL VM
Disable-AzRecoveryServicesBackupAutoProtection -InputItem $SQLInstance -BackupMa
If all the DBs of a SQL server [are no longer protected and no backup data exists](#delete-backup-data), you can unregister the SQL VM from this vault. Only then you can protect DBs to another vault. Use [Unregister-AzRecoveryServicesBackupContainer](/powershell/module/az.recoveryservices/unregister-azrecoveryservicesbackupcontainer) PowerShell cmdlet to unregister the SQL VM. ```powershell
-$SQLContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -FriendlyName <VM name> -VaultId $targetvault.ID
- Unregister-AzRecoveryServicesBackupContainer -Container $SQLContainer -VaultId $targetvault.ID
+$SQLContainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -FriendlyName <VM name> -VaultId $testVault.ID
+ Unregister-AzRecoveryServicesBackupContainer -Container $SQLContainer -VaultId $testVault.ID
``` ### Track Azure Backup jobs
It's important to note that Azure Backup only tracks user triggered jobs in SQL
Users can track on-demand/user triggered operations with the JobID that's returned in the [output](#on-demand-backup) of asynchronous jobs such as backup. Use [Get-AzRecoveryServicesBackupJobDetail](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupjobdetail) PowerShell cmdlet to track job and its details. ```powershell
- Get-AzRecoveryServicesBackupJobDetails -JobId 2516bb1a-d3ef-4841-97a3-9ba455fb0637 -VaultId $targetVault.ID
+ Get-AzRecoveryServicesBackupJobDetails -JobId 2516bb1a-d3ef-4841-97a3-9ba455fb0637 -VaultId $testVault.ID
``` To get the list of on-demand jobs and their statuses from Azure Backup service, use [Get-AzRecoveryServicesBackupJob](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupjob) PowerShell cmdlet. The following example returns all the in-progress SQL jobs.
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-vm-sizes.md
Title: Choose VM sizes and images for pools description: How to choose from the available VM sizes and OS versions for compute nodes in Azure Batch pools Previously updated : 06/01/2021 Last updated : 07/20/2021
Batch pools in the Virtual Machine configuration support almost all [VM sizes](.
| Dv4, Dsv4 | Not supported | | Ev3, Esv3 | All sizes, except for E64is_v3 | | Eav4, Easv4 | All sizes |
-| Edv4, Edsv4 | All sizes, except for Standard_E20d_v4, Standard_E20ds_v4, Standard_E80ids_v4 |
+| Edv4, Edsv4 | All sizes |
| Ev4, Esv4 | Not supported | | F, Fs | All sizes | | Fsv2 | All sizes |
batch Batch Quota Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-quota-limit.md
Title: Service quotas and limits description: Learn about default Azure Batch quotas, limits, and constraints, and how to request quota increases Previously updated : 04/06/2021 Last updated : 07/20/2021
These additional limits are set by the Batch service. Unlike [resource quotas](#
| **Resource** | **Maximum Limit** | | | | | [Concurrent tasks](batch-parallel-node-tasks.md) per compute node | 4 x number of node cores |
-| [Applications](batch-application-packages.md) per Batch account | 20 |
+| [Applications](batch-application-packages.md) per Batch account | 200 |
| Application packages per application | 40 | | Application packages per pool | 10 | | Maximum task lifetime | 180 days<sup>1</sup> |
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
This table lists accepted data types, when each data type should be used, and th
| Data type | Used for testing | Recommended quantity | Used for training | Recommended quantity | |--|--|-|-|-| | [Audio](#audio-data-for-testing) | Yes<br>Used for visual inspection | 5+ audio files | No | N/A |
-| [Audio + Human-labeled transcripts](#audio-and-human-labeled-transcript-data) | Yes<br>Used to evaluate accuracy | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
| [Plain text](#plain-text-data-for-training) | No | N/a | Yes | 1-200 MB of related text | | [Pronunciation](#pronunciation-data-for-training) | No | N/a | Yes | 1 KB - 1 MB of pronunciation text |
+| [Audio + Human-labeled transcripts](#audio-and-human-labeled-transcript-data) | Yes<br>Used to evaluate accuracy | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
Files should be grouped by type into a dataset and uploaded as a .zip file. Each dataset can only contain a single data type.
After your dataset is uploaded, you have a few options:
* You can navigate to the **Test models** tab to visually inspect quality with audio only data or evaluate accuracy with audio + human-labeled transcription data.
-## Audio and human-labeled transcript data
-
-Audio + human-labeled transcript data can be used for both training and testing purposes. To improve the acoustic aspects like slight accents, speaking styles, background noises, or to measure the accuracy of Microsoft's speech-to-text accuracy when processing your audio files, you must provide human-labeled transcriptions (word-by-word) for comparison. While human-labeled transcription is often time consuming, it's necessary to evaluate accuracy and to train the model for your use cases. Keep in mind, the improvements in recognition will only be as good as the data provided. For that reason, it's important that only high-quality transcripts are uploaded.
-
-Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. While audio with low recording volume or disruptive background noise is not helpful, it should not hurt your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
-
-| Property | Value |
-|--|-|
-| File format | RIFF (WAV) |
-| Sample rate | 8,000 Hz or 16,000 Hz |
-| Channels | 1 (mono) |
-| Maximum length per audio | 2 hours (testing) / 60 s (training) |
-| Sample format | PCM, 16-bit |
-| Archive format | .zip |
-| Maximum zip size | 2 GB |
--
-> [!NOTE]
-> When uploading training and testing data, the .zip file size cannot exceed 2 GB. You can only test from a *single* dataset, be sure to keep it within the appropriate file size. Additionally, each training file cannot exceed 60 seconds otherwise it will error out.
-
-To address issues like word deletion or substitution, a significant amount of data is required to improve recognition. Generally, it's recommended to provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help to improve recognition results. The transcriptions for all WAV files should be contained in a single plain-text file. Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t).
-
-For example:
-
-<!-- The following example contains tabs. Don't accidentally convert these into spaces. -->
-
-```input
-speech01.wav speech recognition is awesome
-speech02.wav the quick brown fox jumped all over the place
-speech03.wav the lazy dog was not amused
-```
-
-> [!IMPORTANT]
-> Transcription should be encoded as UTF-8 byte order mark (BOM).
-
-The transcriptions are text-normalized so they can be processed by the system. However, there are some important normalizations that must be done before uploading the data to the Speech Studio. For the appropriate language to use when you prepare your transcriptions, see [How to create a human-labeled transcription](how-to-custom-speech-human-labeled-transcriptions.md)
-
-After you've gathered your audio files and corresponding transcriptions, package them as a single .zip file before uploading to the <a href="https://speech.microsoft.com/customspeech" target="_blank">Speech Studio </a>. Below is an example dataset with three audio files and a human-labeled transcription file:
-
-> [!div class="mx-imgBorder"]
-> ![Select audio from the Speech Portal](./media/custom-speech/custom-speech-audio-transcript-pairs.png)
-
-See [Set up your Azure account](custom-speech-overview.md#set-up-your-azure-account) for a list of recommended regions for your Speech service subscriptions. Setting up the Speech subscriptions in one of these regions will reduce the time it takes to train the model. In these regions, training can process about 10 hours of audio per day compared to just 1 hour per day in other regions. If model training cannot be completed within a week, the model will be marked as failed.
-
-Not all base models support training with audio data. If the base model does not support it, the service will ignore the audio and just train with the text of the transcriptions. In this case, training will be the same as training with related text. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
- ## Plain text data for training Domain related sentences can be used to improve accuracy when recognizing product names, or industry-specific jargon. Sentences can be provided as a single text file. To improve accuracy, use text data that is closer to the expected spoken utterances.
Use the following table to ensure that your related data file for pronunciations
| # of pronunciations per line | 1 | | Maximum file size | 1 MB (1 KB for free tier) |
+## Audio and human-labeled transcript data
+
+Audio + human-labeled transcript data can be used for both training and testing purposes. To improve the acoustic aspects like slight accents, speaking styles, or background noises, or to measure the accuracy of Microsoft's speech-to-text when processing your audio files, you must provide human-labeled transcriptions (word-by-word) for comparison. While human-labeled transcription is often time-consuming, it's necessary to evaluate accuracy and to train the model for your use cases. Keep in mind, the improvements in recognition will only be as good as the data provided. For that reason, it's important that only high-quality transcripts are uploaded.
+
+Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. While audio with low recording volume or disruptive background noise is not helpful, it should not hurt your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
+
+| Property | Value |
+|--|-|
+| File format | RIFF (WAV) |
+| Sample rate | 8,000 Hz or 16,000 Hz |
+| Channels | 1 (mono) |
+| Maximum length per audio | 2 hours (testing) / 60 s (training) |
+| Sample format | PCM, 16-bit |
+| Archive format | .zip |
+| Maximum zip size | 2 GB |
++
+> [!NOTE]
+> When uploading training and testing data, the .zip file size cannot exceed 2 GB. You can only test from a *single* dataset, so be sure to keep it within the appropriate file size. Additionally, each training file cannot exceed 60 seconds, or it will error out.
+
+To address issues like word deletion or substitution, a significant amount of data is required to improve recognition. Generally, it's recommended to provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help to improve recognition results. The transcriptions for all WAV files should be contained in a single plain-text file. Each line of the transcription file should contain the name of one of the audio files, followed by the corresponding transcription. The file name and transcription should be separated by a tab (\t).
+
+For example:
+
+<!-- The following example contains tabs. Don't accidentally convert these into spaces. -->
+
+```input
+speech01.wav speech recognition is awesome
+speech02.wav the quick brown fox jumped all over the place
+speech03.wav the lazy dog was not amused
+```
+
+> [!IMPORTANT]
+> Transcription should be encoded as UTF-8 byte order mark (BOM).
+
+The transcriptions are text-normalized so they can be processed by the system. However, there are some important normalizations that must be done before uploading the data to the Speech Studio. For the appropriate language to use when you prepare your transcriptions, see [How to create a human-labeled transcription](how-to-custom-speech-human-labeled-transcriptions.md)
+
+After you've gathered your audio files and corresponding transcriptions, package them as a single .zip file before uploading to the <a href="https://speech.microsoft.com/customspeech" target="_blank">Speech Studio </a>. Below is an example dataset with three audio files and a human-labeled transcription file:
+
+> [!div class="mx-imgBorder"]
+> ![Select audio from the Speech Portal](./media/custom-speech/custom-speech-audio-transcript-pairs.png)
+
+See [Set up your Azure account](custom-speech-overview.md#set-up-your-azure-account) for a list of recommended regions for your Speech service subscriptions. Setting up the Speech subscriptions in one of these regions will reduce the time it takes to train the model. In these regions, training can process about 10 hours of audio per day compared to just 1 hour per day in other regions. If model training cannot be completed within a week, the model will be marked as failed.
+
+Not all base models support training with audio data. If the base model does not support it, the service will ignore the audio and just train with the text of the transcriptions. In this case, training will be the same as training with related text. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
+ ## Audio data for testing Audio data is optimal for testing the accuracy of Microsoft's baseline speech-to-text model or a custom model. Keep in mind, audio data is used to inspect the accuracy of speech with regards to a specific model's performance. If you're looking to quantify the accuracy of a model, use [audio + human-labeled transcription data](#audio-and-human-labeled-transcript-data).
confidential-computing Enclave Aware Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/enclave-aware-containers.md
This solution allows you to bring existing ML trained model and run them confide
Get started with ML model lift and shift to ONNX runtime [here](https://aka.ms/confidentialinference)
-### Edgeless RT
-
-Edgeless RT is an open-source project that builds upon the Open Enclave SDK. It adds support for Go and additional C++ features. Get started with a simple confidential Go application using your familiar VS Code environment [here](https://github.com/edgelesssys/edgelessrt). For Edgeless applications on AKS follow instructions [here](https://github.com/edgelesssys/edgelessrt/blob/master/docs/ERTAzureAKSDeployment.md)
+### EGo
+The open source [EGo SDK](https://www.ego.dev) brings support for the Go programming language to enclaves. EGo builds upon the Open Enclave SDK and aims to make it easy to build confidential microservices. Follow this [step-by-step guide](https://github.com/edgelesssys/ego/tree/master/samples/aks) to deploy an EGo-based service on AKS.
## Container Based Sample Implementations
Edgeless RT is an open-source project that builds upon the Open Enclave SDK. It
<!-- LINKS - internal --> [DC Virtual Machine](./virtual-machine-solutions.md)
-[Confidential Containers](./confidential-containers.md)
+[Confidential Containers](./confidential-containers.md)
confidential-computing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/overview.md
Build applications on top of confidential compute IaaS offerings in Azure.
### Azure Security Ensure your workloads are secure through verification methods and hardware-bound key management. - Attestation: [Microsoft Azure Attestation (Preview)](../attestation/overview.md)-- Key Management: Managed-HSM (Preview)
+- Key Management: Managed-HSM
### Develop Start developing enclave-aware applications and deploy confidential algorithms using the confidential inferencing framework.
connectors Connectors Create Api Azure Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-azure-event-hubs.md
Title: Connect to Azure Event Hubs
description: Connect to your event hub, and add a trigger or an action to your workflow in Azure Logic Apps. ms.suite: integration-- Previously updated : 05/03/2021++ Last updated : 07/16/2021 tags: connectors
The following steps describe the general way to add a trigger, for example, **Wh
## Trigger polling behavior
-All Event Hubs triggers are *long-polling* triggers, which means that the trigger processes all the events and then waits 30 seconds per partition for more events to appear in your event hub.
+All Event Hubs triggers are long-polling triggers. This behavior means that when a trigger fires, the trigger processes all the events and waits 30 seconds for more events to appear in your event hub. By design, if no events appear in 30 seconds, the trigger is skipped. Otherwise, the trigger continues reading events until your event hub is empty. The next trigger poll happens based on the recurrence interval that you set in the trigger's properties.
For example, if the trigger is set up with four partitions, this delay might take up to two minutes before the trigger finishes polling all the partitions. If no events are received within this delay, the trigger run is skipped. Otherwise, the trigger continues reading events until your event hub is empty. The next trigger poll happens based on the recurrence interval that you specify in the trigger's properties.
+If you know the specific partitions where the messages appear, you can update the trigger to read events from only those partitions by setting the trigger's maximum and minimum partition keys. For more information, review the [Add Event Hubs trigger](#add-trigger) section.
+
## Trigger checkpoint behavior When an Event Hubs trigger reads events from each partition in an event hub, the trigger users its own state to maintain information about the stream offset (the event position in a partition) and the partitions from where the trigger reads events.
For all the operations and other technical information, such as properties, limi
## Next steps
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-native-recurrence.md
This example shows how a Recurrence trigger definition might look in an underlyi
} ```
+The following example shows how to update the trigger definition so that the trigger runs only once on the last day of each month:
+
+```json
+"triggers": {
+ "Recurrence": {
+ "recurrence": {
+ "frequency": "Month",
+ "interval": 1,
+ "schedule": {
+ "monthDays": [-1]
+ }
+ },
+ "type": "Recurrence"
+ }
+}
+```
+ <a name="daylight-saving-standard-time"></a> ## Trigger recurrence shift between daylight saving time and standard time
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-system-variables.md
These system variables can be referenced anywhere in the pipeline JSON.
| @pipeline().TriggerName|Name of the trigger that invoked the pipeline | | @pipeline().TriggerTime|Time of the trigger run that invoked the pipeline. This is the time at which the trigger **actually** fired to invoke the pipeline run, and it may differ slightly from the trigger's scheduled time. | | @pipeline().GroupId | ID of the group to which pipeline run belongs. |
-| @pipeline()__?__.TriggeredByPipelineName | Name of the pipeline that trigger the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluate to _Null_ when used in other circumstances. Note the question mark after @pipeline() |
-| @pipeline()__?__.TriggeredByPipelineRunId | Run id of the pipeline that trigger the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluate to _Null_ when used in other circumstances. Note the question mark after @pipeline() |
+| @pipeline()?.TriggeredByPipelineName | Name of the pipeline that triggers the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluates to _Null_ when used in other circumstances. Note the question mark after @pipeline(). |
+| @pipeline()?.TriggeredByPipelineRunId | Run ID of the pipeline that triggers the pipeline run. Applicable when the pipeline run is triggered by an ExecutePipeline activity. Evaluates to _Null_ when used in other circumstances. Note the question mark after @pipeline(). |
>[!NOTE] >Trigger-related date/time system variables (in both pipeline and trigger scopes) return UTC dates in ISO 8601 format, for example, `2017-06-01T22:20:00.4061448Z`.
data-factory Copy Data Tool Metadata Driven https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-data-tool-metadata-driven.md
Each row in control table contains the metadata for one object (for example, one
| CopySinkSettings | Metadata of sink property in copy activity. It can be preCopyScript, tableOption etc. Here is an [example](connector-azure-sql-database.md#azure-sql-database-as-the-sink). | | CopyActivitySettings | Metadata of translator property in copy activity. It is used to define column mapping. | | TopLevelPipelineName | Top Pipeline name, which can copy this object. |
-| TriggerName | Trigger name, which can trigger the pipeline to copy this object. |
+| TriggerName | Trigger name, which can trigger the pipeline to copy this object. For a debug run, the name is Sandbox. For a manual execution, the name is Manual. |
| DataLoadingBehaviorSettings | Full load vs. delta load. | | TaskId | The order of objects to be copied following the TaskId in control table (ORDER BY [TaskId] DESC). If you have a huge number of objects to copy but only a limited number of concurrent copy runs is allowed, you can change the TaskId for each object to decide which objects are copied earlier. The default value is 0. |
data-factory Create Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-ssis-integration-runtime.md
Title: Create an Azure-SSIS integration runtime in Azure Data Factory
description: Learn how to create an Azure-SSIS integration runtime in Azure Data Factory so you can deploy and run SSIS packages in Azure. Previously updated : 06/04/2021 Last updated : 07/19/2021
The [Provisioning Azure-SSIS IR](./tutorial-deploy-ssis-packages-azure.md) tutor
- Use an Azure SQL Database server with IP firewall rules/virtual network service endpoints or a managed instance with private endpoint to host SSISDB. As a prerequisite, you need to configure virtual network permissions and settings for your Azure-SSIS IR to join a virtual network. -- Use Azure Active Directory (Azure AD) authentication with the managed identity for your data factory to connect to an Azure SQL Database server or managed instance. As a prerequisite, you need to add the managed identity for your data factory as a database user who can create an SSISDB instance.
+- Use Azure Active Directory (Azure AD) authentication with the specified system/user-assigned managed identity for your data factory to connect to an Azure SQL Database server or managed instance. As a prerequisite, you need to add the specified system/user-assigned managed identity for your data factory as a database user who can create an SSISDB instance.
- Join your Azure-SSIS IR to a virtual network, or configure a self-hosted IR as proxy for your Azure-SSIS IR to access data on-premises.
This article shows how to provision an Azure-SSIS IR by using the Azure portal,
- Add the IP address of the client machine, or a range of IP addresses that includes the IP address of the client machine, to the client IP address list in the firewall settings for the database server. For more information, see [Azure SQL Database server-level and database-level firewall rules](../azure-sql/database/firewall-configure.md).
- - You can connect to the database server by using SQL authentication with your server admin credentials, or by using Azure AD authentication with the managed identity for your data factory. For the latter, you need to add the managed identity for your data factory into an Azure AD group with access permissions to the database server. For more information, see [Enable Azure AD authentication for an Azure-SSIS IR](./enable-aad-authentication-azure-ssis-ir.md).
+ - You can connect to the database server by using SQL authentication with your server admin credentials, or by using Azure AD authentication with the specified system/user-assigned managed identity for your data factory. For the latter, you need to add the specified system/user-assigned managed identity for your data factory into an Azure AD group with access permissions to the database server. For more information, see [Enable Azure AD authentication for an Azure-SSIS IR](./enable-aad-authentication-azure-ssis-ir.md).
- Confirm that your database server does not have an SSISDB instance already. The provisioning of an Azure-SSIS IR does not support using an existing SSISDB instance.
The following table compares certain features of an Azure SQL Database server an
| Feature | SQL Database| SQL Managed instance | ||--|| | **Scheduling** | The SQL Server Agent is not available.<br/><br/>See [Schedule a package execution in a Data Factory pipeline](/sql/integration-services/lift-shift/ssis-azure-schedule-packages#activity).| The Managed Instance Agent is available. |
-| **Authentication** | You can create an SSISDB instance with a contained database user who represents any Azure AD group with the managed identity of your data factory as a member in the **db_owner** role.<br/><br/>See [Enable Azure AD authentication to create an SSISDB in Azure SQL Database server](enable-aad-authentication-azure-ssis-ir.md#enable-azure-ad-on-azure-sql-database). | You can create an SSISDB instance with a contained database user who represents the managed identity of your data factory. <br/><br/>See [Enable Azure AD authentication to create an SSISDB in Azure SQL Managed Instance](enable-aad-authentication-azure-ssis-ir.md#enable-azure-ad-on-sql-managed-instance). |
+| **Authentication** | You can create an SSISDB instance with a contained database user who represents any Azure AD group with the managed identity of your data factory as a member in the **db_owner** role.<br/><br/>See [Enable Azure AD authentication to create an SSISDB in Azure SQL Database server](enable-aad-authentication-azure-ssis-ir.md#enable-azure-ad-authentication-on-azure-sql-database). | You can create an SSISDB instance with a contained database user who represents the managed identity of your data factory. <br/><br/>See [Enable Azure AD authentication to create an SSISDB in Azure SQL Managed Instance](enable-aad-authentication-azure-ssis-ir.md#enable-azure-ad-authentication-on-azure-sql-managed-instance). |
| **Service tier** | When you create an Azure-SSIS IR with your Azure SQL Database server, you can select the service tier for SSISDB. There are multiple service tiers. | When you create an Azure-SSIS IR with your managed instance, you can't select the service tier for SSISDB. All databases in your managed instance share the same resource allocated to that instance. | | **Virtual network** | Your Azure-SSIS IR can join an Azure Resource Manager virtual network if you use an Azure SQL Database server with IP firewall rules/virtual network service endpoints. | Your Azure-SSIS IR can join an Azure Resource Manager virtual network if you use a managed instance with private endpoint. The virtual network is required when you don't enable a public endpoint for your managed instance.<br/><br/>If you join your Azure-SSIS IR to the same virtual network as your managed instance, make sure that your Azure-SSIS IR is in a different subnet from your managed instance. If you join your Azure-SSIS IR to a different virtual network from your managed instance, we recommend either a virtual network peering or a network-to-network connection. See [Connect your application to an Azure SQL Database Managed Instance](../azure-sql/managed-instance/connect-application-instance.md). | | **Distributed transactions** | This feature is supported through elastic transactions. Microsoft Distributed Transaction Coordinator (MSDTC) transactions are not supported. If your SSIS packages use MSDTC to coordinate distributed transactions, consider migrating to elastic transactions for Azure SQL Database. For more information, see [Distributed transactions across cloud databases](../azure-sql/database/elastic-transactions-overview.md). | Not supported. |
On the home page, select the **Configure SSIS** tile to open the **Integration r
On the **General settings** page of **Integration runtime setup** pane, complete the following steps.
- ![General settings](./media/tutorial-create-azure-ssis-runtime-portal/general-settings.png)
+![General settings](./media/tutorial-create-azure-ssis-runtime-portal/general-settings.png)
- 1. For **Name**, enter the name of your integration runtime.
+1. For **Name**, enter the name of your integration runtime.
- 2. For **Description**, enter the description of your integration runtime.
+2. For **Description**, enter the description of your integration runtime.
- 3. For **Location**, select the location of your integration runtime. Only supported locations are displayed. We recommend that you select the same location of your database server to host SSISDB.
+3. For **Location**, select the location of your integration runtime. Only supported locations are displayed. We recommend that you select the same location of your database server to host SSISDB.
+
+4. For **Node Size**, select the size of the node in your integration runtime cluster. Only supported node sizes are displayed. Select a large node size (scale up) if you want to run many compute-intensive or memory-intensive packages.
- 4. For **Node Size**, select the size of the node in your integration runtime cluster. Only supported node sizes are displayed. Select a large node size (scale up) if you want to run many compute-intensive or memory-intensive packages.
> [!NOTE] > If you require [compute isolation](../azure-government/azure-secure-isolation-guidance.md#compute-isolation), please select the **Standard_E64i_v3** node size. This node size represents isolated virtual machines that consume their entire physical host and provide the necessary level of isolation required by certain workloads, such as the US Department of Defense's Impact Level 5 (IL5) workloads.
- 5. For **Node Number**, select the number of nodes in your integration runtime cluster. Only supported node numbers are displayed. Select a large cluster with many nodes (scale out) if you want to run many packages in parallel.
+5. For **Node Number**, select the number of nodes in your integration runtime cluster. Only supported node numbers are displayed. Select a large cluster with many nodes (scale out) if you want to run many packages in parallel.
- 6. For **Edition/License**, select the SQL Server edition for your integration runtime: Standard or Enterprise. Select Enterprise if you want to use advanced features on your integration runtime.
+6. For **Edition/License**, select the SQL Server edition for your integration runtime: Standard or Enterprise. Select Enterprise if you want to use advanced features on your integration runtime.
- 7. For **Save Money**, select the Azure Hybrid Benefit option for your integration runtime: **Yes** or **No**. Select **Yes** if you want to bring your own SQL Server license with Software Assurance to benefit from cost savings with hybrid use.
+7. For **Save Money**, select the Azure Hybrid Benefit option for your integration runtime: **Yes** or **No**. Select **Yes** if you want to bring your own SQL Server license with Software Assurance to benefit from cost savings with hybrid use.
- 8. Select **Continue**.
+8. Select **Continue**.
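
These general settings can also be scripted. A minimal sketch with `Set-AzDataFactoryV2IntegrationRuntime`, the cmdlet used by the provisioning script later in this article, assuming placeholder resource names and a supported node size:

```powershell
# Placeholder names; pick a location, node size, and node count supported for Azure-SSIS IR.
Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "myResourceGroup" `
    -DataFactoryName "myDataFactory" `
    -Name "myAzureSsisIr" `
    -Description "Azure-SSIS IR" `
    -Type Managed `
    -Location "EastUS" `
    -NodeSize "Standard_D8_v3" `
    -NodeCount 2 `
    -Edition "Standard" `
    -LicenseType "LicenseIncluded"
```
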
#### Deployment settings page
Regardless of your deployment model, if you want to use SQL Server Agent hosted
If you select the check box, complete the following steps to bring your own database server to host SSISDB that we'll create and manage on your behalf.
- ![Deployment settings for SSISDB](./media/tutorial-create-azure-ssis-runtime-portal/deployment-settings.png)
+![Deployment settings for SSISDB](./media/tutorial-create-azure-ssis-runtime-portal/deployment-settings.png)
- 1. For **Subscription**, select the Azure subscription that has your database server to host SSISDB.
+1. For **Subscription**, select the Azure subscription that has your database server to host SSISDB.
- 1. For **Location**, select the location of your database server to host SSISDB. We recommend that you select the same location of your integration runtime.
+1. For **Location**, select the location of your database server to host SSISDB. We recommend that you select the same location of your integration runtime.
- 1. For **Catalog Database Server Endpoint**, select the endpoint of your database server to host SSISDB.
+1. For **Catalog Database Server Endpoint**, select the endpoint of your database server to host SSISDB.
- Based on the selected database server, the SSISDB instance can be created on your behalf as a single database, as part of an elastic pool, or in a managed instance. It can be accessible in a public network or by joining a virtual network. For guidance in choosing the type of database server to host SSISDB, see [Compare SQL Database and SQL Managed Instance](../data-factory/create-azure-ssis-integration-runtime.md#comparison-of-sql-database-and-sql-managed-instance).
+ Based on the selected database server, the SSISDB instance can be created on your behalf as a single database, as part of an elastic pool, or in a managed instance. It can be accessible in a public network or by joining a virtual network. For guidance in choosing the type of database server to host SSISDB, see [Compare SQL Database and SQL Managed Instance](../data-factory/create-azure-ssis-integration-runtime.md#comparison-of-sql-database-and-sql-managed-instance).
- If you select an Azure SQL Database server with IP firewall rules/virtual network service endpoints or a managed instance with private endpoint to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR, you need to join your Azure-SSIS IR to a virtual network. For more information, see [Join an Azure-SSIS IR to a virtual network](./join-azure-ssis-integration-runtime-virtual-network.md).
+ If you select an Azure SQL Database server with IP firewall rules/virtual network service endpoints or a managed instance with private endpoint to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR, you need to join your Azure-SSIS IR to a virtual network. For more information, see [Join an Azure-SSIS IR to a virtual network](./join-azure-ssis-integration-runtime-virtual-network.md).
+
+1. Select either the **Use AAD authentication with the system managed identity for Data Factory** or **Use AAD authentication with a user-assigned managed identity for Data Factory** check box to choose the Azure AD authentication method for your Azure-SSIS IR to access the database server that hosts SSISDB. To use the SQL authentication method instead, don't select either check box.
- 1. Select the **Use Azure AD authentication with the managed identity for your ADF** check box to choose the authentication method for your database server to host SSISDB. You'll choose either SQL authentication or Azure AD authentication with the managed identity for your data factory.
+ If you select either check box, you'll need to add the specified system/user-assigned managed identity for your data factory into an Azure AD group with access permissions to your database server. If you select the **Use AAD authentication with a user-assigned managed identity for Data Factory** check box, you can then select any existing credentials created using your specified user-assigned managed identities or create new ones. For more information, see [Enable Azure AD authentication for an Azure-SSIS IR](./enable-aad-authentication-azure-ssis-ir.md).
- If you select the check box, you'll need to add the managed identity for your data factory into an Azure AD group with access permissions to your database server. For more information, see [Enable Azure AD authentication for an Azure-SSIS IR](./enable-aad-authentication-azure-ssis-ir.md).
-
- 1. For **Admin Username**, enter the SQL authentication username for your database server to host SSISDB.
+1. For **Admin Username**, enter the SQL authentication username for your database server that hosts SSISDB.
- 1. For **Admin Password**, enter the SQL authentication password for your database server to host SSISDB.
+1. For **Admin Password**, enter the SQL authentication password for your database server that hosts SSISDB.
- 1. Select the **Use dual standby Azure-SSIS Integration Runtime pair with SSISDB failover** check box to configure a dual standby Azure SSIS IR pair that works in sync with Azure SQL Database/Managed Instance failover group for business continuity and disaster recovery (BCDR).
+1. Select the **Use dual standby Azure-SSIS Integration Runtime pair with SSISDB failover** check box to configure a dual standby Azure SSIS IR pair that works in sync with Azure SQL Database/Managed Instance failover group for business continuity and disaster recovery (BCDR).
- If you select the check box, enter a name to identify your pair of primary and secondary Azure-SSIS IRs in the **Dual standby pair name** text box. You need to enter the same pair name when creating your primary and secondary Azure-SSIS IRs.
+ If you select the check box, enter a name to identify your pair of primary and secondary Azure-SSIS IRs in the **Dual standby pair name** text box. You need to enter the same pair name when creating your primary and secondary Azure-SSIS IRs.
- For more information, see [Configure your Azure-SSIS IR for BCDR](./configure-bcdr-azure-ssis-integration-runtime.md).
+ For more information, see [Configure your Azure-SSIS IR for BCDR](./configure-bcdr-azure-ssis-integration-runtime.md).
- 1. For **Catalog Database Service Tier**, select the service tier for your database server to host SSISDB. Select the Basic, Standard, or Premium tier, or select an elastic pool name.
+1. For **Catalog Database Service Tier**, select the service tier for your database server to host SSISDB. Select the Basic, Standard, or Premium tier, or select an elastic pool name.
-Select **Test connection** when applicable and if it's successful, select **Continue**.
+Select **Test connection** when applicable, and if it's successful, select **Continue**.
> [!NOTE]
- > If you use Azure SQL Database server to host SSISDB, your data will be stored in geo-redundant storage for backups by default. If you don't want your data to be replicated in other regions, please follow the instructions to [Configure backup storage redundancy by using PowerShell](../azure-sql/database/automated-backups-overview.md?tabs=single-database#configure-backup-storage-redundancy-by-using-powershell).
+> If you use Azure SQL Database server to host SSISDB, your data will be stored in geo-redundant storage for backups by default. If you don't want your data to be replicated in other regions, please follow the instructions to [Configure backup storage redundancy by using PowerShell](../azure-sql/database/automated-backups-overview.md?tabs=single-database#configure-backup-storage-redundancy-by-using-powershell).
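
If you script the deployment settings instead of using the portal, the SSISDB options above map roughly to the catalog parameters of `Set-AzDataFactoryV2IntegrationRuntime`. A sketch with placeholder values, shown here for SQL authentication:

```powershell
# Placeholder values; the credential is the admin of the database server that will host SSISDB.
$catalogPassword = ConvertTo-SecureString "ServerAdminPassword" -AsPlainText -Force
$catalogCredential = New-Object System.Management.Automation.PSCredential("ServerAdminUsername", $catalogPassword)

Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "myResourceGroup" `
    -DataFactoryName "myDataFactory" `
    -Name "myAzureSsisIr" `
    -CatalogServerEndpoint "myserver.database.windows.net" `
    -CatalogAdminCredential $catalogCredential `
    -CatalogPricingTier "S1"
```
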
##### Creating Azure-SSIS IR package stores
On the **Add package store** pane, complete the following steps.
1. For **Database name**, enter `msdb`.
- 1. For **Authentication type**, select **SQL Authentication**, **Managed Identity**, or **Service Principal**.
+ 1. For **Authentication type**, select **SQL Authentication**, **Managed Identity**, **Service Principal**, or **User-Assigned Managed Identity**.
- If you select **SQL Authentication**, enter the relevant **Username** and **Password** or select your **Azure Key Vault** where it's stored as a secret.
- - If you select **Managed Identity**, grant your ADF managed identity access to your Azure SQL Managed Instance.
+ - If you select **Managed Identity**, grant the system managed identity for your ADF access to your Azure SQL Managed Instance.
- If you select **Service Principal**, enter the relevant **Service principal ID** and **Service principal key** or select your **Azure Key Vault** where it's stored as a secret.
+ - If you select **User-Assigned Managed Identity**, grant the specified user-assigned managed identity for your ADF access to your Azure SQL Managed Instance. You can then select any existing credentials created using your specified user-assigned managed identities or create new ones.
+ 1. If you select **File system**, enter the UNC path of the folder where your packages are deployed for **Host**, as well as the relevant **Username** and **Password**, or select your **Azure Key Vault** where it's stored as a secret. 1. Select **Test connection** when applicable and if it's successful, select **Create**.
On the **Advanced settings** page of **Integration runtime setup** pane, complet
On the **Summary** section, review all provisioning settings, bookmark the recommended documentation links, and select **Finish** to start the creation of your integration runtime.
- > [!NOTE]
- > Excluding any custom setup time, this process should finish within 5 minutes. But it might take 20-30 minutes for the Azure-SSIS IR to join a virtual network.
- >
- > If you use SSISDB, the Data Factory service will connect to your database server to prepare SSISDB. It also configures permissions and settings for your virtual network, if specified, and joins your Azure-SSIS IR to the virtual network.
- >
- > When you provision an Azure-SSIS IR, Access Redistributable and Azure Feature Pack for SSIS are also installed. These components provide connectivity to Excel files, Access files, and various Azure data sources, in addition to the data sources that built-in components already support. For more information about built-in/preinstalled components, see [Built-in/preinstalled components on Azure-SSIS IR](./built-in-preinstalled-components-ssis-integration-runtime.md). For more information about additional components that you can install, see [Custom setups for Azure-SSIS IR](./how-to-configure-azure-ssis-ir-custom-setup.md).
+> [!NOTE]
+> Excluding any custom setup time, this process should finish within 5 minutes. But it might take 20-30 minutes for the Azure-SSIS IR to join a virtual network.
+>
+> If you use SSISDB, the Data Factory service will connect to your database server to prepare SSISDB. It also configures permissions and settings for your virtual network, if specified, and joins your Azure-SSIS IR to the virtual network.
+>
+> When you provision an Azure-SSIS IR, Access Redistributable and Azure Feature Pack for SSIS are also installed. These components provide connectivity to Excel files, Access files, and various Azure data sources, in addition to the data sources that built-in components already support. For more information about built-in/preinstalled components, see [Built-in/preinstalled components on Azure-SSIS IR](./built-in-preinstalled-components-ssis-integration-runtime.md). For more information about additional components that you can install, see [Custom setups for Azure-SSIS IR](./how-to-configure-azure-ssis-ir-custom-setup.md).
#### Connections pane
If you don't use an Azure SQL Database server with IP firewall rules/virtual net
If you use managed instance to host SSISDB, you can omit the `CatalogPricingTier` parameter or pass an empty value for it. Otherwise, you can't omit it and must pass a valid value from the list of supported pricing tiers for Azure SQL Database. For more information, see [SQL Database resource limits](../azure-sql/database/resource-limits-logical-server.md).
-If you use Azure AD authentication with the managed identity for your data factory to connect to the database server, you can omit the `CatalogAdminCredential` parameter. But you must add the managed identity for your data factory into an Azure AD group with access permissions to the database server. For more information, see [Enable Azure AD authentication for an Azure-SSIS IR](./enable-aad-authentication-azure-ssis-ir.md). Otherwise, you can't omit it and must pass a valid object formed from your server admin username and password for SQL authentication.
+If you use Azure AD authentication with the specified system/user-assigned managed identity for your data factory to connect to the database server, you can omit the `CatalogAdminCredential` parameter. But you must add the specified system/user-assigned managed identity for your data factory into an Azure AD group with access permissions to the database server. For more information, see [Enable Azure AD authentication for an Azure-SSIS IR](./enable-aad-authentication-azure-ssis-ir.md). Otherwise, you can't omit it and must pass a valid object formed from your server admin username and password for SQL authentication.
```powershell Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
data-factory Enable Aad Authentication Azure Ssis Ir https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
Title: Enable AAD for Azure SSIS Integration Runtime
-description: This article describes how to enable Azure Active Directory authentication with the managed identity for Azure Data Factory to create Azure-SSIS Integration Runtime.
+ Title: Enable Azure Active Directory authentication for Azure SSIS integration runtime
+description: This article describes how to enable Azure Active Directory authentication with the specified system/user-assigned managed identity for Azure Data Factory to create Azure-SSIS integration runtime.
ms.devlang: powershell Previously updated : 07/09/2020 Last updated : 07/19/2021
-# Enable Azure Active Directory authentication for Azure-SSIS Integration Runtime
+# Enable Azure Active Directory authentication for Azure-SSIS integration runtime
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This article shows you how to enable Azure Active Directory (Azure AD) authentication with the managed identity for your Azure Data Factory (ADF) and use it instead of conventional authentication methods (like SQL authentication) to:
+This article shows you how to enable Azure Active Directory (Azure AD) authentication with the specified system/user-assigned managed identity for your Azure Data Factory (ADF) and use it instead of conventional authentication methods (like SQL authentication) to:
-- Create an Azure-SSIS Integration Runtime (IR) that will in turn provision SSIS catalog database (SSISDB) in SQL Database or SQL Managed Instance on your behalf.
+- Create an Azure-SSIS integration runtime (IR) that will in turn provision the SSIS catalog database (SSISDB) in Azure SQL Database server/Managed Instance on your behalf.
- Connect to various Azure resources when running SSIS packages on Azure-SSIS IR. For more info about the managed identity for your ADF, see [Managed identity for Data Factory](./data-factory-service-identity.md). > [!NOTE]
+> - In this scenario, Azure AD authentication with the specified system/user-assigned managed identity for your ADF is only used in the creation and subsequent starting operations of your Azure-SSIS IR that will in turn provision and connect to SSISDB. For SSIS package executions, your Azure-SSIS IR will still connect to SSISDB using SQL authentication with fully managed accounts that are created during SSISDB provisioning.
>
-> - In this scenario, Azure AD authentication with the managed identity for your ADF is only used in the creation and subsequent starting operations of your SSIS IR that will in turn provision and connect to SSISDB. For SSIS package executions, your SSIS IR will still connect to SSISDB using SQL authentication with fully managed accounts that are created during SSISDB provisioning.
-> - If you have already created your SSIS IR using SQL authentication, you can not reconfigure it to use Azure AD authentication via PowerShell at this time, but you can do so via Azure portal/ADF app.
+> - If you have already created your Azure-SSIS IR using SQL authentication, you cannot reconfigure it to use Azure AD authentication via PowerShell at this time, but you can do so via the Azure portal/ADF app.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-## Enable Azure AD on Azure SQL Database
+## Enable Azure AD authentication on Azure SQL Database
-SQL Database supports creating a database with an Azure AD user. First, you need to create an Azure AD group with the managed identity for your ADF as a member. Next, you need to set an Azure AD user as the Active Directory admin for your SQL Database and then connect to it on SQL Server Management Studio (SSMS) using that user. Finally, you need to create a contained user representing the Azure AD group, so the managed identity for your ADF can be used by Azure-SSIS IR to create SSISDB on your behalf.
+Azure SQL Database supports creating a database with an Azure AD user. First, you need to create an Azure AD group with the specified system/user-assigned managed identity for your ADF as a member. Next, you need to set an Azure AD user as the Active Directory admin for your Azure SQL Database server and then connect to it on SQL Server Management Studio (SSMS) using that user. Finally, you need to create a contained user representing the Azure AD group, so the specified system/user-assigned managed identity for your ADF can be used by Azure-SSIS IR to create SSISDB on your behalf.
-### Create an Azure AD group with the managed identity for your ADF as a member
+### Create an Azure AD group with the specified system/user-assigned managed identity for your ADF as a member
You can use an existing Azure AD group or create a new one using Azure AD PowerShell.
-1. Install the [Azure AD PowerShell](/powershell/azure/active-directory/install-adv2) module.
+1. Install the [Azure AD PowerShell](/powershell/azure/active-directory/install-adv2.md) module.
2. Sign in using `Connect-AzureAD`, run the following cmdlet to create a group, and save it in a variable:
You can use an existing Azure AD group or create a new one using Azure AD PowerS
6de75f3c-8b2f-4bf4-b9f8-78cc60a18050 SSISIrGroup ```
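
A minimal sketch of the group-creation cmdlet for step 2, assuming the Azure AD PowerShell module; the display name matches the example output above:

```powershell
# Create a security group and keep it in a variable so later steps can reference $Group.ObjectId.
$Group = New-AzureADGroup -DisplayName "SSISIrGroup" `
                          -MailEnabled $false `
                          -SecurityEnabled $true `
                          -MailNickName "NotSet"
```
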
-3. Add the managed identity for your ADF to the group. You can follow the article [Managed identity for Data Factory](./data-factory-service-identity.md) to get the principal Managed Identity Object ID (e.g. 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc, but do not use Managed Identity Application ID for this purpose).
+3. Add the specified system/user-assigned managed identity for your ADF to the group. You can follow the [Managed identity for Data Factory](./data-factory-service-identity.md) article to get the Object ID of the specified system/user-assigned managed identity for your ADF (for example, 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc, but do not use the Application ID for this purpose).
```powershell Add-AzureAdGroupMember -ObjectId $Group.ObjectId -RefObjectId 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc
You can use an existing Azure AD group or create a new one using Azure AD PowerS
Get-AzureAdGroupMember -ObjectId $Group.ObjectId ```
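
To look up the Object ID mentioned in step 3 with Az PowerShell, one option is the following sketch, which assumes the system-assigned managed identity and placeholder resource names:

```powershell
# The PrincipalId of the data factory's system-assigned managed identity is the value
# to pass to Add-AzureAdGroupMember as -RefObjectId.
(Get-AzDataFactoryV2 -ResourceGroupName "myResourceGroup" -Name "myDataFactory").Identity.PrincipalId
```
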
-### Configure Azure AD authentication for SQL Database
+### Configure Azure AD authentication for Azure SQL Database
-You can [Configure and manage Azure AD authentication with SQL](../azure-sql/database/authentication-aad-configure.md) using the following steps:
+You can [Configure and manage Azure AD authentication for Azure SQL Database](../azure-sql/database/authentication-aad-configure.md) using the following steps:
1. In Azure portal, select **All services** -> **SQL servers** from the left-hand navigation.
-2. Select your server in SQL Database to be configured with Azure AD authentication.
+2. Select your Azure SQL Database server to be configured with Azure AD authentication.
3. In the **Settings** section of the blade, select **Active Directory admin**.
You can [Configure and manage Azure AD authentication with SQL](../azure-sql/da
6. In the command bar, select **Save.**
-### Create a contained user in SQL Database representing the Azure AD group
+### Create a contained user in Azure SQL Database representing the Azure AD group
-For this next step, you need [Microsoft SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) (SSMS).
+For this next step, you need [SSMS](/sql/ssms/download-sql-server-management-studio-ssms.md).
1. Start SSMS.
-2. In the **Connect to Server** dialog, enter your server name in
- the **Server name** field.
+2. In the **Connect to Server** dialog, enter your server name in the **Server name** field.
-3. In the **Authentication** field, select **Active Directory - Universal with MFA support** (you can also use the other two Active Directory authentication types, see [Configure and manage Azure AD authentication with SQL](../azure-sql/database/authentication-aad-configure.md)).
+3. In the **Authentication** field, select **Active Directory - Universal with MFA support** (you can also use the other two Active Directory authentication types, see [Configure and manage Azure AD authentication for Azure SQL Database](../azure-sql/database/authentication-aad-configure.md)).
4. In the **User name** field, enter the name of Azure AD account that you set as the server administrator, e.g. testuser@xxxonline.com.
For this next step, you need [Microsoft SQL Server Management Studio](/sql/ssms
The command should complete successfully, granting the contained user the ability to create a database (SSISDB).
-10. If your SSISDB was created using SQL authentication and you want to switch to use Azure AD authentication for your Azure-SSIS IR to access it, first make sure that the steps to grant permission to the **master** database finished successfully. Then, right-click the **SSISDB** database and select **New query**.
+10. If your SSISDB was created using SQL authentication and you want to switch to use Azure AD authentication for your Azure-SSIS IR to access it, first make sure that the steps to grant permissions to the **master** database have finished successfully. Then, right-click on the **SSISDB** database and select **New query**.
11. In the query window, enter the following T-SQL command, and select **Execute** on the toolbar.
For this next step, you need [Microsoft SQL Server Management Studio](/sql/ssms
The command should complete successfully, granting the contained user the ability to access SSISDB.
-## Enable Azure AD on SQL Managed Instance
+## Enable Azure AD authentication on Azure SQL Managed Instance
-SQL Managed Instance supports creating a database with the managed identity for your ADF directly. You need not join the managed identity for your ADF to an Azure AD group nor create a contained user representing that group in SQL Managed Instance.
+Azure SQL Managed Instance supports creating a database with the specified system/user-assigned managed identity for your ADF directly. You need not join the specified system/user-assigned managed identity for your ADF to an Azure AD group nor create a contained user representing that group in Azure SQL Managed Instance.
### Configure Azure AD authentication for Azure SQL Managed Instance
-Follow the steps in [Provision an Azure Active Directory administrator for SQL Managed Instance](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance).
+Follow the steps in [Provision an Azure AD administrator for Azure SQL Managed Instance](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance).
-### Add the managed identity for your ADF as a user in SQL Managed Instance
+### Add the specified system/user-assigned managed identity for your ADF as a user in Azure SQL Managed Instance
-For this next step, you need [Microsoft SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) (SSMS).
+For this next step, you need [SSMS](/sql/ssms/download-sql-server-management-studio-ssms.md).
1. Start SSMS.
-2. Connect to to SQL Managed Instance using a SQL Server account that is a **sysadmin**. This is a temporary limitation that will be removed once Azure AD server principals (logins) for Azure SQL Managed Instance becomes GA. You will see the following error if you try to use an Azure AD admin account to create the login: Msg 15247, Level 16, State 1, Line 1 User does not have permission to perform this action.
+2. Connect to Azure SQL Managed Instance using a SQL Server account that is a **sysadmin**. This is a temporary limitation that will be removed once the support for Azure AD server principals (logins) on Azure SQL Managed Instance becomes generally available. You will see the following error if you try to use an Azure AD admin account to create the login: *Msg 15247, Level 16, State 1, Line 1 User does not have permission to perform this action*.
3. In the **Object Explorer**, expand the **Databases** -> **System Databases** folder. 4. Right-click on **master** database and select **New query**.
-5. In the query window, execute the following T-SQL script to add the managed identity for your ADF as a user
+5. In the query window, execute the following T-SQL script to add the specified system/user-assigned managed identity for your ADF as a user.
```sql
- CREATE LOGIN [{your ADF name}] FROM EXTERNAL PROVIDER
- ALTER SERVER ROLE [dbcreator] ADD MEMBER [{your ADF name}]
- ALTER SERVER ROLE [securityadmin] ADD MEMBER [{your ADF name}]
+ CREATE LOGIN [{your managed identity name}] FROM EXTERNAL PROVIDER
+ ALTER SERVER ROLE [dbcreator] ADD MEMBER [{your managed identity name}]
+ ALTER SERVER ROLE [securityadmin] ADD MEMBER [{your managed identity name}]
```+
+ If you use the system managed identity for your ADF, then *your managed identity name* should be your ADF name. If you use a user-assigned managed identity for your ADF, then *your managed identity name* should be the specified user-assigned managed identity name.
- The command should complete successfully, granting the managed identity for your ADF the ability to create a database (SSISDB).
+ The command should complete successfully, granting the system/user-assigned managed identity for your ADF the ability to create a database (SSISDB).
-6. If your SSISDB was created using SQL authentication and you want to switch to use Azure AD authentication for your Azure-SSIS IR to access it, first make sure that the steps to grant permission to the **master** database finished successfully. Then, right-click the **SSISDB** database and select **New query**.
+6. If your SSISDB was created using SQL authentication and you want to switch to use Azure AD authentication for your Azure-SSIS IR to access it, first make sure that the steps to grant permissions to the **master** database have finished successfully. Then, right-click on the **SSISDB** database and select **New query**.
7. In the query window, enter the following T-SQL command, and select **Execute** on the toolbar. ```sql
- CREATE USER [{your ADF name}] FOR LOGIN [{your ADF name}] WITH DEFAULT_SCHEMA = dbo
- ALTER ROLE db_owner ADD MEMBER [{your ADF name}]
+ CREATE USER [{your managed identity name}] FOR LOGIN [{your managed identity name}] WITH DEFAULT_SCHEMA = dbo
+ ALTER ROLE db_owner ADD MEMBER [{your managed identity name}]
```
- The command should complete successfully, granting the managed identity for your ADF the ability to access SSISDB.
+ The command should complete successfully, granting the system/user-assigned managed identity for your ADF the ability to access SSISDB.
## Provision Azure-SSIS IR in Azure portal/ADF app
-When you provision your Azure-SSIS IR in Azure portal/ADF app, on **SQL Settings** page, select **Use AAD authentication with the managed identity for your ADF** option. The following screenshot shows the settings for IR with SQL Database hosting SSISDB. For IR with SQL Managed Instance hosting SSISDB, the **Catalog Database Service Tier** and **Allow Azure services to access** settings are not applicable, while other settings are the same.
-
-For more info about how to create an Azure-SSIS IR, see [Create an Azure-SSIS integration runtime in Azure Data Factory](./create-azure-ssis-integration-runtime.md).
+When you provision your Azure-SSIS IR in Azure portal/ADF app, on the **Deployment settings** page, select the **Create SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance to store your projects/packages/environments/execution logs** check box and select either the **Use AAD authentication with the system managed identity for Data Factory** or **Use AAD authentication with a user-assigned managed identity for Data Factory** check box to choose the Azure AD authentication method for your Azure-SSIS IR to access the database server that hosts SSISDB.
-![Settings for the Azure-SSIS integration runtime](media/enable-aad-authentication-azure-ssis-ir/enable-aad-authentication.png)
+For more information, see [Create an Azure-SSIS IR in ADF](./create-azure-ssis-integration-runtime.md).
## Provision Azure-SSIS IR with PowerShell
To provision your Azure-SSIS IR with PowerShell, do the following things:
-Name $AzureSSISName ```
-## Run SSIS Packages with Managed Identity Authentication
+## Run SSIS packages using Azure AD authentication with the specified system/user-assigned managed identity for your ADF
-When you run SSIS packages on Azure-SSIS IR, you can use managed identity authentication to connect to various Azure resources. Currently we have already supported managed identity authentication in the following connection managers.
+When you run SSIS packages on Azure-SSIS IR, you can use Azure AD authentication with the specified system/user-assigned managed identity for your ADF to connect to various Azure resources. Currently we support Azure AD authentication with the specified system/user-assigned managed identity for your ADF on the following connection managers.
-- [OLE DB Connection Manager](/sql/integration-services/connection-manager/ole-db-connection-manager#managed-identities-for-azure-resources-authentication)
+- [OLEDB Connection Manager](/sql/integration-services/connection-manager/ole-db-connection-manager.md#managed-identities-for-azure-resources-authentication)
-- [ADO.NET Connection Manager](/sql/integration-services/connection-manager/ado-net-connection-manager#managed-identities-for-azure-resources-authentication)
+- [ADO.NET Connection Manager](/sql/integration-services/connection-manager/ado-net-connection-manager.md#managed-identities-for-azure-resources-authentication)
-- [Azure Storage Connection Manager](/sql/integration-services/connection-manager/azure-storage-connection-manager#managed-identities-for-azure-resources-authentication)
+- [Azure Storage Connection Manager](/sql/integration-services/connection-manager/azure-storage-connection-manager.md#managed-identities-for-azure-resources-authentication)
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/managed-virtual-network-private-endpoint.md
$privateEndpointResourceId = "subscriptions/${subscriptionId}/resourceGroups/${r
$integrationRuntimeResourceId = "subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/integrationRuntimes/${integrationRuntimeName}" # Create managed Virtual Network resource
-New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${vnetResourceId}" -Properties
+New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${vnetResourceId}" -Properties @{}
# Create managed private endpoint resource New-AzResource -ApiVersion "${apiVersion}" -ResourceId "${privateEndpointResourceId}" -Properties @{
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Known Facts about *ForEach*
**Resolution**
-* **Concurrency Limit:** If your pipeline has a concurrency policy, verify that there are no old pipeline runs in progress. The maximum pipeline concurrency allowed in Azure Data Factory is 10 pipelines .
+* **Concurrency Limit:** If your pipeline has a concurrency policy, verify that there are no old pipeline runs in progress.
* **Monitoring limits**: Go to the ADF authoring canvas, select your pipeline, and determine if it has a concurrency property assigned to it. If it does, go to the Monitoring view, and make sure there's nothing in the past 45 days that's in progress. If there is something in progress, you can cancel it and the new pipeline run should start. * **Transient Issues:** It is possible that your run was impacted by a transient network issue, credential failures, services outages etc. If this happens, Azure Data Factory has an internal recovery process that monitors all the runs and starts them when it notices something went wrong. This process happens every one hour, so if your run is stuck for more than an hour, create a support case.
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
Previously updated : 05/19/2021 Last updated : 07/19/2021 # Configure a self-hosted IR as a proxy for an Azure-SSIS IR in Azure Data Factory [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This article describes how to run SQL Server Integration Services (SSIS) packages on an Azure-SSIS Integration Runtime (Azure-SSIS IR) in Azure Data Factory with a self-hosted integration runtime (self-hosted IR) configured as a proxy.
+This article describes how to run SQL Server Integration Services (SSIS) packages on an Azure-SSIS Integration Runtime (Azure-SSIS IR) in Azure Data Factory (ADF) with a self-hosted integration runtime (self-hosted IR) configured as a proxy.
With this feature, you can access data and run tasks on premises without having to [join your Azure-SSIS IR to a virtual network](./join-azure-ssis-integration-runtime-virtual-network.md). The feature is useful when your corporate network has a configuration too complex or a policy too restrictive for you to inject your Azure-SSIS IR into it.
Finally, you download and install the latest version of self-hosted IR, as well
### Enable Windows authentication for on-premises tasks
-If on-premises staging tasks and Execute SQL/Process Tasks on your self-hosted IR require Windows authentication, you must also [configure Windows authentication feature on your Azure-SSIS IR](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth).
+If on-premises staging tasks and Execute SQL/Process Tasks on your self-hosted IR require Windows authentication, you must also [configure Windows authentication feature on your Azure-SSIS IR](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth.md).
Your on-premises staging tasks and Execute SQL/Process Tasks will be invoked with the self-hosted IR service account (*NT SERVICE\DIAHostService*, by default), and your data stores will be accessed with the Windows authentication account. Both accounts require certain security policies to be assigned to them. On the self-hosted IR machine, go to **Local Security Policy** > **Local Policies** > **User Rights Assignment**, and then do the following:
Your on-premises staging tasks and Execute SQL/Process Tasks will be invoked wit
## Prepare the Azure Blob Storage linked service for staging
-If you haven't already done so, create an Azure Blob Storage linked service in the same data factory where your Azure-SSIS IR is set up. To do so, see [Create an Azure data factory-linked service](./quickstart-create-data-factory-portal.md#create-a-linked-service). Be sure to do the following:
+If you haven't already done so, create an Azure Blob Storage linked service in the same data factory where your Azure-SSIS IR is set up. To do so, see [Create an Azure Data Factory linked service](./quickstart-create-data-factory-portal.md#create-a-linked-service). Be sure to do the following:
- For **Data Store**, select **Azure Blob Storage**. - For **Connect via integration runtime**, select **AutoResolveIntegrationRuntime** (not your self-hosted IR), so we can ignore it and use your Azure-SSIS IR instead to fetch access credentials for your Azure Blob Storage.-- For **Authentication method**, select **Account key**, **SAS URI**, **Service Principal**, or **Managed Identity**.
+- For **Authentication method**, select **Account key**, **SAS URI**, **Service Principal**, **Managed Identity**, or **User-Assigned Managed Identity**.
>[!TIP]
->If you select the **Service Principal** method, grant your service principal at least a *Storage Blob Data Contributor* role. For more information, see [Azure Blob Storage connector](connector-azure-blob-storage.md#linked-service-properties). If you select the **Managed Identity** method, grant your ADF managed identity a proper role to access Azure Blob Storage. For more information, see [Access Azure Blob Storage using Azure Active Directory authentication with ADF managed identity](/sql/integration-services/connection-manager/azure-storage-connection-manager#managed-identities-for-azure-resources-authentication).
+>If you select the **Service Principal** method, grant your service principal at least a *Storage Blob Data Contributor* role. For more information, see [Azure Blob Storage connector](connector-azure-blob-storage.md#linked-service-properties). If you select the **Managed Identity**/**User-Assigned Managed Identity** method, grant the specified system/user-assigned managed identity for your ADF a proper role to access Azure Blob Storage. For more information, see [Access Azure Blob Storage using Azure Active Directory (Azure AD) authentication with the specified system/user-assigned managed identity for your ADF](/sql/integration-services/connection-manager/azure-storage-connection-manager.md#managed-identities-for-azure-resources-authentication).
![Prepare the Azure Blob storage-linked service for staging](media/self-hosted-integration-runtime-proxy-ssis/shir-azure-blob-storage-linked-service.png)
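
If you'd rather define this staging linked service with Az PowerShell than through the UI, a minimal sketch using account-key authentication could look like the following; the names, path, and connection string are placeholders, not values from this article.

```powershell
# Placeholder linked service definition; connectVia deliberately references AutoResolveIntegrationRuntime.
@'
{
    "name": "AzureBlobStorageForStaging",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}
'@ | Set-Content -Path .\AzureBlobStorageForStaging.json

# Register the linked service in the same data factory that hosts your Azure-SSIS IR.
Set-AzDataFactoryV2LinkedService -ResourceGroupName "myResourceGroup" `
    -DataFactoryName "myDataFactory" `
    -Name "AzureBlobStorageForStaging" `
    -DefinitionFile .\AzureBlobStorageForStaging.json
```
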
If you need to use strong cryptography/more secure network protocol (TLS 1.2) an
## Next steps
-After you've configured your self-hosted IR as a proxy for your Azure-SSIS IR, you can deploy and run your packages to access data on-premises as Execute SSIS Package activities in Data Factory pipelines. To learn how, see [Run SSIS packages as Execute SSIS Package activities in Data Factory pipelines](./how-to-invoke-ssis-package-ssis-activity.md).
+After you've configured your self-hosted IR as a proxy for your Azure-SSIS IR, you can deploy and run your packages to access data on-premises as Execute SSIS Package activities in Data Factory pipelines. To learn how, see [Run SSIS packages as Execute SSIS Package activities in Data Factory pipelines](./how-to-invoke-ssis-package-ssis-activity.md).
data-factory Tutorial Deploy Ssis Packages Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md
ms.devlang: powershell Previously updated : 10/13/2020 Last updated : 07/19/2021
In this tutorial, you will:
- Add the IP address of the client machine, or a range of IP addresses that includes the IP address of the client machine, to the client IP address list in the firewall settings for the database server. For more information, see [Azure SQL Database server-level and database-level firewall rules](../azure-sql/database/firewall-configure.md).
- - You can connect to the database server by using SQL authentication with your server admin credentials, or by using Azure AD authentication with the managed identity for your data factory. For the latter, you need to add the managed identity for your data factory into an Azure AD group with access permissions to the database server. For more information, see [Create an Azure-SSIS IR with Azure AD authentication](./create-azure-ssis-integration-runtime.md).
+ - You can connect to the database server by using SQL authentication with your server admin credentials, or by using Azure AD authentication with the specified system/user-assigned managed identity for your data factory. For the latter, you need to add the specified system/user-assigned managed identity for your data factory into an Azure AD group with access permissions to the database server. For more information, see [Create an Azure-SSIS IR with Azure AD authentication](./create-azure-ssis-integration-runtime.md).
- Confirm that your database server does not have an SSISDB instance already. The provisioning of an Azure-SSIS IR does not support using an existing SSISDB instance.
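For the firewall prerequisite above, a server-level rule can also be added from the Azure CLI. The following is an illustrative sketch with placeholder values; substitute your own resource group, server name, and client IP address.

```azurecli-interactive
# Illustrative only: allow a single client IP address on the logical SQL server.
az sql server firewall-rule create \
  --resource-group <your-resource-group> \
  --server <your-server-name> \
  --name AllowClientIp \
  --start-ip-address <client-ip-address> \
  --end-ip-address <client-ip-address>
```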
data-factory Tutorial Deploy Ssis Packages Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-deploy-ssis-packages-azure.md
description: Learn how to provision the Azure-SSIS integration runtime in Azure
Previously updated : 06/04/2021 Last updated : 07/19/2021
In this tutorial, you complete the following steps:
- Add the IP address of the client machine, or a range of IP addresses that includes the IP address of the client machine, to the client IP address list in the firewall settings for the database server. For more information, see [Azure SQL Database server-level and database-level firewall rules](../azure-sql/database/firewall-configure.md).
- - You can connect to the database server by using SQL authentication with your server admin credentials, or by using Azure AD authentication with the managed identity for your data factory. For the latter, you need to add the managed identity for your data factory into an Azure AD group with access permissions to the database server. For more information, see [Create an Azure-SSIS IR with Azure AD authentication](./create-azure-ssis-integration-runtime.md).
+ - You can connect to the database server by using SQL authentication with your server admin credentials, or by using Azure Active Directory (Azure AD) authentication with the specified system/user-assigned managed identity for your data factory. For the latter, you need to add the specified system/user-assigned managed identity for your data factory into an Azure AD group with access permissions to the database server. For more information, see [Create an Azure-SSIS IR with Azure AD authentication](./create-azure-ssis-integration-runtime.md).
- Confirm that your database server does not have an SSISDB instance already. The provisioning of an Azure-SSIS IR does not support using an existing SSISDB instance.
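For the Azure AD group mentioned above, commands along these lines can create the group and add the managed identity for your data factory to it. This is an illustrative sketch; it assumes you have sufficient Azure AD permissions, and the member ID is the managed identity's object (principal) ID.

```azurecli-interactive
# Illustrative only: create an Azure AD group and add the data factory's managed identity to it.
az ad group create --display-name <your-aad-group-name> --mail-nickname <your-aad-group-name>
az ad group member add --group <your-aad-group-name> --member-id <adf-managed-identity-object-id>
```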
If you select the check box, complete the following steps to bring your own data
If you select an Azure SQL Database server with IP firewall rules/virtual network service endpoints or a managed instance with private endpoint to host SSISDB, or if you require access to on-premises data without configuring a self-hosted IR, you need to join your Azure-SSIS IR to a virtual network. For more information, see [Create an Azure-SSIS IR in a virtual network](./create-azure-ssis-integration-runtime.md).
- 1. Select the **Use Azure AD authentication with the managed identity for your ADF** check box to choose the authentication method for your database server to host SSISDB. You'll choose either SQL authentication or Azure AD authentication with the managed identity for your data factory.
+ 1. Select either the **Use AAD authentication with the system managed identity for Data Factory** or the **Use AAD authentication with a user-assigned managed identity for Data Factory** check box to choose the Azure AD authentication method for your Azure-SSIS IR to access the database server that hosts SSISDB. Leave both check boxes cleared to use SQL authentication instead.
- If you select the check box, you'll need to add the managed identity for your data factory into an Azure AD group with access permissions to your database server. For more information, see [Create an Azure-SSIS IR with Azure AD authentication](./create-azure-ssis-integration-runtime.md).
-
- 1. For **Admin Username**, enter the SQL authentication username for your database server to host SSISDB.
+ If you select either check box, you'll need to add the specified system/user-assigned managed identity for your data factory into an Azure AD group with access permissions to your database server. If you select the **Use AAD authentication with a user-assigned managed identity for Data Factory** check box, you can then select any existing credentials created using your specified user-assigned managed identities or create new ones. For more information, see [Create an Azure-SSIS IR with Azure AD authentication](./create-azure-ssis-integration-runtime.md).
+
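If you don't yet have a user-assigned managed identity to reference, one can be created with the Azure CLI. This is an illustrative sketch only; the names are placeholders, and the ADF credential that references the identity is still created in Data Factory itself.

```azurecli-interactive
# Illustrative only: create a user-assigned managed identity for the data factory credential to reference.
az identity create --resource-group <your-resource-group> --name <your-user-assigned-identity-name>
```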
+ 1. For **Admin Username**, enter the SQL authentication username for your database server that hosts SSISDB.
- 1. For **Admin Password**, enter the SQL authentication password for your database server to host SSISDB.
+ 1. For **Admin Password**, enter the SQL authentication password for your database server that hosts SSISDB.
1. Select the **Use dual standby Azure-SSIS Integration Runtime pair with SSISDB failover** check box to configure a dual standby Azure SSIS IR pair that works in sync with Azure SQL Database/Managed Instance failover group for business continuity and disaster recovery (BCDR).
On the **Add package store** pane, complete the following steps.
1. For **Database name**, enter `msdb`.
- 1. For **Authentication type**, select **SQL Authentication**, **Managed Identity**, or **Service Principal**.
+ 1. For **Authentication type**, select **SQL Authentication**, **Managed Identity**, **Service Principal**, or **User-Assigned Managed Identity**.
- If you select **SQL Authentication**, enter the relevant **Username** and **Password** or select your **Azure Key Vault** where it's stored as a secret.
- - If you select **Managed Identity**, grant your ADF managed identity access to your Azure SQL Managed Instance.
+ - If you select **Managed Identity**, grant the system managed identity for your ADF access to your Azure SQL Managed Instance.
- If you select **Service Principal**, enter the relevant **Service principal ID** and **Service principal key** or select your **Azure Key Vault** where it's stored as a secret.
+
+ - If you select **User-Assigned Managed Identity**, grant the specified user-assigned managed identity for your ADF access to your Azure SQL Managed Instance. You can then select any existing credentials created using your specified user-assigned managed identities or create new ones.
1. If you select **File system**, for **Host**, enter the UNC path of the folder where your packages are deployed, and enter the relevant **Username** and **Password** or select your **Azure Key Vault** where they're stored as secrets.
data-factory Tutorial Incremental Copy Lastmodified Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-lastmodified-copy-data-tool.md
Previously updated : 07/05/2021 Last updated : 07/12/2021 # Incrementally copy new and changed files based on LastModifiedDate by using the Copy Data tool
Prepare your Blob storage for the tutorial by completing these steps:
2. On the **Properties** page, take the following steps:
- a. Under **Task name**, enter **DeltaCopyFromBlobPipeline**.
+ 1. Under **Task type**, select **Built-in copy task**.
- b. Under **Task cadence or Task schedule**, select **Run regularly on schedule**.
+ 1. Under **Task cadence or task schedule**, select **Tumbling window**.
- c. Under **Trigger type**, select **Tumbling window**.
+ 1. Under **Recurrence**, enter **15 Minute(s)**.
- d. Under **Recurrence**, enter **15 Minute(s)**.
-
- e. Select **Next**.
-
- Data Factory creates a pipeline with the specified task name.
+ 1. Select **Next**.
![Copy data properties page](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/copy-data-tool-properties-page.png) 3. On the **Source data store** page, complete these steps:
- a. Select **Create new connection** to add a connection.
-
- b. Select **Azure Blob Storage** from the gallery, and then select **Continue**:
-
- ![Select Azure Blog Storage](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/source-data-store-page-select-blob.png)
+ 1. Select **+ New connection** to add a connection.
- c. On the **New Linked Service (Azure Blob Storage)** page, select your storage account from the **Storage account name** list. Test the connection and then select **Create**.
+ 1. Select **Azure Blob Storage** from the gallery, and then select **Continue**:
- d. Select the new linked service and then select **Next**:
+ ![Select Azure Blog Storage](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/source-data-store-page-select-blob.png)
- ![Select the new linked service](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/source-data-store-page-select-linkedservice.png)
+ 1. On the **New connection (Azure Blob Storage)** page, select your Azure subscription from the **Azure subscription** list and your storage account from the **Storage account name** list. Test the connection and then select **Create**.
-4. On the **Choose the input file or folder** page, complete the following steps:
+ 1. Select the newly created connection in the **Connection** block.
- a. Browse for and select the **source** folder, and then select **Choose**.
+ 1. In the **File or folder** section, select **Browse** and choose the **source** folder, and then select **OK**.
- ![Choose the input file or folder](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/choose-input-file-folder.png)
+ 1. Under **File loading behavior**, select **Incremental load: LastModifiedDate**, and choose **Binary copy**.
+
+ 1. Select **Next**.
- b. Under **File loading behavior**, select **Incremental load: LastModifiedDate**.
+ :::image type="content" source="./media/tutorial-incremental-copy-lastmodified-copy-data-tool/source-data-store-page.png" alt-text="Screenshot that shows the 'Source data store' page.":::
- c. Select **Binary copy** and then select **Next**:
+4. On the **Destination data store** page, complete these steps:
+ 1. Select the **AzureBlobStorage** connection that you created. This is the same storage account as the source data store.
- ![Choose the input file or folder page](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/check-binary-copy.png)
+ 1. In the **Folder path** section, browse for and select the **destination** folder, and then select **OK**.
-5. On the **Destination data store** page, select the **AzureBlobStorage** service that you created. This is the same storage account as the source data store. Then select **Next**.
+ 1. Select **Next**.
-6. On the **Choose the output file or folder** page, complete the following steps:
+ :::image type="content" source="./media/tutorial-incremental-copy-lastmodified-copy-data-tool/destination-data-store-page.png" alt-text="Screenshot that shows the 'Destination data store' page.":::
- a. Browse for and select the **destination** folder, and then select **Choose**:
+5. On the **Settings** page, under **Task name**, enter **DeltaCopyFromBlobPipeline**, then select **Next**. Data Factory creates a pipeline with the specified task name.
- ![Choose the output file or folder page](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/choose-output-file-folder.png)
+ :::image type="content" source="./media/tutorial-incremental-copy-lastmodified-copy-data-tool/settings-page.png" alt-text="Screenshot that shows the Settings page.":::
- b. Select **Next**.
-
-7. On the **Settings** page, select **Next**.
-
-8. On the **Summary** page, review the settings and then select **Next**.
+6. On the **Summary** page, review the settings and then select **Next**.
![Summary page](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/summary-page.png)
-9. On the **Deployment page**, select **Monitor** to monitor the pipeline (task).
+7. On the **Deployment** page, select **Monitor** to monitor the pipeline (task).
![Deployment page](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/deployment-page.png)
-10. Notice that the **Monitor** tab on the left is automatically selected. The application switches to the **Monitor** tab. You see the status of the pipeline. Select **Refresh** to refresh the list. Select the link under **PIPELINE NAME** to view activity run details or to run the pipeline again.
+8. Notice that the **Monitor** tab on the left is automatically selected, and the application switches to it. You see the status of the pipeline. Select **Refresh** to refresh the list. Select the link under **Pipeline name** to view activity run details or to run the pipeline again.
![Refresh the list and view activity run details](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/monitor-pipeline-runs-1.png)
-11. There's only one activity (the copy activity) in the pipeline, so you see only one entry. For details about the copy operation, select the **Details** link (the eyeglasses icon) in the **ACTIVITY NAME** column. For details about the properties, see [Copy activity overview](copy-activity-overview.md).
+9. There's only one activity (the copy activity) in the pipeline, so you see only one entry. For details about the copy operation, on the **Activity runs** page, select the **Details** link (the eyeglasses icon) in the **Activity name** column. For details about the properties, see [Copy activity overview](copy-activity-overview.md).
![Copy activity in the pipeline](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/monitor-pipeline-runs2.png)
Prepare your Blob storage for the tutorial by completing these steps:
![No files in source container or destination container](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/monitor-pipeline-runs3.png)
-12. Create an empty text file and name it **file1.txt**. Upload this text file to the source container in your storage account. You can use various tools to perform these tasks, like [Azure Storage Explorer](https://storageexplorer.com/).
+10. Create an empty text file and name it **file1.txt**. Upload this text file to the source container in your storage account. You can use various tools to perform these tasks, like [Azure Storage Explorer](https://storageexplorer.com/).
![Create file1.txt and upload it to the source container](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/monitor-pipeline-runs3-1.png)
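If you'd rather use the Azure CLI than Storage Explorer for this step, the following sketch shows one way to do it. The commands are illustrative; they assume the **source** container already exists and that your signed-in account has blob data access (otherwise supply an account key or SAS instead of `--auth-mode login`).

```azurecli-interactive
# Illustrative only: create an empty file and upload it to the existing "source" container.
touch file1.txt
az storage blob upload \
  --account-name <your-storage-account> \
  --container-name source \
  --name file1.txt \
  --file file1.txt \
  --auth-mode login
```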
-13. To go back to the **Pipeline runs** view, select **All pipeline runs**, and wait for the same pipeline to be automatically triggered again.
+11. To go back to the **Pipeline runs** view, select the **All pipeline runs** link in the breadcrumb menu on the **Activity runs** page, and wait for the same pipeline to be automatically triggered again.
-14. When the second pipeline run completes, follow the same steps mentioned previously to review the activity run details.
+12. When the second pipeline run completes, follow the same steps mentioned previously to review the activity run details.
You'll see that one file (file1.txt) has been copied from the source container to the destination container of your Blob storage account: ![file1.txt has been copied from the source container to the destination container](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/monitor-pipeline-runs6.png)
-15. Create another empty text file and name it **file2.txt**. Upload this text file to the source container in your Blob storage account.
+13. Create another empty text file and name it **file2.txt**. Upload this text file to the source container in your Blob storage account.
-16. Repeat steps 13 and 14 for the second text file. You'll see that only the new file (file2.txt) was copied from the source container to the destination container of your storage account during this pipeline run.
+14. Repeat steps 11 and 12 for the second text file. You'll see that only the new file (file2.txt) was copied from the source container to the destination container of your storage account during this pipeline run.
You can also verify that only one file has been copied by using [Azure Storage Explorer](https://storageexplorer.com/) to scan the files: ![Scan files by using Azure Storage Explorer](./media/tutorial-incremental-copy-lastmodified-copy-data-tool/monitor-pipeline-runs8.png) - ## Next steps Go to the following tutorial to learn how to transform data by using an Apache Spark cluster on Azure:
data-factory Tutorial Incremental Copy Partitioned File Name Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md
Previously updated : 07/05/2021 Last updated : 07/15/2021 # Incrementally copy new files based on time partitioned file name by using the Copy Data tool
In this tutorial, you perform the following steps:
Prepare your Blob storage for the tutorial by performing these steps.
-1. Create a container named **source**. Create a folder path as **2020/03/17/03** in your container. Create an empty text file, and name it as **file1.txt**. Upload the file1.txt to the folder path **source/2020/03/17/03** in your storage account. You can use various tools to perform these tasks, such as [Azure Storage Explorer](https://storageexplorer.com/).
+1. Create a container named **source**. Create a folder path as **2021/07/15/06** in your container. Create an empty text file, and name it as **file1.txt**. Upload the file1.txt to the folder path **source/2021/07/15/06** in your storage account. You can use various tools to perform these tasks, such as [Azure Storage Explorer](https://storageexplorer.com/).
![upload files](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/upload-file.png) > [!NOTE]
- > Please adjust the folder name with your UTC time. For example, if the current UTC time is 3:38 AM on March 17, 2020, you can create the folder path as **source/2020/03/17/03/** by the rule of **source/{Year}/{Month}/{Day}/{Hour}/**.
+ > Adjust the folder name based on the current UTC time. For example, if the current UTC time is 6:10 AM on July 15, 2021, create the folder path **source/2021/07/15/06/** following the rule **source/{Year}/{Month}/{Day}/{Hour}/**.
2. Create a container named **destination**. You can use various tools to perform these tasks, such as [Azure Storage Explorer](https://storageexplorer.com/).
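If you prefer the Azure CLI to Storage Explorer, the containers and the time-partitioned blob can be created with commands like the following. This is an illustrative sketch with placeholder values; it assumes your signed-in account has blob data access, and you should adjust `2021/07/15/06` to the current UTC `{Year}/{Month}/{Day}/{Hour}`.

```azurecli-interactive
# Illustrative only: create both containers and upload file1.txt into a UTC time-partitioned path.
az storage container create --name source --account-name <your-storage-account> --auth-mode login
az storage container create --name destination --account-name <your-storage-account> --auth-mode login
az storage blob upload \
  --account-name <your-storage-account> \
  --container-name source \
  --name 2021/07/15/06/file1.txt \
  --file file1.txt \
  --auth-mode login
```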
Prepare your Blob storage for the tutorial by performing these steps.
![Screenshot that shows the ADF home page.](./media/doc-common-process/get-started-page.png) 2. On the **Properties** page, take the following steps:
+ 1. Under **Task type**, choose **Built-in copy task**.
- a. Under **Task name**, enter **DeltaCopyFromBlobPipeline**.
+ 1. Under **Task cadence or task schedule**, select **Tumbling window**.
- b. Under **Task cadence or Task schedule**, select **Run regularly on schedule**.
+ 1. Under **Recurrence**, enter **1 Hour(s)**.
- c. Under **Trigger type**, select **Tumbling Window**.
-
- d. Under **Recurrence**, enter **1 Hour(s)**.
-
- e. Select **Next**.
-
- The Data Factory UI creates a pipeline with the specified task name.
+ 1. Select **Next**.
![Properties page](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/copy-data-tool-properties-page.png) 3. On the **Source data store** page, complete the following steps:
- a. Click **+ Create new connection** to add a connection
+ a. Select **+ New connection** to add a connection.
- b. Select Azure Blob Storage from the gallery, and then select Continue.
+ b. Select **Azure Blob Storage** from the gallery, and then select **Continue**.
- c. On the **New Linked Service (Azure Blob Storage)** page, enter a name for the linked service. Select your Azure subscription, and select your storage account from the **Storage account name** list. Test connection and then select **Create**.
+ c. On the **New connection (Azure Blob Storage)** page, enter a name for the connection. Select your Azure subscription, and select your storage account from the **Storage account name** list. Test connection and then select **Create**.
- ![Source data store page](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/source-data-store-page-linkedservice.png)
+ ![Source data store page](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/source-data-store-page-connection.png)
- d. Select the newly created linked service on the **Source data store** page, and then click **Next**.
+ d. On the **Source data store** page, select the newly created connection in the **Connection** section.
-4. On the **Choose the input file or folder** page, do the following steps:
+ e. In the **File or folder** section, browse and select the **source** container, then select **OK**.
- a. Browse and select the **source** container, then select **Choose**.
+ f. Under **File loading behavior**, select **Incremental load: time-partitioned folder/file names**.
- ![Screenshot shows the Choose input file or folder dialog box.](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/choose-input-file-folder.png)
-
- b. Under **File loading behavior**, select **Incremental load: time-partitioned folder/file names**.
+ g. Enter the dynamic folder path as **source/{year}/{month}/{day}/{hour}/**, and change the format as shown in the following screenshot.
+
+ h. Check **Binary copy** and select **Next**.
- c. Write the dynamic folder path as **source/{year}/{month}/{day}/{hour}/**, and change the format as shown in following screenshot. Check **Binary copy** and click **Next**.
+ :::image type="content" source="./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/source-data-store-page.png" alt-text="Screenshot that shows the configuration of Source data store page.":::
- ![Screenshot shows the Choose input file or folder dialog box with a folder selected.](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/check-binary-copy.png)
-5. On the **Destination data store** page, select the **AzureBlobStorage**, which is the same storage account as data source store, and then click **Next**.
+4. On the **Destination data store** page, complete the following steps:
+ 1. Select the **AzureBlobStorage**, which is the same storage account as data source store.
- ![Destination data store page](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/destination-data-store-page-select-linkedservice.png)
-6. On the **Choose the output file or folder** page, do the following steps:
+ 1. Browse and select the **destination** folder, then select **OK**.
- a. Browse and select the **destination** folder, then click **Choose**.
+ 1. Enter the dynamic folder path as **destination/{year}/{month}/{day}/{hour}/**, and change the format as shown in the following screenshot.
- b. Write the dynamic folder path as **destination/{year}/{month}/{day}/{hour}/**, and change the format as followings:
+ 1. Select **Next**.
- ![Screenshot shows the Choose output file or folder dialog box.](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/output-file-name.png)
+ :::image type="content" source="./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/destination-data-store.png" alt-text="Screenshot that shows the configuration of Destination data store page.":::
- c. Click **Next**.
+5. On the **Settings** page, under **Task name**, enter **DeltaCopyFromBlobPipeline**, and then select **Next**. The Data Factory UI creates a pipeline with the specified task name.
- ![Screenshot shows the Choose output file or folder dialog box with Next selected.](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/click-next-after-output-folder.png)
-7. On the **Settings** page, select **Next**.
+ :::image type="content" source="./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/settings-page.png" alt-text="Screenshot that shows the configuration of settings page.":::
-8. On the **Summary** page, review the settings, and then select **Next**.
+6. On the **Summary** page, review the settings, and then select **Next**.
![Summary page](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/summary-page.png)
-9. On the **Deployment page**, select **Monitor** to monitor the pipeline (task).
+7. On the **Deployment** page, select **Monitor** to monitor the pipeline (task).
![Deployment page](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/deployment-page.png)
-10. Notice that the **Monitor** tab on the left is automatically selected. You need wait for the pipeline run when it is triggered automatically (about after one hour). When it runs, click the pipeline name link **DeltaCopyFromBlobPipeline** to view activity run details or rerun the pipeline. Select **Refresh** to refresh the list.
+8. Notice that the **Monitor** tab on the left is automatically selected. You need to wait for the pipeline run to be triggered automatically (after about one hour). When it runs, select the pipeline name link **DeltaCopyFromBlobPipeline** to view activity run details or rerun the pipeline. Select **Refresh** to refresh the list.
![Screenshot shows the Pipeline runs pane.](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/monitor-pipeline-runs-1.png)
-11. There's only one activity (copy activity) in the pipeline, so you see only one entry. Adjust the column width of the **source** and **destination** columns (if necessary) to display more details, you can see the source file (file1.txt) has been copied from *source/2020/03/17/03/* to *destination/2020/03/17/03/* with the same file name.
+9. There's only one activity (copy activity) in the pipeline, so you see only one entry. Adjust the column width of the **Source** and **Destination** columns (if necessary) to display more details. You can see that the source file (file1.txt) has been copied from *source/2021/07/15/06/* to *destination/2021/07/15/06/* with the same file name.
![Screenshot shows pipeline run details.](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/monitor-pipeline-runs2.png)
Prepare your Blob storage for the tutorial by performing these steps.
![Screenshot shows pipeline run details for the destination.](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/monitor-pipeline-runs3.png)
-12. Create another empty text file with the new name as **file2.txt**. Upload the file2.txt file to the folder path **source/2020/03/17/04** in your storage account. You can use various tools to perform these tasks, such as [Azure Storage Explorer](https://storageexplorer.com/).
+10. Create another empty text file and name it **file2.txt**. Upload the file2.txt file to the folder path **source/2021/07/15/07** in your storage account. You can use various tools to perform these tasks, such as [Azure Storage Explorer](https://storageexplorer.com/).
> [!NOTE]
- > You might be aware that a new folder path is required to be created. Please adjust the folder name with your UTC time. For example, if the current UTC time is 4:20 AM on Mar. 17th, 2020, you can create the folder path as **source/2020/03/17/04/** by the rule of **{Year}/{Month}/{Day}/{Hour}/**.
+ > Note that a new folder path must be created. Adjust the folder name based on the current UTC time. For example, if the current UTC time is 7:30 AM on July 15, 2021, create the folder path **source/2021/07/15/07/** following the rule **{Year}/{Month}/{Day}/{Hour}/**.
-13. To go back to the **Pipeline Runs** view, select **All Pipelines runs**, and wait for the same pipeline being triggered again automatically after another one hour.
+11. To go back to the **Pipeline runs** view, select **All pipeline runs**, and wait for the same pipeline to be triggered again automatically after another hour.
![Screenshot shows the All pipeline runs link to return to that page.](./media/tutorial-incremental-copy-partitioned-file-name-copy-data-tool/monitor-pipeline-runs5.png)
-14. Select the new **DeltaCopyFromBlobPipeline** link for the second pipeline run when it comes, and do the same to review details. You will see the source file (file2.txt) has been copied from **source/2020/03/17/04/** to **destination/2020/03/17/04/** with the same file name. You can also verify the same by using Azure Storage Explorer (https://storageexplorer.com/) to scan the files in **destination** container.
+12. Select the new **DeltaCopyFromBlobPipeline** link for the second pipeline run when it appears, and follow the same steps to review the run details. You'll see that the source file (file2.txt) has been copied from **source/2021/07/15/07/** to **destination/2021/07/15/07/** with the same file name. You can also verify this by using [Azure Storage Explorer](https://storageexplorer.com/) to scan the files in the **destination** container.
## Next steps
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
Title: Set up high availability description: Increase the resiliency of your Defender for IoT deployment by installing an on-premises management console high availability appliance. High availability deployments ensure your managed sensors continuously report to an active on-premises management console. Previously updated : 12/07/2020 Last updated : 07/11/2021 # About high availability
The installation and configuration procedures are performed in four main stages:
1. Install an on-premises management console primary appliance.
-2. Configure the on-premises management console primary appliance. For example, scheduled backup settings, VLAN settings. See the on-premises management console user guide for details. All settings are applied to the secondary appliance automatically after pairing.
+1. Configure the on-premises management console primary appliance, for example, scheduled backup settings and VLAN settings. See the on-premises management console user guide for details. All settings are applied to the secondary appliance automatically after pairing.
-3. Install an on-premises management console secondary appliance. For more information, see [About the Defender for IoT Installation](how-to-install-software.md).
+1. Install an on-premises management console secondary appliance. For more information, see [About the Defender for IoT Installation](how-to-install-software.md).
-4. Pair the primary and secondary on-premises management console appliances as described [here](https://infrascale.secure.force.com/pkb/articles/Support_Article/How-to-access-your-Appliance-Management-Console). The primary on-premises management console must manage at least two sensors in order to carry out the setup.
+1. Pair the primary and secondary on-premises management console appliances as described [here](https://infrascale.secure.force.com/pkb/articles/Support_Article/How-to-access-your-Appliance-Management-Console). The primary on-premises management console must manage at least two sensors in order to carry out the setup.
## High availability requirements
Verify that both the primary and secondary on-premises management console applia
### On the primary
-1. Sign in to the CLI as a Defender for IoT user.
+1. Sign in to the management console.
+
+1. Select **System Settings** from the side menu.
+
+1. Copy the Connection String.
+
+ :::image type="content" source="../media/how-to-set-up-high-availability/connection-string.png" alt-text="Copy the connection string to use in the following command.":::
-2. Run the following command on the primary:
+1. Run the following command on the primary:
-```azurecli-interactive
-sudo cyberx-management-trusted-hosts-add -ip <Secondary IP> -token <primary token>
-```
+ ```bash
+ sudo cyberx-management-trusted-hosts-add -ip <Secondary IP> -token <connection string>
+ ```
->[!NOTE]
->In this document, the principal on-premises management console is referred to as the primary, and the agent is referred to as the secondary.
+ >[!NOTE]
+ > In this document, the principal on-premises management console is referred to as the primary, and the agent is referred to as the secondary.
-3. Enter the IP address of the secondary appliance in the ```<Secondary ip>``` field and select Enter. The IP address is then validated, and the SSL certificate is downloaded to the primary. Entering the IP address also associates the sensors to the secondary appliance.
+1. Enter the IP address of the secondary appliance in the ```<Secondary ip>``` field and select Enter. The IP address is then validated, and the SSL certificate is downloaded to the primary. Entering the IP address also associates the sensors to the secondary appliance.
-4. Run the following command on the primary to verify that the certificate is installed properly:
+1. Run the following command on the primary to verify that the certificate is installed properly:
-```azurecli-interactive
-sudo cyberx-management-trusted-hosts-apply
-```
+ ```bash
+ sudo cyberx-management-trusted-hosts-apply
+ ```
-5. Run the following command on the primary. **Do not run with sudo.**
+1. Run the following command on the primary. **Do not run with sudo.**
-```azurecli-interactive
-cyberx-management-deploy-ssh-key <Secondary IP>
-```
+ ```bash
+ cyberx-management-deploy-ssh-key <Secondary IP>
+ ```
-This allows the connection between the primary and secondary appliances for backup and restoration purposes between them.
+ This allows the connection between the primary and secondary appliances for backup and restoration purposes between them.
-6. Enter the IP address of the secondary and select Enter.
+1. Enter the IP address of the secondary and select Enter.
### On the secondary 1. Sign in to the CLI as a Defender for IoT user.
-2. Run the following command on the secondary. **Do not run with sudo**:
+1. Run the following command on the secondary. **Do not run with sudo**:
-```azurecli-interactive
-cyberx-management-deploy-ssh-key <Primary ip>
-```
+ ```bash
+ cyberx-management-deploy-ssh-key <Primary ip>
+ ```
-This allows the connection between the Primary and Secondary appliances for backup and restore purposes between them.
+ This allows the connection between the Primary and Secondary appliances for backup and restore purposes between them.
-3. Enter the IP address of the primary and press Enter.
+1. Enter the IP address of the primary and press Enter.
### Track high availability activity The core application logs can be exported to the Defender for IoT support team to handle any high availability issues.
-To access the core logs:
+**To access the core logs**:
1. Select **Export** from the **System Settings** window.
To access the core logs:
Perform the high availability update in the following order. Make sure each step is complete before you begin a new step.
-To update with high availability:
+**To update with high availability**:
1. Update the primary on-premises management console.
-2. Update the secondary on-premises management console.
+1. Update the secondary on-premises management console.
-3. Update the sensors.
+1. Update the sensors.
## See also
-[Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)
+[Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-time-series-insights.md
The solution described in this article will allow you to gather and analyze hist
## Prerequisites Before you can set up a relationship with Time Series Insights, you'll need to set up the following resources:
-* An **IoT hub**. For instructions, see the [Create an IoT Hub](../iot-hub/quickstart-send-telemetry-cli.md#create-an-iot-hub) section of the *IoT Hub's Send Telemetry* quickstart.
* An **Azure Digital Twins instance**. For instructions, see [How-to: Set up an Azure Digital Twins instance and authentication](./how-to-set-up-instance-portal.md).
-* A **model and a twin in the Azure Digital Twins instance**. You'll need to update twin's information a few times to see that data tracked in Time Series Insights. For instructions, see the [Add a model and twin](how-to-ingest-iot-hub-data.md#add-a-model-and-twin) section of the *How to: Ingest IoT hub* article.
+* A **model and a twin in the Azure Digital Twins instance**. You'll need to update the twin's information a few times to see that data tracked in Time Series Insights. For instructions, see the [Add a model and twin](how-to-ingest-iot-hub-data.md#add-a-model-and-twin) section of the *Ingest telemetry from IoT Hub* article.
> [!TIP] > In this article, the changing digital twin values that are viewed in Time Series Insights are updated manually for simplicity. However, if you want to complete this article with live simulated data, you can set up an Azure function that updates digital twins based on IoT telemetry events from a simulated device. For instructions, follow [How to: Ingest IoT Hub data](how-to-ingest-iot-hub-data.md), including the final steps to run the device simulator and validate that the data flow works.
Also, take note of the following values to use them later to create a Time Serie
In this section, you'll create an Azure function that will convert twin update events from their original form as JSON Patch documents to JSON objects, containing only updated and added values from your twins.
-### Step 1: Create function app
+1. First, create a new function app project in Visual Studio. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project).
-First, create a new function app project in Visual Studio. For instructions on how to do this, see the [Create a function app in Visual Studio](how-to-create-azure-function.md#create-a-function-app-in-visual-studio) section of the *How-to: Set up a function for processing data* article.
+2. Create a new Azure function called *ProcessDTUpdatetoTSI.cs* to send device telemetry events to Time Series Insights. The function type will be **Event Hub trigger**.
-### Step 2: Add a new function
+ :::image type="content" source="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png" alt-text="Screenshot of Visual Studio to create a new Azure function of type event hub trigger.":::
-Create a new Azure function called *ProcessDTUpdatetoTSI.cs* to update device telemetry events to the Time Series Insights. The function type will be **Event Hub trigger**.
+3. Add the following packages to your project:
+ * [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs/)
+ * [Microsoft.Azure.WebJobs.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs/)
+ * [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/)
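If you prefer adding the packages from a terminal instead of the Visual Studio NuGet UI, the equivalent dotnet CLI commands are sketched below. Run them from the project folder, and pin specific versions if your project requires them.

```dotnetcli
dotnet add package Microsoft.Azure.WebJobs
dotnet add package Microsoft.Azure.WebJobs.Extensions.EventHubs
dotnet add package Microsoft.NET.Sdk.Functions
```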
+4. Replace the code in the *ProcessDTUpdatetoTSI.cs* file with the following code:
-### Step 3: Fill in function code
+ :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/updateTSI.cs":::
-Add the following packages to your project:
-* [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs/)
-* [Microsoft.Azure.WebJobs.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs/)
-* [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/)
+ Save your function code.
-Replace the code in the *ProcessDTUpdatetoTSI.cs* file with the following code:
--
-Save your function code.
-
-### Step 4: Publish the function app to Azure
-
-Publish the project with the *ProcessDTUpdatetoTSI.cs* function to a function app in Azure.
-
-For instructions on how to do this, see the section [Publish the function app to Azure](how-to-create-azure-function.md#publish-the-function-app-to-azure) of the *How-to: Set up a function for processing data* article.
+5. Publish the project with the *ProcessDTUpdatetoTSI.cs* function to a function app in Azure. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
Save the function app name to use later to configure app settings for the two event hubs.
-### Step 5: Security access for the function app
+### Configure the function app
-Next, **assign an access role** for the function and **configure the application settings** so that it can access your Azure Digital Twins instance. For instructions on how to do this, see the section [Set up security access for the function app](how-to-create-azure-function.md#set-up-security-access-for-the-function-app) of the *How-to: Set up a function for processing data* article.
+Next, **assign an access role** for the function and **configure the application settings** so that it can access your resources.
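For the access role, a command like the following can grant the function app's system-assigned identity data access to your Azure Digital Twins instance. This is an illustrative sketch; it assumes the identity is already enabled on the function app and that the Azure CLI `azure-iot` extension is installed, and the principal ID is a placeholder.

```azurecli-interactive
# Illustrative only: grant the function app's identity data access to the Azure Digital Twins instance.
az dt role-assignment create \
  --dt-name <your-Azure-Digital-Twins-instance-name> \
  --assignee "<function-app-principal-id>" \
  --role "Azure Digital Twins Data Owner"
```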
-### Step 6: Configure app settings for the two event hubs
-Next, you'll add environment variables in the function app's settings that allow it to access the twins hub and time series hub.
+Next, add environment variables in the function app's settings that allow it to access the **twins hub** and **time series hub**.
-Use the twins hub **primaryConnectionString** value that you saved earlier to create an app setting in your function app that contains the twins hub connection string:
+Use the **twins hub primaryConnectionString** value that you saved earlier to create an app setting in your function app that contains the twins hub connection string:
```azurecli-interactive az functionapp config appsettings set --settings "EventHubAppSetting-Twins=<your-twins-hub-primaryConnectionString>" --resource-group <your-resource-group> --name <your-App-Service-function-app-name> ```
-Use the time series hub **primaryConnectionString** value that you saved earlier to create an app setting in your function app that contains the time series hub connection string:
+Use the **time series hub primaryConnectionString** value that you saved earlier to create an app setting in your function app that contains the time series hub connection string:
```azurecli-interactive az functionapp config appsettings set --settings "EventHubAppSetting-TSI=<your-time-series-hub-primaryConnectionString>" --resource-group <your-resource-group> --name <your-App-Service-function-app-name>
az functionapp config appsettings set --settings "EventHubAppSetting-TSI=<your-t
In this section, you'll set up Time Series Insights instance to receive data from your time series hub. For more details about this process, see [Tutorial: Set up an Azure Time Series Insights Gen2 PAYG environment](../time-series-insights/tutorial-set-up-environment.md). Follow the steps below to create a time series insights environment.
-1. In the [Azure portal](https://portal.azure.com), search for *Time Series Insights environments*, and select the **Add** button. Choose the following options to create the time series environment.
+1. In the [Azure portal](https://portal.azure.com), search for *Time Series Insights environments*, and select the **Create** button. Choose the following options to create the time series environment.
* **Subscription** - Choose your subscription. - **Resource group** - Choose your resource group.
In this section, you'll set up Time Series Insights instance to receive data fro
* **Subscription** - Choose your Azure subscription. * **Event Hub namespace** - Choose the namespace that you created earlier in this article. * **Event Hub name** - Choose the **time series hub** name that you created earlier in this article.
- * **Event Hub access policy name** - Choose the *time series hub auth rule* that you created earlier in this article.
+ * **Event Hub access policy name** - Choose the **time series hub auth rule** that you created earlier in this article.
* **Event Hub consumer group** - Select *New* and specify a name for your event hub consumer group. Then, select *Add*. * **Property name** - Leave this field blank.
In this section, you'll set up Time Series Insights instance to receive data fro
To begin sending data to Time Series Insights, you'll need to start updating the digital twin properties in Azure Digital Twins with changing data values.
-Use the following CLI command to update the *Temperature* property on the thermostat67 twin that you added to your instance in the [Prerequisites section](#prerequisites).
+Use the [az dt twin update](/cli/azure/dt/twin?view=azure-cli-latest&preserve-view=true#az_dt_twin_update) CLI command to update a property on the twin you added in the [Prerequisites](#prerequisites) section. If you used the twin creation instructions from [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md), you can use the following command in the local CLI or the Cloud Shell **bash** terminal to update the temperature property on the thermostat67 twin.
```azurecli-interactive az dt twin update --dt-name <your-Azure-Digital-Twins-instance-name> --twin-id thermostat67 --json-patch '{"op":"replace", "path":"/Temperature", "value": 20.5}' ```
-**Repeat the command at least 4 more times with different temperature values**, to create several data points that can be observed later in the Time Series Insights environment.
+**Repeat the command at least 4 more times with different property values**, to create several data points that can be observed later in the Time Series Insights environment.
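If you're working in a **bash** terminal, a small loop like the following sketch can push several different values in one go. The values are illustrative, and the command assumes the thermostat67 twin and Temperature property from the prerequisites.

```azurecli-interactive
# Illustrative only: push a few different Temperature values to create several data points.
for temp in 21.5 23.0 25.5 22.0; do
  az dt twin update --dt-name <your-Azure-Digital-Twins-instance-name> --twin-id thermostat67 --json-patch '{"op":"replace", "path":"/Temperature", "value": '"$temp"'}'
done
```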
> [!TIP] > If you want to complete this article with live simulated data instead of manually updating the digital twin values, first make sure you've completed the TIP from the [Prerequisites](#prerequisites) section to set up an Azure function that updates twins from a simulated device.
Now, data should be flowing into your Time Series Insights instance, ready to be
:::image type="content" source="media/how-to-integrate-time-series-insights/view-environment.png" alt-text="Screenshot of the Azure portal showing the Time Series Insights explorer URL in the overview tab of the Time Series Insights environment." lightbox="media/how-to-integrate-time-series-insights/view-environment.png":::
-2. In the explorer, you will see the twins in the Azure Digital Twins instance shown on the left. Select the thermostat67 twin, choose the property *Temperature*, and select **Add**.
+2. In the explorer, you will see the twins in the Azure Digital Twins instance shown on the left. Select the twin you've edited properties for, choose the property you've changed, and select **Add**.
:::image type="content" source="media/how-to-integrate-time-series-insights/add-data.png" alt-text="Screenshot of the Time Series Insights explorer with the steps to select thermostat67, select the property temperature, and select add highlighted." lightbox="media/how-to-integrate-time-series-insights/add-data.png":::
-3. You should now see the initial temperature readings from your thermostat, as shown below.
+3. You should now see the property changes you made reflected in the graph, as shown below.
:::image type="content" source="media/how-to-integrate-time-series-insights/initial-data.png" alt-text="Screenshot of the Time Series Insights explorer with the initial temperature data, showing a line of random values between 68 and 85." lightbox="media/how-to-integrate-time-series-insights/initial-data.png":::
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-features.md
Azure Firewall Premium uses Firewall Policy, a global resource that can be used to centrally manage your firewalls using Azure Firewall Manager. Starting with this release, all new features are configurable via Firewall Policy only. Firewall Rules (classic) continue to be supported and can be used to configure existing Standard Firewall features. Firewall Policy can be managed independently or with Azure Firewall Manager. A firewall policy associated with a single firewall has no charge.
-> [!IMPORTANT]
-> Currently the Firewall Premium SKU is not supported in Secure Hub deployments and forced tunnel configurations.
- Azure Firewall Premium includes the following features: - **TLS inspection** - decrypts outbound traffic, processes the data, then encrypts the data and sends it to the destination.
governance Query Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/query-language.md
Title: Understand the query language description: Describes Resource Graph tables and the available Kusto data types, operators, and functions usable with Azure Resource Graph. Previously updated : 06/29/2021 Last updated : 07/20/2021 # Understanding the Azure Resource Graph query language
properties from related resource types. Here is the list of tables available in
|MaintenanceResources |Partial, join _to_ only. (preview) |Includes resources _related_ to `Microsoft.Maintenance`. |
|PatchAssessmentResources|No |Includes resources _related_ to Azure Virtual Machines patch assessment. |
|PatchInstallationResources|No |Includes resources _related_ to Azure Virtual Machines patch installation. |
-|PolicyResources |No |Includes resources _related_ to `Microsoft.PolicyInsights`. (**Preview**) |
+|PolicyResources |Yes |Includes resources _related_ to `Microsoft.PolicyInsights`. |
|RecoveryServicesResources |Partial, join _to_ only. (preview) |Includes resources _related_ to `Microsoft.DataProtection` and `Microsoft.RecoveryServices`. |
|SecurityResources |Yes (preview) |Includes resources _related_ to `Microsoft.Security`. |
|ServiceHealthResources |No (preview) |Includes resources _related_ to `Microsoft.ResourceHealth`. |
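As an example of querying the PolicyResources table, the following sketch (using the Resource Graph CLI extension) summarizes policy states by compliance state. The query shape is illustrative; adjust it to the properties you need.

```azurecli-interactive
# Illustrative only: count policy states by compliance state from the PolicyResources table.
az graph query -q "PolicyResources | where type == 'microsoft.policyinsights/policystates' | summarize count() by tostring(properties.complianceState)"
```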
governance First Query Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-azurecli.md
Docker image](https://hub.docker.com/_/microsoft-azure-cli), or locally installe
## Run your first Resource Graph query With the Azure CLI extension added to your environment of choice, it's time to try out a simple
-Resource Graph query. The query will return the first five Azure resources with the **Name** and
-**Resource Type** of each resource.
+tenant-based Resource Graph query. The query returns the first five Azure resources with the
+**Name** and **Resource Type** of each resource. To query by
+[management group](../management-groups/overview.md) or subscription, use the `--managementgroups`
+or `--subscriptions` arguments.
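For example, a subscription-scoped version of the query used in the next step would look like the following sketch (the subscription ID is a placeholder):

```azurecli-interactive
# Illustrative only: the same query limited to a single subscription.
az graph query -q "Resources | project name, type | limit 5" --subscriptions <your-subscription-id>
```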
1. Run your first Azure Resource Graph query using the `graph` extension and `query` command:
governance First Query Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-dotnet.md
Title: "Quickstart: Your first .NET Core query" description: In this quickstart, you follow the steps to enable the Resource Graph NuGet packages for .NET Core and run your first query. Previously updated : 05/01/2021 Last updated : 07/09/2021
required packages.
string strTenant = args[0]; string strClientId = args[1]; string strClientSecret = args[2];
- string strSubscriptionId = args[3];
- string strQuery = args[4];
+ string strQuery = args[3];
AuthenticationContext authContext = new AuthenticationContext("https://login.microsoftonline.com/" + strTenant); AuthenticationResult authResult = await authContext.AcquireTokenAsync("https://management.core.windows.net", new ClientCredential(strClientId, strClientSecret));
required packages.
ResourceGraphClient argClient = new ResourceGraphClient(serviceClientCreds); QueryRequest request = new QueryRequest();
- request.Subscriptions = new List<string>(){ strSubscriptionId };
request.Query = strQuery; QueryResponse response = argClient.Resources(request);
required packages.
} ```
+ > [!NOTE]
+ > This code creates a tenant-based query. To limit the query to a
+ > [management group](../management-groups/overview.md) or subscription, set the
+ > `ManagementGroups` or `Subscriptions` property on the `QueryRequest` object.
+ 1. Build and publish the `argQuery` console application: ```dotnetcli
required packages.
## Run your first Resource Graph query
-With the .NET Core console application built and published, it's time to try out a simple Resource
-Graph query. The query returns the first five Azure resources with the **Name** and **Resource
-Type** of each resource.
+With the .NET Core console application built and published, it's time to try out a simple
+tenant-based Resource Graph query. The query returns the first five Azure resources with the
+**Name** and **Resource Type** of each resource.
In each call to `argQuery`, there are variables that are used that you need to replace with your own values:
values:
- `{tenantId}` - Replace with your tenant ID - `{clientId}` - Replace with the client ID of your service principal - `{clientSecret}` - Replace with the client secret of your service principal-- `{subscriptionId}` - Replace with your subscription ID 1. Change directories to the `{run-folder}` you defined with the previous `dotnet publish` command. 1. Run your first Azure Resource Graph query using the compiled .NET Core console application: ```bash
- argQuery "{tenantId}" "{clientId}" "{clientSecret}" "{subscriptionId}" "Resources | project name, type | limit 5"
+ argQuery "{tenantId}" "{clientId}" "{clientSecret}" "Resources | project name, type | limit 5"
``` > [!NOTE]
values:
property: ```bash
- argQuery "{tenantId}" "{clientId}" "{clientSecret}" "{subscriptionId}" "Resources | project name, type | limit 5 | order by name asc"
+ argQuery "{tenantId}" "{clientId}" "{clientSecret}" "Resources | project name, type | limit 5 | order by name asc"
``` > [!NOTE]
values:
**Name** property and then `limit` to the top five results: ```bash
- argQuery "{tenantId}" "{clientId}" "{clientSecret}" "{subscriptionId}" "Resources | project name, type | order by name asc | limit 5"
+ argQuery "{tenantId}" "{clientId}" "{clientSecret}" "Resources | project name, type | order by name asc | limit 5"
``` When the final query is run several times, assuming that nothing in your environment is changing,
governance First Query Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-go.md
Title: "Quickstart: Your first Go query" description: In this quickstart, you follow the steps to enable the Resource Graph package for Go and run your first query. Previously updated : 05/01/2021 Last updated : 07/09/2021 # Quickstart: Run your first Resource Graph query using Go
Go can be used, including [bash on Windows 10](/windows/wsl/install-win10) or lo
```bash # Add the Resource Graph package for Go
- go get -u github.com/Azure/azure-sdk-for-go/services/resourcegraph/mgmt/2019-04-01/resourcegraph
+ go get -u github.com/Azure/azure-sdk-for-go/services/resourcegraph/mgmt/2021-03-01/resourcegraph
# Add the Azure auth package for Go go get -u github.com/Azure/go-autorest/autorest/azure/auth
Type** of each resource.
"os" "context" "strconv"
- arg "github.com/Azure/azure-sdk-for-go/services/resourcegraph/mgmt/2019-04-01/resourcegraph"
+ arg "github.com/Azure/azure-sdk-for-go/services/resourcegraph/mgmt/2021-03-01/resourcegraph"
"github.com/Azure/go-autorest/autorest/azure/auth" )
governance First Query Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-java.md
Title: "Quickstart: Your first Java query" description: In this quickstart, you follow the steps to enable the Resource Graph Maven packages for Java and run your first query. Previously updated : 03/30/2021 Last updated : 07/09/2021
install the required Maven packages.
<dependency> <groupId>com.azure.resourcemanager</groupId> <artifactId>azure-resourcemanager-resourcegraph</artifactId>
- <version>1.0.0-beta.1</version>
+ <version>1.0.0</version>
</dependency> ```
governance First Query Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-javascript.md
Title: 'Quickstart: Your first JavaScript query' description: In this quickstart, you follow the steps to enable the Resource Graph library for JavaScript and run your first query. Previously updated : 05/01/2021 Last updated : 07/09/2021 - devx-track-js
works wherever JavaScript can be used, including [bash on Windows 10](/windows/w
const authenticator = require("@azure/ms-rest-nodeauth"); const resourceGraph = require("@azure/arm-resourcegraph");
- if (argv.query && argv.subs) {
- const subscriptionList = argv.subs.split(",");
-
+ if (argv.query) {
const query = async () => { const credentials = await authenticator.interactiveLogin(); const client = new resourceGraph.ResourceGraphClient(credentials); const result = await client.resources( {
- query: argv.query,
- subscriptions: subscriptionList,
+ query: argv.query
}, { resultFormat: "table" } );
works wherever JavaScript can be used, including [bash on Windows 10](/windows/w
} ```
+ > [!NOTE]
+ > This code creates a tenant-based query. To limit the query to a
+ > [management group](../management-groups/overview.md) or subscription, define and add a
+ > [queryrequest](/javascript/api/@azure/arm-resourcegraph/queryrequest) to the `client.resources`
+ > call and specify either `managementGroups` or `subscriptions`.
+ 1. Enter the following command in the terminal: ```bash
- node index.js --query "Resources | project name, type | limit 5" --subs <YOUR_SUBSCRIPTION_ID_LIST>
+ node index.js --query "Resources | project name, type | limit 5"
```
- Make sure to replace the `<YOUR_SUBSCRIPTION_ID_LIST>` placeholder with your comma-separated list
- of Azure subscription IDs.
- > [!NOTE] > As this query example doesn't provide a sort modifier such as `order by`, running this query > multiple times is likely to yield a different set of resources per request. 1. Change the first parameter to `index.js` and change the query to `order by` the **Name**
- property. Replace `<YOUR_SUBSCRIPTION_ID_LIST>` with your subscription ID:
+ property.
```bash
- node index.js --query "Resources | project name, type | limit 5 | order by name asc" --subs "<YOUR_SUBSCRIPTION_ID_LIST>"
+ node index.js --query "Resources | project name, type | limit 5 | order by name asc"
``` As the script attempts to authenticate, a message similar to the following message is displayed
works wherever JavaScript can be used, including [bash on Windows 10](/windows/w
> then orders them. 1. Change the first parameter to `index.js` and change the query to first `order by` the **Name**
- property and then `limit` to the top five results. Replace `<YOUR_SUBSCRIPTION_ID_LIST>` with
- your subscription ID:
+ property and then `limit` to the top five results.
```bash
- node index.js --query "Resources | project name, type | order by name asc | limit 5" --subs "<YOUR_SUBSCRIPTION_ID_LIST>"
+ node index.js --query "Resources | project name, type | order by name asc | limit 5"
``` When the final query is run several times, assuming that nothing in your environment is changing,
governance First Query Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-powershell.md
Title: 'Quickstart: Your first PowerShell query' description: In this quickstart, you follow the steps to enable the Resource Graph module for Azure PowerShell and run your first query. Previously updated : 05/11/2021 Last updated : 07/09/2021 - mode-api
The Resource Graph module for PowerShell is **Az.ResourceGraph**.
Install-Module -Name Az.ResourceGraph ```
-1. Validate that the module has been imported and is at least version `0.10.0`:
+1. Validate that the module has been imported and is at least version `0.11.0`:
```azurepowershell-interactive # Get a list of commands for the imported Az.ResourceGraph module
The Resource Graph module for PowerShell is **Az.ResourceGraph**.
## Run your first Resource Graph query With the Azure PowerShell module added to your environment of choice, it's time to try out a simple
-Resource Graph query. The query returns the first five Azure resources with the **Name** and
-**Resource Type** of each resource.
+tenant-based Resource Graph query. The query returns the first five Azure resources with the
+**Name** and **Resource Type** of each resource. To query by
+[management group](../management-groups/overview.md) or subscription, use the `-ManagementGroup`
+or `-Subscription` parameters.
1. Run your first Azure Resource Graph query using the `Search-AzGraph` cmdlet:
Resource Graph query. The query returns the first five Azure resources with the
# Login first with Connect-AzAccount if not using Cloud Shell # Run Azure Resource Graph query
- (Search-AzGraph -Query 'Resources | project name, type | limit 5').Data
+ Search-AzGraph -Query 'Resources | project name, type | limit 5'
``` > [!NOTE]
Resource Graph query. The query returns the first five Azure resources with the
```azurepowershell-interactive # Run Azure Resource Graph query with 'order by'
- (Search-AzGraph -Query 'Resources | project name, type | limit 5 | order by name asc').Data
+ Search-AzGraph -Query 'Resources | project name, type | limit 5 | order by name asc'
``` > [!NOTE]
Resource Graph query. The query returns the first five Azure resources with the
```azurepowershell-interactive # Run Azure Resource Graph query with `order by` first, then with `limit`
- (Search-AzGraph -Query 'Resources | project name, type | order by name asc | limit 5').Data
+ Search-AzGraph -Query 'Resources | project name, type | order by name asc | limit 5'
``` When the final query is run several times, assuming that nothing in your environment is changing,
governance First Query Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-python.md
Title: 'Quickstart: Your first Python query' description: In this quickstart, you follow the steps to enable the Resource Graph library for Python and run your first query. Previously updated : 05/01/2021 Last updated : 07/09/2021 - devx-track-python
installed.
## Run your first Resource Graph query With the Python libraries added to your environment of choice, it's time to try out a simple
-Resource Graph query. The query returns the first five Azure resources with the **Name** and
-**Resource Type** of each resource.
+subscription-based Resource Graph query. The query returns the first five Azure resources with the
+**Name** and **Resource Type** of each resource. To query by
+[management group](../management-groups/overview.md), use the `management_groups` parameter with
+`QueryRequest`.
1. Run your first Azure Resource Graph query using the installed libraries and the `resources` method:
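   For reference, here's a hedged sketch of what such a call can look like. The credential type, package version, and placeholder IDs are assumptions, not the article's exact sample:

   ```python
   # Hedged sketch: run a subscription-scoped Resource Graph query with the
   # Python SDK. Assumes azure-identity and a track 2 azure-mgmt-resourcegraph
   # package are installed and that you're signed in with the Azure CLI.
   from azure.identity import AzureCliCredential
   from azure.mgmt.resourcegraph import ResourceGraphClient
   from azure.mgmt.resourcegraph.models import QueryRequest

   credential = AzureCliCredential()
   client = ResourceGraphClient(credential)

   request = QueryRequest(
       query="Resources | project name, type | limit 5",
       subscriptions=["<subscription-id>"],
       # management_groups=["<management-group-id>"],  # scope by management group instead
   )

   response = client.resources(request)
   for row in response.data:
       print(row)
   ```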
governance First Query Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-rest-api.md
Title: "Quickstart: Your first REST API query" description: In this quickstart, you follow the steps to call the Resource Graph endpoint for REST API and run your first query. Previously updated : 05/01/2021 Last updated : 07/09/2021 # Quickstart: Run your first Resource Graph query using REST API
parameter of `Invoke-RestMethod`.
## Run your first Resource Graph query With the REST API tools added to your environment of choice, it's time to try out a simple
-Resource Graph query. The query returns the first five Azure resources with the **Name** and
-**Resource Type** of each resource.
+subscription-based Resource Graph query. The query returns the first five Azure resources with the
+**Name** and **Resource Type** of each resource. To query by
+[management group](../management-groups/overview.md), use `managementgroups` instead of
+`subscriptions`. To query the entire tenant, omit both the `managementgroups` and `subscriptions`
+properties from the request body.
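As a hedged illustration of these scoping options, the following sketch calls the endpoint with the Python `requests` library; the bearer token, subscription ID, and query are placeholders you supply yourself:

```python
# Hedged sketch: POST a Resource Graph query to the REST endpoint.
# Replace <access-token> with a valid Azure Resource Manager bearer token.
import requests

url = ("https://management.azure.com/providers/Microsoft.ResourceGraph/"
       "resources?api-version=2021-03-01")
headers = {
    "Authorization": "Bearer <access-token>",
    "Content-Type": "application/json",
}

# Subscription-scoped request body. Use "managementGroups" instead to scope
# by management group, or omit both properties to query the entire tenant.
body = {
    "subscriptions": ["<subscription-id>"],
    "query": "Resources | project name, type | limit 5",
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json()["data"])
```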
In the request body of each REST API call, replace the following variable with your own value:
with your own value:
- REST API URI ```http
- POST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01
+ POST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01
``` - Request Body
with your own value:
- REST API URI ```http
- POST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01
+ POST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01
``` - Request Body
with your own value:
- REST API URI ```http
- POST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2019-04-01
+ POST https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01
``` - Request Body
governance First Query Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-ruby.md
Title: "Quickstart: Your first Ruby query" description: In this quickstart, you follow the steps to enable the Resource Graph gem for Ruby and run your first query. Previously updated : 05/01/2021 Last updated : 07/09/2021 # Quickstart: Run your first Resource Graph query using Ruby
industrial-iot Industrial Iot Platform Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/industrial-iot-platform-versions.md
We are pleased to announce the declaration of Long-Term Support (LTS) for versio
|Version |Type |Date |Highlights | |-|--|-|| |2.5.4 |Stable |March 2020 |IoT Hub Direct Method Interface, control from cloud without any additional microservices (standalone mode), OPC UA Server Interface, uses OPC Foundation's OPC stack - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.5.4)|
-|2.7.206 |Stable |January 2021 |Configuration through REST API (orchestrated mode), supports Samples telemetry format as well as PubSub - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.7.206)|
-|2.8.0 |Long-term support (LTS)|July 2021 |IoT Edge update to 1.1 LTS, OPC stack logging and tracing for better OPC Publisher diagnostics, Security fixes|
+|[2.7.206](https://github.com/Azure/Industrial-IoT/tree/release/2.7.206) |Stable |January 2021 |Configuration through REST API (orchestrated mode), supports Samples telemetry format as well as PubSub - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.7.206)|
+|[2.8.0](https://github.com/Azure/Industrial-IoT/tree/release/2.8) |Long-term support (LTS)|July 2021 |IoT Edge update to 1.1 LTS, OPC stack logging and tracing for better OPC Publisher diagnostics, Security fixes|
## Next steps
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-architecture.md
IoT Central classifies IoT Edge device types as follows:
![IoT Central with IoT Edge Overview](./media/concepts-architecture/gatewayedge.png)
+> [!NOTE]
+> IoT Central currently doesn't support connecting an IoT Edge device as a downstream device to an IoT Edge gateway. This is because all devices that connect to IoT Central are provisioned using the Device Provisioning Service (DPS) and DPS doesn't support nested IoT Edge scenarios.
+ ### IoT Edge patterns IoT Central supports the following IoT Edge device patterns:
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-iot-edge.md
There are two gateway patterns:
* In the *transparent gateway* pattern, the IoT Edge hub module behaves like IoT Central and handles connections from devices registered in IoT Central. Messages pass from downstream devices to IoT Central as if there's no gateway between them.
+ > [!NOTE]
+ > IoT Central currently doesn't support connecting an IoT Edge device as a downstream device to an IoT Edge transparent gateway. This is because all devices that connect to IoT Central are provisioned using the Device Provisioning Service (DPS) and DPS doesn't support nested IoT Edge scenarios.
+ * In the *translation gateway* pattern, devices that can't connect to IoT Central on their own, connect to a custom IoT Edge module instead. The module in the IoT Edge device processes incoming downstream device messages and then forwards them to IoT Central. The transparent and translation gateway patterns aren't mutually exclusive. A single IoT Edge device can function as both a transparent gateway and a translation gateway.
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-authorize-rest-api.md
This article describes the types of token you can use in the authorization heade
## Token types
+Use a user bearer token when you're running automation, tests, or API calls yourself. Use a service principal (SPN) bearer token when you're automating or scripting your development environment, for example in DevOps pipelines. An API token can be used in both cases, but it carries the risk of expiry and leaks, so we recommend using a bearer token whenever possible.
+ To access an IoT Central application using the REST API, you can use an: -- _Azure Active Directory bearer token_. A bearer token is associated with an Azure Active Directory user account. The token grants the caller the same permissions the user has in the IoT Central application.
+- _Azure Active Directory bearer token_. A bearer token is associated with an Azure Active Directory user account or service principal. The token grants the caller the same permissions the user or service principal has in the IoT Central application.
- IoT Central API token. An API token is associated with a role in your IoT Central application. To learn more about users and roles in IoT Central, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md).
The JSON output from the previous command looks like the following example:
The bearer token is valid for approximately one hour, after which you need to create a new one.
+To get a bearer token for a service principal, see [Service principal authentication](/rest/api/iotcentral/authentication#service-principal-authentication).
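For example, here's a hedged sketch of acquiring such a token with the `azure-identity` Python library; the IoT Central scope string and placeholder values are assumptions to verify against the linked reference:

```python
# Hedged sketch: get a bearer token for a service principal.
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<service-principal-app-id>",
    client_secret="<service-principal-secret>",
)

# Assumed IoT Central data-plane scope; confirm it in the authentication docs.
token = credential.get_token("https://apps.azureiotcentral.com/.default")
print(token.token)  # Send as: Authorization: Bearer <token>
```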
+ ## Get an API token To get an API token, you can use the IoT Central UI or a REST API call.
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
The response to this request looks like the following example. The role value id
} ```
+You can also add a service principal user, which is useful if you need to use service principal authentication for REST API calls. To learn more, see [Add or update a service principal user](/rest/api/iotcentral/1.0/users/create#add-or-update-a-service-principal-user).
+ ### Change the role of a user Use the following request to change the role assigned to user. This example uses the ID of the builder role you retrieved previously:
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/troubleshoot-connection.md
Use the following commands to sign in the subscription where you have your IoT C
```azurecli az login
-az set account --subscription <your-subscription-id>
+az account set --subscription <your-subscription-id>
``` To monitor the telemetry your device is sending, use the following command:
iot-develop Howto Convert To Pnp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/howto-convert-to-pnp.md
If your device uses DPS to connect, include the model ID in the payload you send
} ```
-To learn more, see [Runtime Registration - Register Device](/rest/api/iot-dps/runtimeregistration/registerdevice).
+To learn more, see [Runtime Registration - Register Device](/rest/api/iot-dps/device/runtime-registration/register-device).
If your device uses DPS to connect or connects directly with a connection string, include the model ID when your code connects to IoT Hub. For example:
iot-dps Iot Dps Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/iot-dps-customer-data-requests.md
For more information, see [How to manage device enrollments](how-to-manage-enrol
It is also possible to perform delete operations for enrollments and registration records using REST APIs:
-* To delete enrollment information for a single device, you can use [Device Enrollment - Delete](/rest/api/iot-dps/deleteindividualenrollment/deleteindividualenrollment).
-* To delete enrollment information for a group of devices, you can use [Device Enrollment Group - Delete](/rest/api/iot-dps/deleteenrollmentgroup/deleteenrollmentgroup).
-* To delete information about devices that have been provisioned, you can use [Registration State - Delete Registration State](/rest/api/iot-dps/deletedeviceregistrationstate/deletedeviceregistrationstate).
+* To delete enrollment information for a single device, you can use [Device Enrollment - Delete](/rest/api/iot-dps/service/individual-enrollment/delete).
+* To delete enrollment information for a group of devices, you can use [Device Enrollment Group - Delete](/rest/api/iot-dps/service/enrollment-group/delete).
+* To delete information about devices that have been provisioned, you can use [Registration State - Delete Registration State](/rest/api/iot-dps/service/device-registration-state/delete).
## Exporting customer data
For more information on how to manage enrollments, see [How to manage device enr
It is also possible to perform export operations for enrollments and registration records using REST APIs:
-* To export enrollment information for a single device, you can use [Device Enrollment - Get](/rest/api/iot-dps/getindividualenrollment/getindividualenrollment).
-* To export enrollment information for a group of devices, you can use [Device Enrollment Group - Get](/rest/api/iot-dps/getenrollmentgroup/getenrollmentgroup).
-* To export information about devices that have already been provisioned, you can use [Registration State - Get Registration State](/rest/api/iot-dps/getdeviceregistrationstate/getdeviceregistrationstate).
+* To export enrollment information for a single device, you can use [Device Enrollment - Get](/rest/api/iot-dps/service/individual-enrollment/get).
+* To export enrollment information for a group of devices, you can use [Device Enrollment Group - Get](/rest/api/iot-dps/service/enrollment-group/get).
+* To export information about devices that have already been provisioned, you can use [Registration State - Get Registration State](/rest/api/iot-dps/service/device-registration-state/get).
> [!NOTE] > When you use Microsoft's enterprise services, Microsoft generates some information, known as system-generated logs. Some Device Provisioning Service system-generated logs are not accessible or exportable by tenant administrators. These logs constitute factual actions conducted within the service and diagnostic data related to individual devices.
iot-hub Iot Hub Csharp Csharp File Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-csharp-csharp-file-upload.md
These files are typically batch processed in the cloud using tools such as [Azur
## Examine the application
-Navigate to the *FileUploadSample* folder in your .NET samples download. Open the folder in Visual Studio Code. The folder contains a file named *parameters.cs*. If you open that file, you'll see that the parameter *p* is required and contains the device connection string. You copied and saved this connection string when you registered the device. The parameter *t* can be specified if you want to change the transport protocol. The default protocol is mqtt. The file *program.cs* contains the *main* function. The *FileUploadSample.cs* file contains the primary sample logic. *TestPayload.txt* is the file to be uploaded to your blob container.
+In Visual Studio Code, open the *azure-iot-samples-csharp-master\iot-hub\Samples\device\FileUploadSample* folder in your .NET samples download. The folder contains a file named *parameters.cs*. If you open that file, you'll see that the parameter *p* is required and contains the device connection string. You copied and saved this connection string when you registered the device. The parameter *t* can be specified if you want to change the transport protocol. The default protocol is mqtt. The file *program.cs* contains the *main* function. The *FileUploadSample.cs* file contains the primary sample logic. *TestPayload.txt* is the file to be uploaded to your blob container.
## Run the application
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/quick-create-powershell.md
- Title: Quickstart - Set & retrieve a secret from Key Vault using PowerShell"
+ Title: Quickstart - Set & retrieve a secret from Key Vault using PowerShell
description: In this quickstart, learn how to create, retrieve, and delete secrets from an Azure Key Vault using Azure PowerShell.
lighthouse Cross Tenant Management Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/cross-tenant-management-experience.md
Title: Cross-tenant management experiences description: Azure Lighthouse enables and enhances cross-tenant experiences in many Azure services. Previously updated : 05/11/2021 Last updated : 07/20/2021
Most tasks and services can be performed on delegated resources across managed t
- View alerts for delegated subscriptions, with the ability to view and refresh alerts across all subscriptions - View activity log details for delegated subscriptions - [Log analytics](../../azure-monitor/logs/service-providers.md): Query data from remote workspaces in multiple tenants (note that automation accounts used to access data from workspaces in customer tenants must be created in the same tenant)-- [Create, view, and manage activity log alerts](../../azure-monitor/alerts/alerts-activity-log.md) in customer tenants
+- Create, view, and manage [metric alerts](../../azure-monitor/alerts/alerts-metric.md), [log alerts](../../azure-monitor/alerts/alerts-log.md), and [activity log alerts](../../azure-monitor/alerts/alerts-activity-log.md) in customer tenants
- Create alerts in customer tenants that trigger automation, such as Azure Automation runbooks or Azure Functions, in the managing tenant through webhooks - Create [diagnostic settings](../..//azure-monitor/essentials/diagnostic-settings.md) in customer tenants to send resource logs to workspaces in the managing tenant - For SAP workloads, [monitor SAP Solutions metrics with an aggregated view across customer tenants](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-lighthouse-and-azure-monitor-for-sap-solutions-to/ba-p/1537293)
logic-apps Concepts Schedule Automated Recurring Tasks Workflows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md
If you want to run your logic app only at one time in the future, you can use th
Or, you can start your logic app with the **When a HTTP request is received - Request** trigger, and pass the start time as a parameter for the trigger. For the first action, use the **Delay until - Schedule** action, and provide the time for when the next action starts running.
+<a name="run-once-last-day-of-the-month"></a>
+
+## Run once on the last day of the month
+
+To run the Recurrence trigger only once on the last day of the month, you have to edit the trigger in the workflow's underlying JSON definition using code view, not the designer, as shown in the following example:
+
+```json
+"triggers": {
+ "Recurrence": {
+ "recurrence": {
+ "frequency": "Month",
+ "interval": 1,
+ "schedule": {
+ "monthDays": [-1]
+ }
+ },
+ "type": "Recurrence"
+ }
+}
+```
+ <a name="example-recurrences"></a> ## Example recurrences
logic-apps Logic Apps Create Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-create-azure-resource-manager-templates.md
Title: Create logic app templates for deployment
description: Learn how to create Azure Resource Manager templates for automating deployment in Azure Logic Apps ms.suite: integration-- Previously updated : 07/26/2019++ Last updated : 07/20/2021 # Create Azure Resource Manager templates to automate deployment for Azure Logic Apps
For example, suppose you have a logic app that receives a message from an Azure
These samples show how to create and deploy logic apps by using Azure Resource Manager templates, Azure Pipelines in Azure DevOps, and Azure PowerShell:
-* [Sample: Connect to Azure Service Bus queues from Azure Logic Apps](/samples/azure-samples/azure-logic-apps-deployment-samples/connect-to-azure-service-bus-queues-from-azure-logic-apps-and-deploy-with-azure-devops-pipelines/)
-* [Sample: Connect to Azure Storage accounts from Azure Logic Apps](/samples/azure-samples/azure-logic-apps-deployment-samples/connect-to-azure-storage-accounts-from-azure-logic-apps-and-deploy-with-azure-devops-pipelines/)
-* [Sample: Set up a function app action for Azure Logic Apps](/samples/azure-samples/azure-logic-apps-deployment-samples/set-up-an-azure-function-app-action-for-azure-logic-apps-and-deploy-with-azure-devops-pipelines/)
-* [Sample: Connect to an integration account from Azure Logic Apps](/samples/azure-samples/azure-logic-apps-deployment-samples/connect-to-an-integration-account-from-azure-logic-apps-and-deploy-by-using-azure-devops-pipelines/)
+* [Sample: Orchestrate Azure Pipelines by using Azure Logic Apps](https://github.com/Azure-Samples/azure-logic-apps-pipeline-orchestration)
+* [Sample: Connect to Azure Storage accounts from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/storage-account-connections)
+* [Sample: Connect to Azure Service Bus queues from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/service-bus-connections)
+* [Sample: Set up an Azure Functions action for Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/function-app-actions)
+* [Sample: Connect to an integration account from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/integration-account-connections)
### Install PowerShell modules
logic-apps Logic Apps Deploy Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-deploy-azure-resource-manager-templates.md
Title: Deploy logic app templates
description: Learn how to deploy Azure Resource Manager templates created for Azure Logic Apps ms.suite: integration-- Previously updated : 08/25/2020 ++ Last updated : 07/20/2021
For more information about continuous integration and continuous deployment (CI/
* [Integrate Resource Manager templates with Azure Pipelines](../azure-resource-manager/templates/add-template-to-azure-pipelines.md) * [Tutorial: Continuous integration of Azure Resource Manager templates with Azure Pipelines](../azure-resource-manager/templates/deployment-tutorial-pipeline.md)
-* [Sample: Connect to Azure Service Bus queues from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](/samples/azure-samples/azure-logic-apps-deployment-samples/connect-to-azure-service-bus-queues-from-azure-logic-apps-and-deploy-with-azure-devops-pipelines/)
-* [Sample: Connect to Azure Storage accounts from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](/samples/azure-samples/azure-logic-apps-deployment-samples/connect-to-azure-storage-accounts-from-azure-logic-apps-and-deploy-with-azure-devops-pipelines/)
-* [Sample: Set up a function app action for Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](/samples/azure-samples/azure-logic-apps-deployment-samples/set-up-an-azure-function-app-action-for-azure-logic-apps-and-deploy-with-azure-devops-pipelines/)
-* [Sample: Connect to an integration account from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](/samples/azure-samples/azure-logic-apps-deployment-samples/connect-to-an-integration-account-from-azure-logic-apps-and-deploy-by-using-azure-devops-pipelines/)
-* [Sample: Orchestrate Azure Pipelines by using Azure Logic Apps](/samples/azure-samples/azure-logic-apps-pipeline-orchestration/azure-devops-orchestration-with-logic-apps/)
+* [Sample: Orchestrate Azure Pipelines by using Azure Logic Apps](https://github.com/Azure-Samples/azure-logic-apps-pipeline-orchestration)
+* [Sample: Connect to Azure Storage accounts from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/storage-account-connections)
+* [Sample: Connect to Azure Service Bus queues from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/service-bus-connections)
+* [Sample: Set up an Azure Functions action for Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/function-app-actions)
+* [Sample: Connect to an integration account from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/integration-account-connections)
Here are the general high-level steps for using Azure Pipelines:
logic-apps Logic Apps Enterprise Integration As2 Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-as2-message-settings.md
Previously updated : 04/22/2019 Last updated : 07/20/2021 # Reference for AS2 message settings in Azure Logic Apps with Enterprise Integration Pack
properties based on your agreement with the partner that exchanges messages with
| Property | Required | Description | |-|-|-| | **Enable message signing** | No | Specifies whether all outgoing messages must be digitally signed. If you require signing, select these values: <p>- From the **Signing Algorithm** list, select the algorithm to use for signing messages. <br>- From the **Certificate** list, select an existing host partner private certificate for signing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
-| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <p>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner private certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
+| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <p>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner public certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
| **Enable message compression** | No | Specifies whether all outgoing messages must be compressed. | | **Unfold HTTP headers** | No | Puts the HTTP `content-type` header onto a single line. | | **Transmit file name in MIME header** | No | Specifies whether to include the file name in the MIME header. |
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-maps.md
ms.suite: integration -- Previously updated : 07/13/2021++ Last updated : 07/20/2021 # Transform XML with maps in Azure Logic Apps with Enterprise Integration Pack To transfer XML data between formats for enterprise integration scenarios in Azure Logic Apps, your logic app can use maps, or more specifically,
-Extensible Style sheet Language Transformations (XSLT) maps. A map is an XML
+Extensible Stylesheet Language Transformation (XSLT) maps. A map is an XML
document that describes how to convert data from an XML document into another format. For example, suppose you regularly receive B2B orders or invoices from
uses the MMDDYYYY date format. You can define and use a map that transforms
the YYYYMMDD date format to the MMDDYYYY format before storing the order or invoice details in your customer activity database.
-For limits related to integration accounts and artifacts such as maps,
-see [Limits and configuration information for Azure Logic Apps](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits).
+> [!NOTE]
+> The Azure Logic Apps service allocates finite memory for processing XML transformations. If you
+> create logic apps based on the **Logic App (Consumption)** resource type, and your map or payload
+> transformations have high memory consumption, such transformations might fail, resulting in out
+> of memory errors. To avoid this scenario, consider these options:
+>
+> * Edit your maps or payloads to reduce memory consumption.
+>
+> * Create your logic apps using the **Logic App (Standard)** resource type instead.
+>
+> These workflows run in single-tenant Azure Logic Apps, which offers dedicated and flexible options
+> for compute and memory resources. For more information, review the following documentation:
+>
+> * [What is Azure Logic Apps - Resource type and host environments](logic-apps-overview.md#resource-type-and-host-environment-differences)
+> * [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
+> * [Usage metering, billing, and pricing models for Azure Logic Apps](logic-apps-pricing.md)
+
+For limits related to integration accounts and artifacts such as maps, review [Limits and configuration information for Azure Logic Apps](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits).
## Prerequisites
see [Limits and configuration information for Azure Logic Apps](../logic-apps/lo
where you store your maps and other artifacts for enterprise integration and business-to-business (B2B) solutions.
-* If your map references an external assembly, you need a 64-bit assembly. The transform service runs a 64-bit process, so 32-bit assemblies aren't supported. If you have the source code for a 32-bit assembly, recompile the code into a 64-bit assembly. If you don't have the source code, but you obtained the binary from a third-party provider, get the 64-bit version from that provider. For example, some vendors provide assemblies in packages that have both 32- and 64-bit versions. If you have the option, use the 64-bit version instead.
+* If your map references an external assembly, you need a 64-bit assembly. The transform service runs a 64-bit process, so 32-bit assemblies aren't supported. If you have the source code for a 32-bit assembly, recompile the code into a 64-bit assembly. If you don't have the source code, but you obtained the binary from a third-party provider, get the 64-bit version from that provider. For example, some vendors provide assemblies in packages that have both 32-bit and 64-bit versions. If you have the option, use the 64-bit version instead.
* If your map references an external assembly, you have to upload *both the assembly and the map* to your integration account.
map that references the assembly.
||-| | [Azure storage account](../storage/common/storage-account-overview.md) | In this account, create an Azure blob container for your assembly. Learn [how to create a storage account](../storage/common/storage-account-create.md). | | Blob container | In this container, you can upload your assembly. You also need this container's location when you add the assembly to your integration account. Learn how to [create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md). |
- | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, either [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md). <p>Or, in the Azure portal, find and select your storage account. From your storage account menu, select **Storage Explorer**. |
+ | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, first [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md). <p>Or, in the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. |
||| * For maps, you can currently add larger maps by using the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate).
shows the number of uploaded assemblies.
## Create maps
-To create an XSLT document you can use as a map,
+To create an Extensible Stylesheet Language Transformation (XSLT) document you can use as a map,
you can use Visual Studio 2015 for creating a BizTalk Integration project by using the [Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md).
logic-apps Logic Apps Examples And Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-examples-and-scenarios.md
Title: Examples & common scenarios
description: Find examples, common scenarios, tutorials, and walkthroughs for Azure Logic Apps ms.suite: integration-+ Previously updated : 02/28/2020 Last updated : 07/20/2021 # Common scenarios, examples, tutorials, and walkthroughs for Azure Logic Apps
You can fully develop and deploy logic apps with Visual Studio, Azure DevOps, or
* [Overview: Automate logic app deployment](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md) * [Create Azure Resource Manager templates to automate deployment for Azure Logic Apps](../logic-apps/logic-apps-create-azure-resource-manager-templates.md) * [Deploy Azure Resource Manager templates for Azure Logic Apps](../logic-apps/logic-apps-deploy-azure-resource-manager-templates.md)
-* [Sample: Connect to Azure Service Bus queues from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](/samples/azure-samples/azure-logic-apps-deployment-samples/connect-to-azure-service-bus-queues-from-azure-logic-apps-and-deploy-with-azure-devops-pipelines/)
-* [Sample: Connect to Azure Storage accounts from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](/samples/azure-samples/azure-logic-apps-deployment-samples/connect-to-azure-storage-accounts-from-azure-logic-apps-and-deploy-with-azure-devops-pipelines/)
-* [Sample: Set up a function app action for Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](/samples/azure-samples/azure-logic-apps-deployment-samples/set-up-an-azure-function-app-action-for-azure-logic-apps-and-deploy-with-azure-devops-pipelines/)
-* [Sample: Connect to an integration account from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](/samples/azure-samples/azure-logic-apps-deployment-samples/connect-to-an-integration-account-from-azure-logic-apps-and-deploy-by-using-azure-devops-pipelines/)
-* [Sample: Orchestrate Azure Pipelines by using Azure Logic Apps](/samples/azure-samples/azure-logic-apps-pipeline-orchestration/azure-devops-orchestration-with-logic-apps/)
+* [Sample: Set up an API Management action for Azure Logic Apps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/api-management-actions)
+* [Sample: Orchestrate Azure Pipelines by using Azure Logic Apps](https://github.com/Azure-Samples/azure-logic-apps-pipeline-orchestration)
+* [Sample: Connect to Azure Storage accounts from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/storage-account-connections)
+* [Sample: Connect to Azure Service Bus queues from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/service-bus-connections)
+* [Sample: Set up an Azure Functions action for Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/function-app-actions)
+* [Sample: Connect to an integration account from Azure Logic Apps and deploy with Azure Pipelines in Azure DevOps](https://github.com/Azure-Samples/azure-logic-apps-deployment-samples/tree/master/integration-account-connections)
### Manage
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-securing-a-logic-app.md
description: Secure access to inputs, outputs, request-based triggers, run histo
ms.suite: integration - Previously updated : 05/01/2021+ Last updated : 07/20/2021 # Secure access and data in Azure Logic Apps
In your ARM template, specify the IP ranges by using the `accessControl` section
### Secure data in run history by using obfuscation
-Many triggers and actions have settings to secure inputs, outputs, or both from a logic app's run history. Before using these settings to help you secure this data, review these considerations:
+Many triggers and actions have settings to secure inputs, outputs, or both from a logic app's run history. All *[managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) and [custom connectors](/connectors/custom-connectors/)* support these options. However, the following [built-in operations](../connectors/built-in.md) ***don't support these options***:
+
+| Secure Inputs - Unsupported | Secure Outputs - Unsupported |
+|--||
+| Append to array variable <br>Append to string variable <br>Decrement variable <br>For each <br>If <br>Increment variable <br>Initialize variable <br>Recurrence <br>Scope <br>Set variable <br>Switch <br>Terminate <br>Until | Append to array variable <br>Append to string variable <br>Compose <br>Decrement variable <br>For each <br>If <br>Increment variable <br>Initialize variable <br>Parse JSON <br>Recurrence <br>Response <br>Scope <br>Set variable <br>Switch <br>Terminate <br>Until <br>Wait |
+|||
+
+#### Considerations for securing inputs and outputs
+Before using these settings to help you secure this data, review these considerations:
+
* When you obscure the inputs or outputs on a trigger or action, Logic Apps doesn't send the secured data to Azure Log Analytics. Also, you can't add [tracked properties](../logic-apps/monitor-logic-apps-log-analytics.md#extend-data) to that trigger or action for monitoring. * The [Logic Apps API for handling workflow history](/rest/api/logic/) doesn't return secured outputs.
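As a hedged illustration of how these settings appear when an operation does support them, in code view they surface as a `runtimeConfiguration.secureData` object on the trigger or action; the HTTP action shown here is only a placeholder:

```json
"HTTP": {
   "type": "Http",
   "inputs": {
      "method": "GET",
      "uri": "https://example.com"
   },
   "runtimeConfiguration": {
      "secureData": {
         "properties": [ "inputs", "outputs" ]
      }
   }
}
```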
machine-learning Algorithm Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-cheat-sheet.md
Previously updated : 03/05/2020 Last updated : 07/20/2021 adobe-target: true # Machine Learning Algorithm Cheat Sheet for Azure Machine Learning designer
The **Azure Machine Learning Algorithm Cheat Sheet** helps you choose the right
Azure Machine Learning has a large library of algorithms from the ***classification***, ***recommender systems***, ***clustering***, ***anomaly detection***, ***regression***, and ***text analytics*** families. Each is designed to address a different type of machine learning problem.
-For additional guidance, see [How to select algorithms](how-to-select-algorithms.md)
+For more information, see [How to select algorithms](how-to-select-algorithms.md).
## Download: Machine Learning Algorithm Cheat Sheet **Download the cheat sheet here: [Machine Learning Algorithm Cheat Sheet (11x17 in.)](https://download.microsoft.com/download/3/5/b/35bb997f-a8c7-485d-8c56-19444dafd757/azure-machine-learning-algorithm-cheat-sheet-nov2019.pdf?WT.mc_id=docs-article-lazzeri)**
-![Machine Learning Algorithm Cheat Sheet: Learn how to choose a Machine Learning algorithm.](./media/algorithm-cheat-sheet/machine-learning-algorithm-cheat-sheet.svg)
+![Machine Learning Algorithm Cheat Sheet: Learn how to choose a Machine Learning algorithm.](./media/algorithm-cheat-sheet/machine-learning-algorithm-cheat-sheet.png)
Download and print the Machine Learning Algorithm Cheat Sheet in tabloid size to keep it handy and get help choosing an algorithm.
Download and print the Machine Learning Algorithm Cheat Sheet in tabloid size to
The suggestions offered in this algorithm cheat sheet are approximate rules-of-thumb. Some can be bent, and some can be flagrantly violated. This cheat sheet is intended to suggest a starting point. Don't be afraid to run a head-to-head competition between several algorithms on your data. There is simply no substitute for understanding the principles of each algorithm and the system that generated your data.
-Every machine learning algorithm has its own style or inductive bias. For a specific problem, several algorithms may be appropriate, and one algorithm may be a better fit than others. But it's not always possible to know beforehand which is the best fit. In cases like these, several algorithms are listed together in the cheat sheet. An appropriate strategy would be to try one algorithm, and if the results are not yet satisfactory, try the others.
+Every machine learning algorithm has its own style or inductive bias. For a specific problem, several algorithms may be appropriate, and one algorithm may be a better fit than others. But it's not always possible to know beforehand which is the best fit. In cases like these, several algorithms are listed together in the cheat sheet. An appropriate strategy would be to try one algorithm, and if the results are not yet satisfactory, try the others.
To learn more about the algorithms in Azure Machine Learning designer, go to the [Algorithm and module reference](algorithm-module-reference/module-reference.md).
In reinforcement learning, the algorithm gets to choose an action in response to
## Next steps
-* See additional guidance on [How to select algorithms](how-to-select-algorithms.md)
+* For more information, see [How to select algorithms](how-to-select-algorithms.md)
* [Learn about studio in Azure Machine Learning and the Azure portal](overview-what-is-azure-ml.md).
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 07/15/2021 Last updated : 07/20/2021
To get a list of IP addresses of the Batch service and Azure Machine Learning se
> [!IMPORTANT] > The IP addresses may change over time.
-When creating the UDR, set the __Next hop type__ to __Internet__. The following image shows an example UDR in the Azure portal:
+When creating the UDR, set the __Next hop type__ to __Internet__. The following image shows an example IP address based UDR in the Azure portal:
:::image type="content" source="./media/how-to-enable-virtual-network/user-defined-route.png" alt-text="Image of a user-defined route configuration":::
Create user-defined routes for the following service tags:
* `AzureMachineLearning` * `BatchNodeManagement.<region>`, where `<region>` is your Azure region.
+The following commands demonstrate adding routes for these service tags:
+
+```azurecli
+az network route-table route create -g MyResourceGroup --route-table-name MyRouteTable -n AzureMLRoute --address-prefix AzureMachineLearning --next-hop-type Internet
+az network route-table route create -g MyResourceGroup --route-table-name MyRouteTable -n BatchRoute --address-prefix BatchNodeManagement.westus2 --next-hop-type Internet
+```
+ For information on configuring UDR, see [Route network traffic with a routing table](../virtual-network/tutorial-create-route-table-portal.md).
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-compute-studio.md
Previously updated : 06/18/2021 Last updated : 07/16/2021
To see all compute targets for your workspace, use the following steps:
:::image type="content" source="media/how-to-create-attach-studio/view-compute-targets.png" alt-text="View list of compute targets":::
-## <a id="portal-create"></a>Create compute target
+## <a id="portal-create"></a>Start creation process
Follow the previous steps to view the list of compute targets. Then use these steps to create a compute target:
Follow the previous steps to view the list of compute targets. Then use these st
:::image type="content" source="media/how-to-create-attach-studio/view-list.png" alt-text="View compute status from a list":::
-### <a name="compute-instance"></a> Compute instance
-
-Use the [steps above](#portal-create) to create the compute instance. Then fill out the form as follows:
-
+## <a name="compute-instance"></a> Create compute instance
+Use the [steps above](#portal-create) to start creating the compute instance. Then fill out the form as follows:
|Field |Description | ||| |Compute name | <li>Name is required and must be between 3 to 24 characters long.</li><li>Valid characters are upper and lower case letters, digits, and the **-** character.</li><li>Name must start with a letter</li><li>Name needs to be unique across all existing computes within an Azure region. You will see an alert if the name you choose is not unique</li><li>If **-** character is used, then it needs to be followed by at least one letter later in the name</li> | |Virtual machine type | Choose CPU or GPU. This type cannot be changed after creation | |Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |
-|Enable/disable SSH access | SSH access is disabled by default. SSH access cannot be. changed after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md) |
-|Advanced settings | Optional. Configure a virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). For more information, see these [network requirements](./how-to-secure-training-vnet.md) for vnet. Also use advanced settings to specify a [setup script](how-to-create-manage-compute-instance.md#setup-script). |
-### <a name="amlcompute"></a> Compute clusters
+Select **Create** unless you want to configure advanced settings for the compute instance.
-Create a single or multi node compute cluster for your training, batch inferencing or reinforcement learning workloads. Use the [steps above](#portal-create) to create the compute cluster. Then fill out the form as follows:
+### Advanced settings
+
+Select **Next: Advanced Settings** if you want to:
+
+* Enable SSH access. Follow the [detailed instructions](#enable-ssh) below.
+* Enable virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). For more information, see these [network requirements](./how-to-secure-training-vnet.md) for vnet.
+* Assign the compute instance to another user. For more about assigning to other users, see [Create on behalf of](how-to-create-manage-compute-instance.md#on-behalf).
+* Provision with a setup script - for more details about how to create and use a setup script, see [Customize the compute instance with a script](how-to-create-manage-compute-instance.md#setup-script).
+
+### <a name="enable-ssh"></a> Enable SSH access
+
+SSH access is disabled by default. SSH access cannot be changed after creation. Make sure to enable access if you plan to debug interactively with [VS Code Remote](how-to-set-up-vs-code-remote.md).
+After you have selected **Next: Advanced Settings**:
+
+1. Turn on **Enable SSH access**.
+1. In the **SSH public key source**, select one of the options from the dropdown:
+ * If you **Generate new key pair**:
+ 1. Enter a name for the key in **Key pair name**.
+ 1. Select **Create**.
+ 1. Select **Download private key and create compute**. The key is usually downloaded into the **Downloads** folder.
+ * If you select **Use existing public key stored in Azure**, search for and select the key in **Stored key**.
+ * If you select **Use existing public key**, provide an RSA public key in the single-line format (starting with "ssh-rsa") or the multi-line PEM format. You can generate SSH keys using ssh-keygen on Linux and OS X, or PuTTYGen on Windows.
+
+Once the compute instance is created and running, see [Connect with SSH access](#ssh-access).
+
+## <a name="amlcompute"></a> Create compute clusters
+
+Create a single-node or multi-node compute cluster for your training, batch inferencing, or reinforcement learning workloads. Use the [steps above](#portal-create) to create the compute cluster. Then fill out the form as follows:
|Field |Description | |||
-|Compute name | <li>Name is required and must be between 3 to 24 characters long.</li><li>Valid characters are upper and lower case letters, digits, and the **-** character.</li><li>Name must start with a letter</li><li>Name needs to be unique across all existing computes within an Azure region. You will see an alert if the name you choose is not unique</li><li>If **-** character is used, then it needs to be followed by at least one letter later in the name</li> |
| Location | The Azure region where the compute cluster will be created. By default, this is the same location as the workspace. Setting the location to a different region than the workspace is in __preview__, and is only available for __compute clusters__, not compute instances.</br>When using a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it. | |Virtual machine type | Choose CPU or GPU. This type cannot be changed after creation | |Virtual machine priority | Choose **Dedicated** or **Low priority**. Low priority virtual machines are cheaper but don't guarantee the compute nodes. Your job may be preempted. |Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |+
+Select **Next** to proceed to **Advanced Settings** and fill out the form as follows:
+
+|Field |Description |
+|||
+|Compute name | <li>Name is required and must be between 3 and 24 characters long.</li><li>Valid characters are uppercase and lowercase letters, digits, and the **-** character.</li><li>Name must start with a letter.</li><li>Name must be unique across all existing computes within an Azure region. You'll see an alert if the name you choose isn't unique.</li><li>If the **-** character is used, it must be followed by at least one letter later in the name.</li> |
|Minimum number of nodes | Minimum number of nodes that you want to provision. If you want a dedicated number of nodes, set that count here. Save money by setting the minimum to 0, so you won't pay for any nodes when the cluster is idle. |
|Maximum number of nodes | Maximum number of nodes that you want to provision. The compute will autoscale to a maximum of this node count when a job is submitted. |
+| Idle seconds before scale down | Idle time before scaling the cluster down to the minimum node count. |
+| Enable SSH access | Use the same instructions as [Enable SSH access](#enable-ssh) for a compute instance (above). |
|Advanced settings | Optional. Configure a virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute cluster inside an Azure Virtual Network (vnet). For more information, see these [network requirements](./how-to-secure-training-vnet.md) for vnet. You can also attach [managed identities](#managed-identity) to grant access to resources. |
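The settings in these tables map directly to the SDK if you later automate cluster creation. The following is a minimal sketch using the v1 Python SDK; the cluster name, VM size, and node counts are illustrative assumptions, not values from this article.

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# These settings mirror the form fields above; the name and VM size are illustrative.
config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    vm_priority="dedicated",            # or "lowpriority"
    min_nodes=0,                        # scale to zero when idle to save cost
    max_nodes=4,                        # upper bound for autoscale
    idle_seconds_before_scaledown=1800,
)

cluster = ComputeTarget.create(ws, "cpu-cluster", config)
cluster.wait_for_completion(show_output=True)
```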
-#### <a name="managed-identity"></a> Set up managed identity
+### <a name="managed-identity"></a> Set up managed identity
[!INCLUDE [aml-clone-in-azure-notebook](../../includes/aml-managed-identity-intro.md)]

During cluster creation or when editing compute cluster details, in the **Advanced settings**, toggle **Assign a managed identity** and specify a system-assigned identity or user-assigned identity.
-#### Managed identity usage
+### Managed identity usage
[!INCLUDE [aml-clone-in-azure-notebook](../../includes/aml-managed-identity-default.md)]
-### Inference clusters
+## <a name="inference-clusters"></a> Create inference clusters
> [!IMPORTANT]
> Using Azure Kubernetes Service with Azure Machine Learning has multiple configuration options. Some scenarios, such as networking, require additional setup and configuration. For more information on using AKS with Azure ML, see [Create and attach an Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md).
Create or attach an Azure Kubernetes Service (AKS) cluster for large scale infer
| Network configuration | Select **Advanced** to create the compute within an existing virtual network. For more information about AKS in a virtual network, see [Network isolation during training and inference with private endpoints and virtual networks](./how-to-secure-inferencing-vnet.md). |
| Enable SSL configuration | Use this setting to configure an SSL certificate on the compute. |
-### Attached compute
+## <a name="attached-compute"></a> Attach other compute
To use compute targets created outside the Azure Machine Learning workspace, you must attach them. Attaching a compute target makes it available to your workspace. Use **Attached compute** to attach a compute target for **training**. Use **Inference clusters** to attach an AKS cluster for **inferencing**.
To detach your compute, use the following steps:

1. In Azure Machine Learning studio, select __Compute__, __Attached compute__, and the compute you wish to remove.
1. Use the __Detach__ link to detach your compute.
+## <a name="ssh-access"></a> Connect with SSH access
+
+If you created your compute instance or compute cluster with SSH access enabled, use these steps for access.
+
+1. Find the compute in your workspace resources:
+ 1. On the left, select **Compute**.
+ 1. Use the tabs at the top to select **Compute instance** or **Compute cluster** to find your machine.
+1. Select the compute name in the list of resources.
+1. Find the connection string:
+
+ * For a **compute instance**, select **Connect** at the top of the **Details** section.
+
+ :::image type="content" source="media/how-to-create-attach-studio/details.png" alt-text="Screenshot: Connect tool at the top of the Details page.":::
+
+ * For a **compute cluster**, select **Nodes** at the top, then select the **Connection string** in the table for your node.
+ :::image type="content" source="media/how-to-create-attach-studio/compute-nodes.png" alt-text="Screenshot: Connection string for a node in a compute cluster.":::
+
+1. Copy the connection string.
+1. For Windows, open PowerShell or a command prompt:
+ 1. Go into the directory or folder where your key is stored
+ 1. Add the -i flag to the connection string to locate the private key and point to where it is stored:
+
+ ```ssh -i <keyname.pem> azureuser@... (rest of connection string)```
+
+1. For Linux users, follow the steps from [Create and use an SSH key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md)
## Next steps

After a target is created and attached to your workspace, you use it in your [run configuration](how-to-set-up-training-targets.md) with a `ComputeTarget` object:
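A minimal sketch of that usage with the v1 Python SDK follows; the compute name and training script are illustrative.

```python
from azureml.core import Experiment, ScriptRunConfig, Workspace
from azureml.core.compute import ComputeTarget

ws = Workspace.from_config()

# Retrieve the compute target you created or attached (the name is illustrative).
target = ComputeTarget(workspace=ws, name="cpu-cluster")

# Reference the compute target in the run configuration.
src = ScriptRunConfig(source_directory=".", script="train.py", compute_target=target)

run = Experiment(workspace=ws, name="example-experiment").submit(src)
run.wait_for_completion(show_output=True)
```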
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
Previously updated : 10/02/2020 Last updated : 07/16/2021 # Create and manage an Azure Machine Learning compute instance
Some examples of what you can do in a setup script:
### Create the setup script
-The setup script is a shell script which runs as *rootuser*. Create or upload the script into your **Notebooks** files:
+The setup script is a shell script, which runs as *rootuser*. Create or upload the script into your **Notebooks** files:
1. Sign into the [studio](https://ml.azure.com) and select your workspace.
2. On the left, select **Notebooks**.
When the script runs, the current working directory of the script is the directo
Script arguments can be referred to in the script as $1, $2, etc.
-If your script was doing something specific to azureuser such as installing conda environment or jupyter kernel you will have to put it within *sudo -u azureuser* block like this
+If your script does something specific to azureuser, such as installing a conda environment or a Jupyter kernel, put it within a *sudo -u azureuser* block like this:
```shell
#!/bin/bash
pip install "$PACKAGE"
conda deactivate
EOF
```
-Please note *sudo -u azureuser* does change the current working directory to */home/azureuser*. You also can't access the script arguments in this block.
+The command *sudo -u azureuser* changes the current working directory to */home/azureuser*. You also can't access the script arguments in this block.
You can also use the following environment variables in your script:
Once you store the script, specify it during creation of your compute instance:
:::image type="content" source="media/how-to-create-manage-compute-instance/setup-script.png" alt-text="Provision a compute instance with a setup script in the studio.":::
-Please note that if workspace storage is attached to a virtual network you might not be able to access the setup script file unless you are accessing the Studio from within virtual network.
+If workspace storage is attached to a virtual network, you might not be able to access the setup script file unless you access the studio from within the virtual network.
### Use script in a Resource Manager template
You can perform the following actions:
For each compute instance in your workspace that you created (or that was created for you), you can:

* Access Jupyter, JupyterLab, RStudio on the compute instance
-* SSH into compute instance. SSH access is disabled by default but can be enabled at compute instance creation time. SSH access is through public/private key mechanism. The tab will give you details for SSH connection such as IP address, username, and port number. In case of virtual network deployment, disabling SSH prevents SSH access from public internet, you can still SSH from within virtual network using private IP address of compute instance node and port 22.
+* SSH into the compute instance. SSH access is disabled by default but can be enabled at compute instance creation time. SSH access uses a public/private key mechanism. The tab gives you details for the SSH connection, such as IP address, username, and port number. In a virtual network deployment, disabling SSH prevents SSH access from the public internet. You can still SSH from within the virtual network by using the private IP address of the compute instance node and port 22.
* Get details about a specific compute instance such as IP address, and region.
These actions can be controlled by Azure RBAC:
* *Microsoft.MachineLearningServices/workspaces/computes/stop/action*
* *Microsoft.MachineLearningServices/workspaces/computes/restart/action*
-To create a compute instance you need to have permissions for the following actions:
+To create a compute instance, you'll need permissions for the following actions:
* *Microsoft.MachineLearningServices/workspaces/computes/write*
* *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action*
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
Title: "Deploy an ML model with a managed online endpoint"
+ Title: Deploy a machine learning model by using a managed online endpoint (preview)
-description: Learn to deploy your machine learning model as a web service automatically managed by Azure.
+description: Learn to deploy your machine learning model as a web service that's automatically managed by Azure.
-# Deploy and score a machine learning model with a managed online endpoint (preview)
+# Deploy and score a machine learning model by using a managed online endpoint (preview)
-Managed online endpoints (preview) provide you the ability to deploy your model without the need to create and manage the underlying infrastructure. In this article, you'll start by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure. You'll also learn how to view the logs and monitor the Service Level Agreement (SLA). You start with a model and end up with a scalable HTTPS/REST endpoint that can be used for online/real-time scoring. For more information, see [What are Azure Machine Learning endpoints (preview)?](concept-endpoints.md).
+Learn how to use a managed online endpoint (preview) to deploy your model, so you don't have to create and manage the underlying infrastructure. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.
+
+You'll also learn how to view the logs and monitor the service-level agreement (SLA). You start with a model and end up with a scalable HTTPS/REST endpoint that you can use for online and real-time scoring.
+
+For more information, see [What are Azure Machine Learning endpoints (preview)?](concept-endpoints.md).
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]

## Prerequisites
-* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* You must install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the 2.0 CLI (preview)](how-to-configure-cli.md).
+* Install and configure the Azure CLI and the `ml` extension to the Azure CLI. For more information, see [Install, set up, and use the 2.0 CLI (preview)](how-to-configure-cli.md).
-* You must have an Azure resource group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
+* You must have an Azure resource group, and you (or the service principal you use) must have Contributor access to it. A resource group is created in [Install, set up, and use the 2.0 CLI (preview)](how-to-configure-cli.md).
-* You must have an Azure Machine Learning workspace. You'll have such a workspace if you configured your ML extension per the above article.
+* You must have an Azure Machine Learning workspace. A workspace is created in [Install, set up, and use the 2.0 CLI (preview)](how-to-configure-cli.md).
-* If you've not already set the defaults for Azure CLI, you should save your default settings. To avoid having to repeatedly pass in the values, run:
+* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
```azurecli
- az account set --subscription <subscription id>
- az configure --defaults workspace=<azureml workspace name> group=<resource group>
+ az account set --subscription <subscription ID>
+ az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
+ ```
-* [Optional] To deploy locally, you must have [Docker engine](https://docs.docker.com/engine/install/) running locally. This step is **highly recommended**. It will help you debug issues.
+* (Optional) To deploy locally, you must [install Docker Engine](https://docs.docker.com/engine/install/) on your local computer. We *highly recommend* this option, so it's easier to debug issues.
## Prepare your system
-To follow along with the article, clone the samples repository, and navigate to the right directory by running the following commands:
+To follow along with this article, first clone the samples repository (azureml-examples). Then, run the following code to go to the samples directory:
```azurecli
git clone https://github.com/Azure/azureml-examples
cd azureml-examples
cd cli
```
-Set your endpoint name (rename the below `YOUR_ENDPOINT_NAME` to a unique name). The below command is for Unix environments:
+To set your endpoint name, choose one of the following commands, depending on your operating system (replace `YOUR_ENDPOINT_NAME` with a unique name).
+
+For Unix, run this command:
```azurecli
export ENDPOINT_NAME=YOUR_ENDPOINT_NAME
```
-If you use a Windows operating system, use this command instead `set ENDPOINT_NAME=YOUR_ENDPOINT_NAME`.
+For Windows, run this command:
+
+```azurecli
+set ENDPOINT_NAME=YOUR_ENDPOINT_NAME
+```
> [!NOTE]
-> Endpoint names need to be unique at the Azure region level. For example, there can be only one endpoint with the name `my-endpoint` in `westus2`.
+> Endpoint names must be unique within an Azure region. For example, in the Azure westus2 region, there can be only one endpoint with the name `my-endpoint`.
## Define the endpoint configuration
-The inputs needed to deploy a model on an online endpoint are:
+Specific inputs are required to deploy a model on an online endpoint:
-- Model files (or the name and version of a model already registered in your workspace). In the example, we have a `scikit-learn` model that does regression.-- Code that is needed to score the model. In this case, we have a `score.py` file.-- An environment in which your model is run (as you'll see, the environment may be a Docker image with conda dependencies or may be a Dockerfile).
+- Model files (or the name and version of a model that's already registered in your workspace). In the example, we have a scikit-learn model that does regression.
+- The code that's required to score the model. In this case, we have a *score.py* file.
+- An environment in which your model runs. As you'll see, the environment might be a Docker image with Conda dependencies, or it might be a Dockerfile.
- Settings to specify the instance type and scaling capacity.
-The following snippet shows the `endpoints/online/managed/simple-flow/1-create-endpoint-with-blue.yml` file that captures all the above information:
+The following snippet shows the *endpoints/online/managed/simple-flow/1-create-endpoint-with-blue.yml* file, which captures all the required inputs:
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/managed/simple-flow/1-create-endpoint-with-blue.yml":::
-> [!Note]
-> The YAML is more completely described at [Managed online endpoints (preview) YAML reference](reference-online-endpoint-yaml.md)
+> [!NOTE]
+> For a full description of the YAML, see [Managed online endpoints (preview) YAML reference](reference-online-endpoint-yaml.md).
-The reference for the endpoint YAML format is below. To understand how to specify these attributes, refer to the YAML example from this article or to the fully specified YAML sample mentioned in the preceding note. For more on limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).
+The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the YAML example in [Prepare your system](#prepare-your-system) or the [online endpoint YAML reference](reference-online-endpoint-yaml.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints-preview).
| Key | Description |
| | |
-| $schema | [Optional] The YAML schema. You can view the schema in the above example in a browser to see all available options in the YAML file.|
-| name | Name of the endpoint. Needs to be unique at the Azure region level.|
-| traffic | Percentage of traffic from endpoint to divert to each deployment. Traffic values need to sum to 100 |
-| auth_mode | use `key` for key based authentication and `aml_token` for Azure machine learning token-based authentication. `key` doesn't expire but `aml_token` does. Get the most recent token with the `az ml endpoint get-credentials` command). |
-| deployments | Contains a list of deployments to be created in the endpoint. In this case, we have only one deployment, named `blue`. For more on multiple deployments, see [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md)|
+| `$schema` | (Optional) The YAML schema. To see all available options in the YAML file, you can view the schema in the preceding example in a browser.|
+| `name` | The name of the endpoint. It must be unique in the Azure region.|
+| `traffic` | The percentage of traffic from the endpoint to divert to each deployment. The sum of traffic values must be 100. |
+| `auth_mode` | Use `key` for key-based authentication or use `aml_token` for Azure Machine Learning token-based authentication. `key` doesn't expire, but `aml_token` does expire. (Get the most recent token by using the `az ml endpoint get-credentials` command.) |
+| `deployments` | The list of deployments to be created in the endpoint. In this case, we have only one deployment, named `blue`. For more information about multiple deployments, see [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md).|
-Attributes of the `deployments`:
+The next table describes the attributes of `deployments`:
| Key | Description |
| | |
-| name | Name of the deployment |
-| model | In this example, we specify the model properties inline: `name`, `version`, and `local_path`. The model files will be uploaded and registered automatically. A downside of inline specification is that you must increment the version manually if you want to update the model files. Read the **Tip** in the below section for related best practices. |
-| code_configuration.code.local_path | The directory that contains all the Python source code for scoring the model. Nested directories/packages are supported. |
-| code_configuration.scoring_script | The Python file in the above scoring directory. This Python code must have an `init()` function and a `run()` function. The function `init()` will be called after the model is created or updated (you can use it to cache the model in memory, and so forth). The `run()` function is called at every invocation of the endpoint to do the actual scoring/prediction. |
-| environment | Contains the details of the environment to host the model and code. In this example, we have inline definitions that include `name`, `version`, and `path`. In this example, `environment.docker.image` will be used as the image and the `conda_file` dependencies will be installed on top of it. For more information, see the **Tip** in the below section. |
-| instance_type | The VM SKU to host your deployment instances. For more, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). |
-| scale_settings.scale_type | Currently, this value must be `manual`. To scale up or scale down after the endpoint and deployment are created, update the `instance_count` in the YAML and run the command `az ml endpoint update -n $ENDPOINT_NAME --file <yaml filepath>`.|
-| scale_settings.instance_count | Number of instances in the deployment. Base the value on the workload you expect. For high availability, Microsoft recommends you set it to at least `3`. |
+| `name` | The name of the deployment. |
+| `model` | In this example, we specify these model properties inline: `name`, `version`, and `local_path`. Model files are automatically uploaded and registered. A downside of inline specification is that you must increment the version manually if you want to update the model files. For related best practices, see the tip in the next section. |
+| `code_configuration.code.local_path` | The directory that contains all the Python source code for scoring the model. You can use nested directories and packages. |
+| `code_configuration.scoring_script` | The Python file that's in the `code_configuration.code.local_path` scoring directory. This Python code must have an `init()` function and a `run()` function. The function `init()` will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. |
+| `environment` | Contains the details of the environment to host the model and code. In this example, we have inline definitions that include `name`, `version`, and `path`. We'll use `environment.docker.image` for the image. The `conda_file` dependencies will be installed on top of the image. For more information, see the tip in the next section. |
+| `instance_type` | The VM SKU that will host your deployment instances. For more information, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). |
+| `scale_settings.scale_type` | Currently, this value must be `manual`. To scale up or scale down after you create the endpoint and deployment, update `instance_count` in the YAML and run the command `az ml endpoint update -n $ENDPOINT_NAME --file <yaml filepath>`.|
+| `scale_settings.instance_count` | The number of instances in the deployment. Base the value on the workload you expect. For high availability, we recommend that you set `scale_settings.instance_count` to at least `3`. |
-For more information on the YAML schema, see [online endpoint YAML reference](reference-online-endpoint-yaml.md) document.
+For more information about the YAML schema, see the [online endpoint YAML reference](reference-online-endpoint-yaml.md).
-> [!Note]
-> To use Azure Kubernetes Service (AKS) as a compute target instead of managed endpoints:
-> 1. Create and attach your AKS cluster as a compute target to your Azure Machine Learning workspace [using Azure ML Studio](how-to-create-attach-compute-studio.md#whats-a-compute-target)
-> 2. Use this [endpoint YAML](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/aks/simple-flow/1-create-aks-endpoint-with-blue.yml) to target AKS instead of the above managed endpoint YAML. You'll need to edit the YAML to change the value of `target` to the name of your registered compute target.
-> This article's commands, except for the optional SLA monitoring and Log Analytics integration, are interchangeable between managed and AKS endpoints.
+> [!NOTE]
+> To use Azure Kubernetes Service (AKS) instead of managed endpoints as a compute target:
+> 1. Create and attach your AKS cluster as a compute target to your Azure Machine Learning workspace by using [Azure ML Studio](how-to-create-attach-compute-studio.md#whats-a-compute-target).
+> 1. Use the [endpoint YAML](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/aks/simple-flow/1-create-aks-endpoint-with-blue.yml) to target AKS instead of the managed endpoint YAML. You'll need to edit the YAML to change the value of `target` to the name of your registered compute target.
+>
+> All the commands that are used in this article (except the optional SLA monitoring and Azure Log Analytics integration) can be used either with managed endpoints or with AKS endpoints.
-### Registering your model and environment separately
+### Register your model and environment separately
- In this example, we're specifying the model and environment properties inline: `name`, `version`, and the `local_path` from which to upload files. Under the covers, the CLI will upload the files and register the model and environment automatically. As a best practice for production, you should separately register the model and environment and specify the registered name and version in the YAML. The form is `model: azureml:my-model:1` or `environment: azureml:my-env:1`.
+In this example, we specify the model and environment properties inline: `name`, `version`, and `local_path` (where to upload files from). The CLI automatically uploads the files and registers the model and environment. As a best practice for production, you should register the model and environment and specify the registered name and version separately in the YAML. Use the form `model: azureml:my-model:1` or `environment: azureml:my-env:1`.
- To do the registration, you may extract the YAML definitions of `model` and `environment` into separate YAML files and use the commands `az ml model create` and `az ml environment create`. To learn more about these commands, run `az ml model create -h` and `az ml environment create -h`.
+For registration, you can extract the YAML definitions of `model` and `environment` into separate YAML files and use the commands `az ml model create` and `az ml environment create`. To learn more about these commands, run `az ml model create -h` and `az ml environment create -h`.
-### Using different CPU & GPU instance types
+### Use different CPU and GPU instance types
-The above YAML uses a general purpose type (`Standard_F2s_v2`) and a non-GPU Docker image (in the YAML see the `image` attribute). For GPU compute, you should choose a GPU compute type SKU and a GPU Docker image.
+The preceding YAML uses a general-purpose type (`Standard_F2s_v2`) and a non-GPU Docker image (in the YAML, see the `image` attribute). For GPU compute, choose a GPU compute type SKU and a GPU Docker image.
-You can see the supported general purpose and GPU instance types in [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). A list of Azure ML CPU & GPU base images can be found at [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers).
+For supported general-purpose and GPU instance types, see [Managed online endpoints supported VM SKUs](reference-managed-online-endpoints-vm-sku-list.md). For a list of Azure Machine Learning CPU and GPU base images, see [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers).
-### Using more than one model
+### Use more than one model
-Currently, you can specify only one model per deployment in the YAML. If you have more than one model, you can work around this limitation: when you register the model, copy all the models (as files or subdirectories) into a folder that you use for registration. In your scoring script, you can use the environment variable `AZUREML_MODEL_DIR` to get the path to the model root folder; the underlying directory structure is retained.
+Currently, you can specify only one model per deployment in the YAML. If you have more than one model, when you register the model, copy all the models as files or subdirectories into a folder that you use for registration. In your scoring script, use the environment variable `AZUREML_MODEL_DIR` to get the path to the model root folder. The underlying directory structure is retained.
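For example, a scoring script might resolve paths to several bundled models like this; the subfolder and file names are illustrative assumptions about how the models were copied before registration.

```python
import os

import joblib

def init():
    global model_a, model_b
    # AZUREML_MODEL_DIR points to the root of the registered model folder.
    root = os.getenv("AZUREML_MODEL_DIR")
    model_a = joblib.load(os.path.join(root, "model_a", "model.pkl"))
    model_b = joblib.load(os.path.join(root, "model_b", "model.pkl"))
```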
## Understand the scoring script
-> [!Tip]
-> The format of the scoring script for managed online endpoints is the same format used in the previous version of the CLI and in the Python SDK.
+> [!TIP]
+> The format of the scoring script for managed online endpoints is the same format that's used in the preceding version of the CLI and in the Python SDK.
-As referred to in the above YAML, the `code_configuration.scoring_script` must have an `init()` function and a `run()` function. This example uses this [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py). The `init()` function is called when the container is initialized/started. This initialization typically occurs shortly after the deployment is created or updated. Write logic here to do global initialization operations like caching the model in memory (as is done in this example). The `run()` function is called for every invocation of the endpoint and should do the actual scoring/prediction. In the example, we extract the data from the JSON input, call the `scikit-learn` model's `predict()` method, and return the result.
+As noted earlier, the `code_configuration.scoring_script` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py). The `init()` function is called when the container is initialized or started. Initialization typically occurs shortly after the deployment is created or updated. Write logic here for global initialization operations like caching the model in memory (as we do in this example). The `run()` function is called for every invocation of the endpoint and should do the actual scoring and prediction. In the example, we extract the data from the JSON input, call the scikit-learn model's `predict()` method, and then return the result.
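For reference, a minimal sketch of that shape for a scikit-learn regressor is shown below; the model file name and input format are illustrative, and the repository's *score.py* remains the authoritative example.

```python
import json
import os

import joblib
import numpy as np

def init():
    # Runs once when the container starts; cache the model in memory.
    global model
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")
    model = joblib.load(model_path)

def run(raw_data):
    # Runs on every request; score the incoming JSON payload.
    data = np.array(json.loads(raw_data)["data"])
    predictions = model.predict(data)
    return predictions.tolist()
```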
-## Deploy and debug locally using local endpoints
+## Deploy and debug locally by using local endpoints
-To save time in debugging, it's **highly recommended** you test-run your endpoint locally.
+To save time debugging, we *highly recommend* that you test-run your endpoint locally.
-> [!Note]
-> * To deploy locally, you must have installed [Docker engine](https://docs.docker.com/engine/install/)
-> * Your Docker engine must be running. Typically, the engine launches at startup. If it doesn't, you can [troubleshoot here](https://docs.docker.com/config/daemon/#start-the-daemon-manually).
+> [!NOTE]
+> * To deploy locally, [Docker Engine](https://docs.docker.com/engine/install/) must be installed.
+> * Docker Engine must be running. Docker Engine typically starts when the computer starts. If it doesn't, you can [troubleshoot Docker Engine](https://docs.docker.com/config/daemon/#start-the-daemon-manually).
-> [!Important]
-> The goal of a local endpoint deployment is to validate and debug your code and configuration before deploying to Azure. Local deployment has the following limitations:
-> - Local endpoints do **not** support traffic rules, authentication, scale settings, or probe settings.
-> - Local endpoints only support one deployment per endpoint. That is, in a local deployment you can't use a reference to a model or environment registered in your Azure Machine Learning workspace.
+> [!IMPORTANT]
+> The goal of a local endpoint deployment is to validate and debug your code and configuration before you deploy to Azure. Local deployment has the following limitations:
+> - Local endpoints do *not* support traffic rules, authentication, scale settings, or probe settings.
+> - Local endpoints support only one deployment per endpoint. That is, in a local deployment, you can't use a reference to a model or environment that's registered in your Azure Machine Learning workspace.
### Deploy the model locally
-To deploy the model locally, run the following command:
+To deploy the model locally:
```azurecli
az ml endpoint create --local -n $ENDPOINT_NAME -f endpoints/online/managed/simple-flow/1-create-endpoint-with-blue.yml
```
-The `--local` flag directs the CLI to deploy the endpoint in the Docker environment.
+> [!NOTE]
+> If you use a Windows operating system, use `%ENDPOINT_NAME%` instead of `$ENDPOINT_NAME` here and in subsequent commands
->[!NOTE]
->If you use a Windows operating system, use `%ENDPOINT_NAME%` instead of `$ENDPOINT_NAME` here and in subsequent commands
+The `--local` flag directs the CLI to deploy the endpoint in the Docker environment.
-### Check if the local deployment succeeded
+### Verify the local deployment succeeded
-Check if the model was deployed without error by checking the logs:
+Check the logs to see whether the model was deployed without error:
```azurecli
az ml endpoint get-logs --local -n $ENDPOINT_NAME --deployment blue
```
-### Invoke the local endpoint to score data with your model
+### Invoke the local endpoint to score data by using your model
-Invoke the endpoint to score the model by using the convenience command `invoke` and passing query parameters stored in a JSON file:
+Invoke the endpoint to score the model by using the convenience command `invoke` and passing query parameters that are stored in a JSON file:
```azurecli
az ml endpoint invoke --local -n $ENDPOINT_NAME --request-file endpoints/online/model-1/sample-request.json
```
-If you would like to use a REST client (such as curl), you need the scoring URI. You can get it using the command `az ml endpoint show --local -n $ENDPOINT_NAME`. In the returned data, you'll find an attribute named `scoring_uri`.
+If you want to use a REST client (like curl), you must have the scoring URI. To get the scoring URI, run `az ml endpoint show --local -n $ENDPOINT_NAME`. In the returned data, find the `scoring_uri` attribute.
### Review the logs for output from the invoke operation
-In the example `score.py`, the `run()` method logs some output to the console. You can view this output by using the `get-logs` command again:
+In the example *score.py* file, the `run()` method logs some output to the console. You can view this output by using the `get-logs` command again:
```azurecli
az ml endpoint get-logs --local -n $ENDPOINT_NAME --deployment blue
```
-## Deploy your managed online endpoint to Azure
+## Deploy your managed online endpoint to Azure
+
+Next, deploy your managed online endpoint to Azure.
### Deploy to Azure
-To deploy the YAML configuration to the cloud, run the following command:
+To deploy the YAML configuration to the cloud, run the following code:
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="deploy" :::
-This deployment can take approximately up to 15 minutes depending on whether the underlying environment/image is being built for the first time. Subsequent deployments using the same environment will go quicker.
-
-> [!Tip]
-> If you prefer not to block your CLI console, you may add the flag `--no-wait` to the command. However, this will stop the interactive display of the deployment status.
+This deployment might take up to 15 minutes, depending on whether the underlying environment or image is being built for the first time. Subsequent deployments that use the same environment will finish processing more quickly.
-> [!Tip]
-> Use [Troubleshooting managed online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md) to debug errors.
+> [!TIP]
+> * If you prefer not to block your CLI console, you may add the flag `--no-wait` to the command. However, this will stop the interactive display of the deployment status.
+>
+> * Use [Troubleshooting managed online endpoints deployment (preview)](how-to-troubleshoot-managed-online-endpoints.md) to debug errors.
### Check the status of the deployment
-The `show` command contains `provisioning_status` for both endpoint and deployment:
+The `show` command contains information in `provisioning_status` for endpoint and deployment:
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_status" :::
-You may list all the endpoints in the workspace in a table format with the `list` command:
+You can list all the endpoints in the workspace in a table format by using the `list` command:
```azurecli
az ml endpoint list --output table
```
-### Check if the cloud deployment succeeded
+### Check the status of the cloud deployment
-Check if the model was deployed without error by checking the logs:
+Check the logs to see whether the model was deployed without error:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_logs" :::
-By default, logs are pulled from the inference-server. If you want to see the logs from the storage-initializer (which mounts the assets such as model and code to the container), add the flag `--container storage-initializer`.
+By default, logs are pulled from inference-server. To see the logs from storage-initializer (it mounts assets like model and code to the container), add the `--container storage-initializer` flag.
-### Invoke the endpoint to score data with your model
+### Invoke the endpoint to score data by using your model
You can use either the `invoke` command or a REST client of your choice to invoke the endpoint and score some data:

::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="test_endpoint" :::
-You can again use the `get-logs` command shown previously to see the invocation logs.
+To see the invocation logs, run `get-logs` again.
-To use a REST client, you'll need the `scoring_uri` and the auth key/token. The `scoring_uri` is available in the output of the `show` command:
+To use a REST client, you must have the value for `scoring_uri` and the authentication key or token. The `scoring_uri` value is available in the output of the `show` command:
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_scoring_uri" :::
-Note how we're using the `--query` to filter attributes to only what are needed. You can learn more about `--query` at [Query Azure CLI command output](/cli/azure/query-azure-cli).
+We're using the `--query` flag to filter attributes to only what we need. To learn more about `--query`, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
-Retrieve the necessary credentials using the `get-credentials` command:
+Retrieve the required credentials by using the `get-credentials` command:
```azurecli
az ml endpoint get-credentials -n $ENDPOINT_NAME
```
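With those two values, any REST client can call the endpoint. The following is a minimal sketch using the Python `requests` package; the scoring URI, key, and request file path are illustrative.

```python
import requests

# Illustrative values: take scoring_uri from `az ml endpoint show` and the
# key or token from `az ml endpoint get-credentials`.
scoring_uri = "https://my-endpoint.westus2.inference.ml.azure.com/score"
key = "<key-or-token>"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {key}",
}

with open("endpoints/online/model-1/sample-request.json") as f:
    body = f.read()

response = requests.post(scoring_uri, data=body, headers=headers)
print(response.status_code, response.json())
```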
-### [Optional] Update the deployment
+### (Optional) Update the deployment
-If you want to update the code, model, environment, or your scale settings, update the YAML file and run the `az ml endpoint update` command.
+If you want to update the code, model, environment, or your scale settings, update the YAML file, and then run the `az ml endpoint update` command.
->[!IMPORTANT]
-> You can only modify **one** aspect (traffic, scale settings, code, model, or environment) in a single `update` command.
+> [!IMPORTANT]
+> You can modify only *one* aspect (traffic, scale settings, code, model, or environment) in a single `update` command.
To understand how `update` works:
-1. Open the file `online/model-1/onlinescoring/score.py`.
-1. Change the last line of the `init()` function: after `logging.info("Init complete")`, add `logging.info("Updated successfully")`.
-1. Save the file
-1. Run the command:
-```azurecli
-az ml endpoint update -n $ENDPOINT_NAME -f endpoints/online/managed/simple-flow/1-create-endpoint-with-blue.yml
-```
+1. Open the file *online/model-1/onlinescoring/score.py*.
+1. Change the last line of the `init()` function: After `logging.info("Init complete")`, add `logging.info("Updated successfully")`.
+1. Save the file.
+1. Run this command:
-> [!IMPORTANT]
-> Update using the YAML is declarative. That is, changes in the YAML will be reflected in the underlying Azure Resource Manager resources (endpoints & deployments). This approach facilitates [GitOps](https://www.atlassian.com/git/tutorials/gitops): *ALL* changes to endpoints/deployments go through the YAML (even `instance_count`). As a side effect, if you remove a deployment from the YAML and run `az ml endpoint update` using the file, that deployment will be deleted. You may make updates without using the YAML using the `--set ` flag, as described in the following Tip.
+ ```azurecli
+ az ml endpoint update -n $ENDPOINT_NAME -f endpoints/online/managed/simple-flow/1-create-endpoint-with-blue.yml
+ ```
-5. Because you modified the `init()` function, which runs when the endpoint is created or updates, the message `Updated successfully` will be in the logs. Retrieve the logs by running:
-```azurecli
-az ml endpoint get-logs -n $ENDPOINT_NAME --deployment blue
-```
+ > [!IMPORTANT]
+ > Updating by using YAML is declarative. That is, changes in the YAML are reflected in the underlying Azure Resource Manager resources (endpoints and deployments). A declarative approach facilitates [GitOps](https://www.atlassian.com/git/tutorials/gitops): *All* changes to endpoints and deployments (even `instance_count`) go through the YAML. As a result, if you remove a deployment from the YAML and run `az ml endpoint update` by using the file, the deployment will be deleted. You can make updates without using the YAML by using the `--set` flag.
+
+1. Because you modified the `init()` function (`init()` runs when the endpoint is created or updated), the message `Updated successfully` will be in the logs. Retrieve the logs by running:
+
+ ```azurecli
+ az ml endpoint get-logs -n $ENDPOINT_NAME --deployment blue
+ ```
-In the rare case that you want to delete and recreate your deployment because of an irresolvable issue, use:
+In the rare case that you want to delete and re-create your deployment because of an irresolvable issue, run:
```azurecli
az ml endpoint delete -n $ENDPOINT_NAME --deployment blue
```
-The `update` command works with local endpoints as well. Use the same `az ml endpoint update` command with the flag `--local`.
+The `update` command also works with local endpoints. Use the same `az ml endpoint update` command with the `--local` flag.
-> [!Tip]
-> With the `az ml endpoint update` command, you may use the [`--set` parameter available in Azure CLI](/cli/azure/use-cli-effectively#generic-update-arguments) to override attributes in your YAML **or** for setting specific attributes without passing the YAML file. Use of `--set` for single attributes is especially valuable in dev/test scenarios. For example, to scale up the `instance_count` of the first deployment, you could use the flag `--set deployments[0].scale_settings.instance_count=2`. However, since the YAML isn't updated, this technique doesn't facilitate [GitOps](https://www.atlassian.com/git/tutorials/gitops).
+> [!TIP]
+> With the `az ml endpoint update` command, you can use the [`--set` parameter in the Azure CLI](/cli/azure/use-cli-effectively#generic-update-arguments) to override attributes in your YAML *or* to set specific attributes without passing the YAML file. Using `--set` for single attributes is especially valuable in development and test scenarios. For example, to scale up the `instance_count` value for the first deployment, you could use the `--set deployments[0].scale_settings.instance_count=2` flag. However, because the YAML isn't updated, this technique doesn't facilitate [GitOps](https://www.atlassian.com/git/tutorials/gitops).
-### [Optional] Monitor SLA using Azure Monitor
+### (Optional) Monitor SLA by using Azure Monitor
-You can view metrics and set alerts based on your SLA by following instructions in [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).
+To view metrics and set alerts based on your SLA, complete the steps that are described in [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).
-### [Optional] Integrate with Log Analytics
+### (Optional) Integrate with Log Analytics
-The `get-logs` command will only provide the last few-hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to store and analyze logs durably. First, follow the steps in [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md#create-a-workspace) to create a Log Analytics workspace.
+The `get-logs` command provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs.
+
+First, create a Log Analytics workspace by completing the steps in [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md#create-a-workspace).
Then, in the Azure portal:
-1. Go to the resource group
-1. Choose your endpoint
-1. Select the **ARM resource page**
-1. Select **Diagnostic settings**
-1. Select **Add settings**: Enable sending console logs to the log analytics workspace
+1. Go to the resource group.
+1. Select your endpoint.
+1. Select the **ARM resource page**.
+1. Select **Diagnostic settings**.
+1. Select **Add settings**.
+1. Select to enable sending console logs to the Log Analytics workspace.
-Note that it might take up to an hour for the logs to be connected. Send some scoring requests after this time period and then check the logs using the following steps:
+The logs might take up to an hour to connect. After an hour, send some scoring requests, and then check the logs by using the following steps:
-1. Open the Log Analytics workspace
-1. Select **Logs** in the left navigation area
-1. Close the **Queries** popup that automatically opens
-1. Double-click on **AmlOnlineEndpointConsoleLog**
-1. Select *Run*
+1. Open the Log Analytics workspace.
+1. In the left menu, select **Logs**.
+1. Close the **Queries** dialog that automatically opens.
+1. Double-click **AmlOnlineEndpointConsoleLog**.
+1. Select **Run**.
-## Delete the endpoint and deployment
+## Delete the endpoint and the deployment
-If you aren't going use the deployment, you should delete it with the below command (it deletes the endpoint and all the underlying deployments):
+If you aren't going to use the deployment, you should delete it by running the following code (it deletes the endpoint and all the underlying deployments):
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="delete_endpoint" :::

## Next steps
+To learn more, review these articles:
+ - [Deploy models with REST (preview)](how-to-deploy-with-rest.md)
- [Create and use managed online endpoints (preview) in the studio](how-to-use-managed-online-endpoint-studio.md)
- [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md)
- [Use batch endpoints (preview) for batch scoring](how-to-use-batch-endpoint.md)
- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)
-- [Tutorial: Access Azure resources with a managed online endpoint and system-managed identity (preview)](tutorial-deploy-managed-endpoints-using-system-managed-identity.md)
-- [Troubleshooting managed online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md)
+- [Tutorial: Access Azure resources by using a managed online endpoint and system-managed identity (preview)](tutorial-deploy-managed-endpoints-using-system-managed-identity.md)
+- [Troubleshoot managed online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-studio-virtual-network.md
Previously updated : 06/11/2021 Last updated : 07/13/2021
See the other articles in this series:
[1. VNet overview](how-to-network-security-overview.md) > [2. Secure the workspace](how-to-secure-workspace-vnet.md) > [3. Secure the training environment](how-to-secure-training-vnet.md) > [4. Secure the inferencing environment](how-to-secure-inferencing-vnet.md) > **5. Enable studio functionality**
-> [!IMPORTANT]
-> If your workspace is in a __sovereign cloud__, such as Azure Government or Azure China 21Vianet, integrated notebooks _do not_ support using storage that is in a virtual network. Instead, you can use Jupyter Notebooks from a compute instance. For more information, see the [Access data in a Compute Instance notebook](how-to-secure-training-vnet.md#access-data-in-a-compute-instance-notebook) section.
- ## Prerequisites + Read the [Network security overview](how-to-network-security-overview.md) to understand common virtual network scenarios and architecture.
See the other articles in this series:
+ An existing [Azure storage account added your virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts-with-service-endpoints).
+## Limitations
+
+### Azure Storage Account
+
+There's a known issue where the default file store does not automatically create the `azureml-filestore` folder, which is required to submit AutoML experiments. This occurs when users bring an existing filestore to set as the default filestore during workspace creation.
+
+To avoid this issue, you have two options: 1) Use the default filestore, which is automatically created for you during workspace creation. 2) Bring your own filestore, and make sure the filestore is outside of the VNet during workspace creation. After the workspace is created, add the storage account to the virtual network.
+
+To resolve this issue, remove the filestore account from the virtual network then add it back to the virtual network.
## Datastore: Azure Storage Account Use the following steps to enable access to data stored in Azure Blob and File storage:
Use the following steps to enable access to data stored in Azure Blob and File s
|Workspace default blob storage| Stores model assets from the designer. Enable managed identity authentication on this storage account to deploy models in the designer. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, deployment will fail regardless of any other datastores in use.|
|Workspace default file store| Stores AutoML experiment assets. Enable managed identity authentication on this storage account to submit AutoML experiments. |
- > [!WARNING]
- > There's a known issue where the default file store does not automatically create the `azureml-filestore` folder, which is required to submit AutoML experiments. This occurs when users bring an existing filestore to set as the default filestore during workspace creation.
- >
- > To avoid this issue, you have two options: 1) Use the default filestore which is automatically created for you doing workspace creation. 2) To bring your own filestore, make sure the filestore is outside of the VNet during workspace creation. After the workspace is created, add the storage account to the virtual network.
- >
- > To resolve this issue, remove the filestore account from the virtual network then add it back to the virtual network.
- 1. **Configure datastores to use managed identity authentication**. After you add an Azure storage account to your virtual network with a either a [service endpoint](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts-with-service-endpoints) or [private endpoint](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts-with-private-endpoints), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account. Azure Machine Learning uses [datastores](concept-data.md#datastores) to connect to storage accounts. When creating a new datastore, use the following steps to configure a datastore to use managed identity authentication:
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-log-view-metrics.md
Use the following methods in the logging APIs to influence the metrics visualiza
|Log image|`run.log_image(name='food', path='./breadpudding.jpg', plot=None, description='desert')`|Use this method to log an image file or a matplotlib plot to the run. These images will be visible and comparable in the run record|

## Logging with MLflow
-Use MLFlowLogger to log metrics.
-```python
-from azureml.core import Run
-# connect to the workspace from within your running code
-run = Run.get_context()
-ws = run.experiment.workspace
+We recommend logging your models, metrics, and artifacts with MLflow because it's open source and supports portability from local runs to the cloud. The following table and code examples show how to use MLflow to log metrics and artifacts from your training runs.
+[Learn more about MLflow's logging methods and design patterns](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.log_artifact).
-# workspace has associated ml-flow-tracking-uri
-mlflow_url = ws.get_mlflow_tracking_uri()
+Be sure to install the `mlflow` and `azureml-mlflow` pip packages to your workspace.
-#Example: PyTorch Lightning
-from pytorch_lightning.loggers import MLFlowLogger
+```conda
+pip install mlflow
+pip install azureml-mlflow
+```
+
+Set the MLflow tracking URI to point at the Azure Machine Learning backend to ensure that your metrics and artifacts are logged to your workspace.
-mlf_logger = MLFlowLogger(experiment_name=run.experiment.name, tracking_uri=mlflow_url)
-mlf_logger._run_id = run.id
+```python
+from azureml.core import Workspace
+import mlflow
+from mlflow.tracking import MlflowClient
+
+ws = Workspace.from_config()
+mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+
+mlflow.create_experiment("mlflow-experiment")
+mlflow.set_experiment("mlflow-experiment")
+mlflow_run = mlflow.start_run()
```
+|Logged Value|Example code| Notes|
+|-|-|-|
+|Log a numeric value (int or float) | `mlflow.log_metric('my_metric', 1)`| |
+|Log a boolean value | `mlflow.log_metric('my_metric', 0)`| 0 = False, 1 = True|
+|Log a string | `mlflow.log_text('foo', 'my_string')`| Logged as an artifact|
+|Log numpy metrics or PIL image objects|`mlflow.log_image(img, 'figure.png')`||
+|Log matplotlib plot or image file|`mlflow.log_figure(fig, "figure.png")`||
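For example, a short sketch that uses a few of these calls against the run started above; the metric names and values are illustrative.

```python
# Log against the active run created by mlflow.start_run() above.
mlflow.log_metric("accuracy", 0.91)                              # numeric value
mlflow.log_metric("is_calibrated", 1)                            # boolean logged as a number
mlflow.log_text("notes about this training run", "notes.txt")   # stored as an artifact

# End the run when training is finished.
mlflow.end_run()
```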
+ ## View run metrics via the SDK
-You can view the metrics of a trained model using ```run.get_metrics()```. See the example below.
+You can view the metrics of a trained model using `run.get_metrics()`.
```python
from azureml.core import Run
run = Run.get_context()
run.log('metric-name', metric_value)
metrics = run.get_metrics()
-# metrics is of type Dict[str, List[float]] mapping mertic names
+# metrics is of type Dict[str, List[float]] mapping metric names
# to a list of the values for that metric in the given run.
metrics.get('metric-name') # list of metrics in the order they were recorded
```
+You can also access run information using MLflow through the run object's data and info properties. See the [MLflow.entities.Run object](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) documentation for more information.
+
+After the run completes, you can retrieve it by using `MlflowClient()`.
+
+```python
+from mlflow.tracking import MlflowClient
+
+# Use MLflow to retrieve the run that was just completed
+client = MlflowClient()
+finished_mlflow_run = client.get_run(mlflow_run.info.run_id)
+```
+
+You can view the metrics, parameters, and tags for the run in the data field of the run object.
+
+```python
+metrics = finished_mlflow_run.data.metrics
+tags = finished_mlflow_run.data.tags
+params = finished_mlflow_run.data.params
+```
+
+>[!NOTE]
+> The metrics dictionary under `mlflow.entities.Run.data.metrics` only returns the most recently logged value for a given metric name. For example, if you log, in order, 1, then 2, then 3, then 4 to a metric called `sample_metric`, only 4 is present in the metrics dictionary for `sample_metric`.
+>
+> To get all metrics logged for a particular metric name, you can use [`MlFlowClient.get_metric_history()`](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.get_metric_history).
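As a sketch, retrieving the full history of a metric named `sample_metric` with the `client` and `mlflow_run` objects from the snippets above might look like:

```python
# get_metric_history returns one mlflow.entities.Metric entry per logged value, in order.
history = client.get_metric_history(mlflow_run.info.run_id, "sample_metric")
for metric in history:
    print(metric.step, metric.value)
```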
+ <a name="view-the-experiment-in-the-web-portal"></a>
-## View run metrics in AML studio UI
+## View run metrics in the studio UI
You can browse completed run records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-datasets.md
--++ Last updated 06/25/2020
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-inferencing-vnet.md
-- Previously updated : 06/14/2021++ Last updated : 07/13/2021
In this article you learn how to secure the following inferencing resources in a
For more information on Azure RBAC with networking, see the [Networking built-in roles](../role-based-access-control/built-in-roles.md#networking)
+## Limitations
+
+### Azure Container Instances
+
+* When using Azure Container Instances in a virtual network, the virtual network must be in the same resource group as your Azure Machine Learning workspace.
+* If your workspace has a __private endpoint__, the virtual network used for Azure Container Instances must be the same as the one used by the workspace private endpoint.
+* When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace can't be in the virtual network.
+
<a id="aksvnet"></a>

## Azure Kubernetes Service
aks_target.wait_for_completion(show_output = True)
## Enable Azure Container Instances (ACI)
-Azure Container Instances are dynamically created when deploying a model. To enable Azure Machine Learning to create ACI inside the virtual network, you must enable __subnet delegation__ for the subnet used by the deployment.
-
-> [!WARNING]
-> When using Azure Container Instances in a virtual network, the virtual network must be:
-> * In the same resource group as your Azure Machine Learning workspace.
-> * If your workspace has a __private endpoint__, the virtual network used for Azure Container Instances must be the same as the one used by the workspace private endpoint.
->
-> When using Azure Container Instances inside the virtual network, the Azure Container Registry (ACR) for your workspace cannot be in the virtual network.
-
-To use ACI in a virtual network to your workspace, use the following steps:
+Azure Container Instances are dynamically created when deploying a model. To enable Azure Machine Learning to create ACI inside the virtual network, you must enable __subnet delegation__ for the subnet used by the deployment. To use ACI in a virtual network with your workspace, use the following steps (a sketch of the resulting deployment configuration follows the steps):
1. To enable subnet delegation on your virtual network, use the information in the [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md) article. You can enable delegation when creating a virtual network, or add it to an existing network.
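After the subnet is delegated, a deployment configuration that targets that network might look like the following sketch (the virtual network and subnet names are placeholders):

```python
from azureml.core.webservice import AciWebservice

# vnet_name and subnet_name are placeholders for the virtual network and the
# subnet that was delegated to Azure Container Instances.
aci_config = AciWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=2,
    vnet_name="my-vnet",
    subnet_name="aci-delegated-subnet",
)
```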
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-training-vnet.md
Previously updated : 06/30/2021 Last updated : 07/20/2021
In this article you learn how to secure the following training compute resources
For more information on Azure RBAC with networking, see the [Networking built-in roles](../role-based-access-control/built-in-roles.md#networking)
-## Required public internet access
+### Azure Machine Learning compute cluster/instance
+* The virtual network must be in the same subscription as the Azure Machine Learning workspace.
+* The subnet used for the compute instance or cluster must have enough unassigned IP addresses.
-For information on using a firewall solution, see [Use a firewall with Azure Machine Learning](how-to-access-azureml-behind-firewall.md).
+ * A compute cluster can dynamically scale. If there aren't enough unassigned IP addresses, the cluster will be partially allocated.
+ * A compute instance only requires one IP address.
-## <a name="compute-instance"></a>Compute clusters & instances
+* Make sure that there are no security policies or locks that restrict permissions to manage the virtual network. When checking for policies or locks, look at both the subscription and the resource group for the virtual network.
+* If you plan to secure the virtual network by restricting traffic, see the [Required public internet access](#required-public-internet-access) section.
+* The subnet used to deploy compute cluster/instance shouldn't be delegated to any other service. For example, it shouldn't be delegated to ACI.
-To use either a [managed Azure Machine Learning __compute target__](concept-compute-target.md#azure-machine-learning-compute-managed) or an [Azure Machine Learning compute __instance__](concept-compute-instance.md) in a virtual network, the following network requirements must be met:
-> [!div class="checklist"]
-> * The virtual network must be in the same subscription as the Azure Machine Learning workspace.
-> * The subnet that's specified for the compute instance or cluster must have enough unassigned IP addresses to accommodate the number of VMs that are targeted. If the subnet doesn't have enough unassigned IP addresses, a compute cluster will be partially allocated.
-> * Check to see whether your security policies or locks on the virtual network's subscription or resource group restrict permissions to manage the virtual network.
-> * If you plan to secure the virtual network by restricting traffic, see the [Required public internet access](#required-public-internet-access) section.
-> * If you're going to put multiple compute instances or clusters in one virtual network, you might need to request a quota increase for one or more of your resources.
-> * If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network and subnet as the Azure Machine Learning compute instance or cluster. Please configure your storage firewall settings to allow communication to virtual network and subnet compute resides in. Please note selecting checkbox for "Allow trusted Microsoft services to access this account" is not sufficient to allow communication from compute.
-> * For compute instance Jupyter functionality to work, ensure that web socket communication is not disabled. Please ensure your network allows websocket connections to *.instances.azureml.net and *.instances.azureml.ms.
-> * When compute instance is deployed in a private link workspace it can be only be accessed from within virtual network. If you are using custom DNS or hosts file please add an entry for `<instance-name>.<region>.instances.azureml.ms` with private IP address of workspace private endpoint. For more information, see the [custom DNS](./how-to-custom-dns.md) article.
-> * The subnet used to deploy compute cluster/instance should not be delegated to any other service like ACI
-> * Virtual network service endpoint policies do not work for compute cluster/instance system storage accounts
-> * If storage and compute instance are in different regions you might see intermittent timeouts
+### Azure Databricks
+
+* The virtual network must be in the same subscription and region as the Azure Machine Learning workspace.
+* If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network as the Azure Databricks cluster.
+
+## Limitations
+
+### Azure Machine Learning compute cluster/instance
+
+* If you put multiple compute instances or clusters in one virtual network, you may need to request a quota increase for one or more of your resources. The Machine Learning compute instance or cluster automatically allocates additional networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
+
+ * One network security group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
+
+ * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag.
+ * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
+
+ The following screenshot shows an example of these rules:
+
+ :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG":::
+
+ * One public IP address. If you have an Azure policy that prohibits public IP creation, deployment of the cluster or instance will fail.
+ * One load balancer
+
+ For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
+
+ For a compute instance, these resources are kept until the instance is deleted. Stopping the instance does not remove the resources.
-### Dynamically allocated resources
+ > [!IMPORTANT]
+ > These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked, deletion of the compute cluster or instance will fail. The load balancer can't be deleted until the compute cluster or instance is deleted. Also make sure there is no Azure policy that prohibits the creation of network security groups.
-The Machine Learning compute instance or cluster automatically allocates additional networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
+* If the Azure Storage Accounts for the workspace are also in the virtual network, use the following guidance on subnet limitations:
-* One network security group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
+ * If you plan to use Azure Machine Learning __studio__ to visualize data or use designer, the storage account must be __in the same subnet as the compute instance or cluster__.
+ * If you plan to use the __SDK__, the storage account can be in a different subnet.
- * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag.
- * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
+ > [!NOTE]
+ > Selecting the checkbox for "Allow trusted Microsoft services to access this account" is not sufficient to allow communication from the compute.
- The following screenshot shows an example of these rules:
+* When your workspace uses a private endpoint, the compute instance can only be accessed from inside the virtual network. If you use a custom DNS or hosts file, add an entry for `<instance-name>.<region>.instances.azureml.ms`. Map this entry to the private IP address of the workspace private endpoint. For more information, see the [custom DNS](./how-to-custom-dns.md) article.
+* Virtual network service endpoint policies don't work for compute cluster/instance system storage accounts.
+* If storage and compute instance are in different regions, you may see intermittent timeouts.
+* If you want to use Jupyter Notebooks on a compute instance:
- :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG":::
+ * Don't disable websocket communication. Make sure your network allows websocket communication to `*.instances.azureml.net` and `*.instances.azureml.ms`.
+ * Make sure that your notebook is running on a compute resource behind the same virtual network and subnet as your data. When creating the compute instance, use **Advanced settings** > **Configure virtual network** to select the network and subnet.
-* One public IP address. If you have Azure policy prohibiting Public IP creation then deployment of cluster/instances will fail
-* One load balancer
+* __Compute clusters__ can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. When using a different region for the cluster, the following limitations apply:
-For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
+ * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
+ * If you are using a private endpoint-enabled workspace, creating the cluster in a different region is __not supported__.
+ * You may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
-For a compute instance, these resources are kept until the instance is deleted. Stopping the instance does not remove the resources.
+ Guidance such as NSG rules, user-defined routes, and input/output requirements applies as normal when using a different region than the workspace.
-> [!IMPORTANT]
-> These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked then deletion of compute cluster/instance will fail. Load balancer cannot be deleted until the compute cluster/instance is deleted. Also please ensure there is no Azure policy which prohibits creation of network security groups.
+### Azure Databricks
-### Create a compute cluster in a virtual network
+* In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
-> [!IMPORTANT]
-> Compute clusters can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. When using a different region for the cluster, the following limitations apply:
->
-> * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
-> * If you are using a private endpoint-enabled workspace, creating the cluster in a different region is __not supported__.
-> * You may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
+### Azure HDInsight or virtual machine
-Guidance such as using NSG rules, user-defined routes, and input/output requirements, apply as normal when using a different region than the workspace.
+* Azure Machine Learning supports only virtual machines that are running Ubuntu.
+
+## Required public internet access
++
+For information on using a firewall solution, see [Use a firewall with Azure Machine Learning](how-to-access-azureml-behind-firewall.md).
+
+## <a name="compute-instance"></a>Compute clusters & instances
+
+Use the tabs below to select how you plan to create a compute cluster:
# [Studio](#tab/azure-studio)
When the creation process finishes, you train your model by using the cluster in
[!INCLUDE [low-pri-note](../../includes/machine-learning-low-pri-vm.md)]
-### Access data in a compute instance notebook
-
-If you're using notebooks on an Azure Machine Learning compute instance, you must ensure that your notebook is running on a compute resource behind the same virtual network and subnet as your data.
-
-You must configure your Compute Instance to be in the same virtual network during creation under **Advanced settings** > **Configure virtual network**. You cannot add an existing Compute Instance to a virtual network.
- ## Azure Databricks
-To use Azure Databricks in a virtual network with your workspace, the following requirements must be met:
-
-> [!div class="checklist"]
-> * The virtual network must be in the same subscription and region as the Azure Machine Learning workspace.
-> * If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network as the Azure Databricks cluster.
-> * In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
For specific information on using Azure Databricks with a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).

<a id="vmorhdi"></a>

## Virtual machine or HDInsight cluster
-> [!IMPORTANT]
-> Azure Machine Learning supports only virtual machines that are running Ubuntu.
In this section, you learn how to use a virtual machine or Azure HDInsight cluster in a virtual network with your workspace.

### Create the VM or HDInsight cluster
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-with-datasets.md
with open(mounted_input_path, 'r') as f:
Mounting or downloading files of any format are supported for datasets created from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL.
-When you **mount** a dataset, you attach the files referenced by the dataset to a directory (mount point) and make it available on the compute target. Mounting is supported for Linux-based computes, including Azure Machine Learning Compute, virtual machines, and HDInsight.
+When you **mount** a dataset, you attach the files referenced by the dataset to a directory (mount point) and make it available on the compute target. Mounting is supported for Linux-based computes, including Azure Machine Learning Compute, virtual machines, and HDInsight. If your data size exceeds the compute disk size, downloading is not possible. For this scenario, we recommend mounting since only the data files used by your script are loaded at the time of processing.
-When you **download** a dataset, all the files referenced by the dataset will be downloaded to the compute target. Downloading is supported for all compute types.
+When you **download** a dataset, all the files referenced by the dataset will be downloaded to the compute target. Downloading is supported for all compute types. If your script processes all files referenced by the dataset, and your compute disk can fit your full dataset, downloading is recommended to avoid the overhead of streaming data from storage services. For multi-node downloads see [how to avoid throttling](#troubleshooting).
> [!NOTE]
> The download path name should not be longer than 255 alpha-numeric characters for Windows OS. For Linux OS, the download path name should not be longer than 4,096 alpha-numeric characters. Also, for Linux OS the file name (which is the last segment of the download path `/path/to/file/{filename}`) should not be longer than 255 alpha-numeric characters.
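As a sketch (assuming `dataset` is a `FileDataset` created earlier and the files fit on the compute disk), a download might look like:

```python
# Downloads every file referenced by the dataset to the target path on the compute.
file_paths = dataset.download(target_path='/tmp/dataset-files', overwrite=True)
print(file_paths[:5])  # local paths of the first few downloaded files
```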
-If your script processes all files referenced by the dataset, and your compute disk can fit your full dataset, downloading is recommended to avoid the overhead of streaming data from storage services. If your data size exceeds the compute disk size, downloading is not possible. For this scenario, we recommend mounting since only the data files used by your script are loaded at the time of processing.
The following code mounts `dataset` to the temp directory at `mounted_path`:

```python
import tempfile

mounted_path = tempfile.mkdtemp()
```
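A sketch of how the mount is typically used from here (assuming `dataset` is a `FileDataset`):

```python
# Mount the dataset and read files through the mount point.
mount_context = dataset.mount(mounted_path)
mount_context.start()   # dataset files are now visible under mounted_path
# ... read files under mounted_path ...
mount_context.stop()    # unmount when finished
```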
src.run_config.source_directory_data_store = "workspaceblobstore"
* If you are using `azureml-sdk<1.12.0`, upgrade to the latest version.
* If you have outbound NSG rules, make sure there is an outbound rule that allows all traffic for the service tag `AzureResourceMonitor`.
+**Dataset initialization failed: StreamAccessException was caused by ThrottlingException**
+
+For multi-node file downloads, all nodes may attempt to download all files in the file dataset from the Azure Storage service, which results in a throttling error. To avoid throttling, initially set the environment variable `AZUREML_DOWNLOAD_CONCURRENCY` to a value of 8 times the number of CPU cores divided by the number of nodes. Finding the right value for this environment variable may require some experimentation, so treat this guidance as a starting point.
+
+The following example assumes 32 cores and 4 nodes.
+
+```python
+from azureml.core.environment import Environment
+myenv = Environment(name="myenv")
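+# 8 * 32 CPU cores / 4 nodes = 64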
+myenv.environment_variables = {"AZUREML_DOWNLOAD_CONCURRENCY":64}
+```
+
### AzureFile storage

**Unable to upload project files to working directory in AzureFile because the storage is overloaded**:
machine-learning How To Troubleshoot Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-managed-online-endpoints.md
As a part of local deployment the following steps take place:
- Docker either builds a new container image or pulls an existing image from the local Docker cache. An existing image is used if there's one that matches the environment part of the specification file.
- Docker starts a new container with mounted local artifacts such as model and code files.
-For more, see [Deploy locally in Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-using-local-endpoints).
+For more, see [Deploy locally in Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints).
## Get container logs
machine-learning How To Use Managed Online Endpoint Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md
You can also create a managed online endpoint from the **Models** page in the st
1. Select a model by checking the circle next to the model name.
1. Select **Deploy** > **Deploy to endpoint (preview)**.

Follow the setup wizard to configure your managed online endpoint.

## View managed online endpoints (preview)

You can view your managed online endpoints (preview) in the **Endpoints** page. Use the endpoint details page to find critical information including the endpoint URI, status, testing tools, activity monitors, deployment logs, and sample consumption code:
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/resource-curated-environments.md
Previously updated : 4/2/2021 Last updated : 07/08/2021

# Azure Machine Learning Curated Environments
Last updated 4/2/2021
This article lists the curated environments in Azure Machine Learning. Curated environments are provided by Azure Machine Learning and are available in your workspace by default. They are backed by cached Docker images that use the latest version of the Azure Machine Learning SDK, reducing the run preparation cost and allowing for faster deployment time. Use these environments to quickly get started with various machine learning frameworks.

> [!NOTE]
-> This list is updated as of April 2021. Use the [Python SDK](how-to-use-environments.md), [CLI](/cli/azure/ml/environment?view=azure-cli-latest&preserve-view=true#az_ml_environment_list), or Azure Machine Learning [studio](how-to-manage-environments-in-studio.md) to get the most updated list of environments and their dependencies. For more information, see the [environments article](how-to-use-environments.md#use-a-curated-environment). Following the release of this new set, previous curated environments will be hidden but can still be used.
-
+> This list is updated as of July 2021. Use the [Python SDK](how-to-use-environments.md), [CLI](/cli/azure/ml/environment?view=azure-cli-latest&preserve-view=true#az_ml_environment_list), or Azure Machine Learning [studio](how-to-manage-environments-in-studio.md) to get the most updated list of environments and their dependencies. For more information, see the [environments article](how-to-use-environments.md#use-a-curated-environment). Following the release of this new set, previous curated environments will be hidden but can still be used.
## PyTorch
+
+**Name** - AzureML-pytorch-1.7-ubuntu18.04-py37-cuda11-gpu
+**Description** - An environment for deep learning with PyTorch containing the AzureML Python SDK and additional python packages.
+**Dockerfile configuration** - The following Dockerfile can be customized for your personal workflows:
+
+```dockerfile
+FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20210615.v1
+
+ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/pytorch-1.7
+
+# Create conda environment
+RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
+ python=3.7 \
+ pip=20.2.4 \
+ pytorch=1.7.1 \
+ torchvision=0.8.2 \
+ torchaudio=0.7.2 \
+ cudatoolkit=11.0 \
+ nvidia-apex=0.1.0 \
+ -c anaconda -c pytorch -c conda-forge
+
+# Prepend path to AzureML conda environment
+ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH
+
+# Install pip dependencies
+RUN HOROVOD_WITH_PYTORCH=1 \
+ pip install 'matplotlib>=3.3,<3.4' \
+ 'psutil>=5.8,<5.9' \
+ 'tqdm>=4.59,<4.60' \
+ 'pandas>=1.1,<1.2' \
+ 'scipy>=1.5,<1.6' \
+ 'numpy>=1.10,<1.20' \
+ 'azureml-core==1.30.0' \
+ 'azureml-defaults==1.30.0' \
+ 'azureml-mlflow==1.30.0' \
+ 'azureml-telemetry==1.30.0' \
+ 'tensorboard==2.4.0' \
+ 'tensorflow-gpu==2.4.1' \
+ 'onnxruntime-gpu>=1.7,<1.8' \
+ 'horovod[pytorch]==0.21.3' \
+ 'future==0.17.1'
+
+# This is needed for mpi to locate libpython
+ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
+```
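If you customize a copy of this Dockerfile, one way to turn it into an Azure Machine Learning environment with the Python SDK is sketched below (the environment name and file path are placeholders):

```python
from azureml.core import Environment

# Read a locally customized copy of the Dockerfile above.
with open("Dockerfile") as f:
    dockerfile_contents = f.read()

custom_env = Environment(name="my-pytorch-custom")
custom_env.docker.base_image = None                   # build from the Dockerfile instead of a base image
custom_env.docker.base_dockerfile = dockerfile_contents
custom_env.python.user_managed_dependencies = True    # dependencies come from the image itself
```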
## LightGBM
+
+**Name** - AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu
+**Description** - An environment for machine learning with Scikit-learn, LightGBM, XGBoost, Dask containing the AzureML Python SDK and additional packages.
+**Dockerfile configuration** - The following Dockerfile can be customized for your personal workflows:
+
+```dockerfile
+FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210615.v1
+
+ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/lightgbm
+
+# Create conda environment
+RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
+ python=3.7 pip=20.2.4
+
+# Prepend path to AzureML conda environment
+ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH
+
+# Install pip dependencies
+RUN HOROVOD_WITH_TENSORFLOW=1 \
+ pip install 'matplotlib>=3.3,<3.4' \
+ 'psutil>=5.8,<5.9' \
+ 'tqdm>=4.59,<4.60' \
+ 'pandas>=1.1,<1.2' \
+ 'numpy>=1.10,<1.20' \
+ 'scipy~=1.5.0' \
+ 'scikit-learn~=0.24.1' \
+ 'xgboost~=1.4.0' \
+ 'lightgbm~=3.2.0' \
+ 'dask~=2021.6.0' \
+ 'distributed~=2021.6.0' \
+ 'dask-ml~=1.9.0' \
+ 'adlfs~=0.7.0' \
+ 'azureml-core==1.30.0' \
+ 'azureml-defaults==1.30.0' \
+ 'azureml-mlflow==1.30.0' \
+ 'azureml-telemetry==1.30.0'
+
+# This is needed for mpi to locate libpython
+ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
+```
## Sklearn
+**Name** - AzureML-sklearn-0.24-ubuntu18.04-py37-cuda11-gpu
+**Description** - An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the AzureML Python SDK and additional python packages.
+**Dockerfile configuration** - The following Dockerfile can be customized for your personal workflows:
+
+```dockerfile
+FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20210615.v1
+
+ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/sklearn-0.24.1
+
+# Create conda environment
+RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
+ python=3.7 pip=20.2.4
+
+# Prepend path to AzureML conda environment
+ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH
+
+# Install pip dependencies
+RUN pip install 'matplotlib>=3.3,<3.4' \
+ 'psutil>=5.8,<5.9' \
+ 'tqdm>=4.59,<4.60' \
+ 'pandas>=1.1,<1.2' \
+ 'scipy>=1.5,<1.6' \
+ 'numpy>=1.10,<1.20' \
+ 'azureml-core==1.30.0' \
+ 'azureml-defaults==1.30.0' \
+ 'azureml-mlflow==1.30.0' \
+ 'azureml-telemetry==1.30.0' \
+ 'scikit-learn==0.24.1'
+
+# This is needed for mpi to locate libpython
+ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
+```
## TensorFlow
+
+**Name** - AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
+**Description** - An environment for deep learning with Tensorflow containing the AzureML Python SDK and additional python packages.
+**Dockerfile configuration** - The following Dockerfile can be customized for your personal workflows:
+
+```dockerfile
+FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20210615.v1
+
+ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/tensorflow-2.4
+
+# Create conda environment
+RUN conda create -p $AZUREML_CONDA_ENVIRONMENT_PATH \
+ python=3.7 pip=20.2.4
+
+# Prepend path to AzureML conda environment
+ENV PATH $AZUREML_CONDA_ENVIRONMENT_PATH/bin:$PATH
+
+# Install pip dependencies
+RUN HOROVOD_WITH_TENSORFLOW=1 \
+ pip install 'matplotlib>=3.3,<3.4' \
+ 'psutil>=5.8,<5.9' \
+ 'tqdm>=4.59,<4.60' \
+ 'pandas>=1.1,<1.2' \
+ 'scipy>=1.5,<1.6' \
+ 'numpy>=1.10,<1.20' \
+ 'azureml-core==1.30.0' \
+ 'azureml-defaults==1.30.0' \
+ 'azureml-mlflow==1.30.0' \
+ 'azureml-telemetry==1.30.0' \
+ 'tensorboard==2.4.0' \
+ 'tensorflow-gpu==2.4.1' \
+ 'onnxruntime-gpu>=1.7,<1.8' \
+ 'horovod[tensorflow-gpu]==0.21.3'
+
+# This is needed for mpi to locate libpython
+ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
+```
+
+## Automated ML (AutoML)
+
+Azure ML pipeline training workflows that use AutoML automatically select a curated environment based on the compute type and whether DNN is enabled. AutoML provides the following curated environments:
+
+| Name | Compute Type | DNN enabled |
+| | | |
+| AzureML-AutoML | CPU | No |
+| AzureML-AutoML-DNN | CPU | Yes |
+| AzureML-AutoML-GPU | GPU | No |
+| AzureML-AutoML-DNN-GPU | GPU | Yes |
+
+For more information on AutoML and Azure ML pipelines, see [use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md).
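Any curated environment, including the AutoML ones above, can be retrieved by name with the Python SDK, as in this sketch:

```python
from azureml.core import Environment, Workspace

ws = Workspace.from_config()

# Curated environments are read-only; clone one if you need to modify it.
automl_env = Environment.get(workspace=ws, name="AzureML-AutoML")
print(automl_env.name)
```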
## Inference only curated environments and prebuilt docker images
+To read about inference-only curated environments and the MCR path for prebuilt Docker images, see [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md#list-of-prebuilt-docker-images-for-inference).
marketplace Analytics Make Your First Api Call https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/analytics-make-your-first-api-call.md
Curl
## Next steps -- You can try out the APIs through the [Swagger API URL](https://swagger.io/docs/specification/api-host-and-base-path/)
+- You can try out the APIs through the [Swagger API URL](https://api.partnercenter.microsoft.com/insights/v1/cmp/swagger/index.html)
- [Programmatic access paradigm](analytics-programmatic-access.md)
marketplace Private Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/private-offers.md
Last updated 02/22/2021
# Private offers in the Microsoft commercial marketplace
-Private offers, also called private plans enable publishers to create plans that are only visible to targeted customers. This article discusses the options and benefits of private offers.
+Private offers, also called private plans, enable publishers to create plans that are only visible to targeted customers. This article discusses the options and benefits of private offers.
## Unlock enterprise deals with private offers
Once an offer has been certified and published, customers can be updated or remo
Once signed into the Azure portal, customers can follow these steps to select your private offers.
-1. Login to [Azure portal](https://ms.portal.azure.com/).
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
1. Under **Azure services**, select **Create a resource**. 1. On the **New** page, next to **Azure Marketplace**, select **See all**. The Marketplace page appears. 1. In the left navigation, select **Private Offers**.
media-services Account Move Account How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/account-move-account-how-to.md
To start managing, encrypting, encoding, analyzing, and streaming media content
[!INCLUDE [account creation note](./includes/note-2020-05-01-account-creation.md)]
-## Moving a Media Services account between subscriptions
+## Media Services account names
-If you need to move a Media Services account to a new subscription, you need to first move the entire resource group that contains the Media Services account to the new subscription. You must move all attached resources: Azure Storage accounts, Azure CDN profiles, etc. For more information, see [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). As with any resources in Azure, resource group moves can take some time to complete.
+Media Services account names must be all lowercase letters or numbers with no spaces, and between 3 to 24 characters in length. Media Services account names must be unique within an Azure location.
+
+When a Media Services account is deleted, the account name is reserved for one year. For a year after the account is deleted, the account name may only be reused in the same Azure location by the subscription that contained the original account.
+
+Media Services account names are used in DNS names, including for Key Delivery, Live Events, and Streaming Endpoint names. If you have configured firewalls or proxies to allow Media Services DNS names, ensure these configurations are removed within a year of deleting a Media Services account.
-> [!NOTE]
-> Media Services v3 supports multi-tenancy model.
+## Moving a Media Services account between subscriptions
+
+If you need to move a Media Services account to a new subscription, you need to first move the entire resource group that contains the Media Services account to the new subscription. You must move all attached resources: Azure Storage accounts, Azure CDN profiles, etc. For more information, see [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md). As with any resources in Azure, resource group moves can take some time to complete.
### Considerations
migrate Migrate V1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-v1.md
After you configure a workspace, you download and install agents on each on-prem
4. Copy the workspace ID and key. You need these when you install the MMA on the on-premises machine. > [!NOTE]
-> To automate the installation of agents you can use a deployment tool such as Configuration Manager or a partner tool such a, [Intigua](https://www.intigua.com/intigua-for-azure-migration), that provides an agent deployment solution for Azure Migrate.
+> To automate the installation of agents, you can use a deployment tool such as Configuration Manager, or a partner tool such as Intigua, which provides an agent deployment solution for Azure Migrate.
#### Install the MMA agent on a Windows machine
VMConnection
## Next steps
-[Learn about](migrate-services-overview.md) the latest version of Azure Migrate.
+[Learn about](migrate-services-overview.md) the latest version of Azure Migrate.
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/prepare-for-agentless-migration.md
The preparation script executes the following changes based on the OS type of th
1. **Enable Azure Serial Console logging**
- The script will then make changes to enable Azure Serial Console logging. Enabling console logging helps with troubleshooting issues on the Azure VM. Learn more about Azure Serial Console for Linux [Azure Serial Console for Linux - Virtual Machines | Microsoft Docs](/azure/virtual-machines/serial-console-linux).
+ The script will then make changes to enable Azure Serial Console logging. Enabling console logging helps with troubleshooting issues on the Azure VM. Learn more about Azure Serial Console for Linux [Azure Serial Console for Linux - Virtual Machines | Microsoft Docs](/troubleshoot/azure/virtual-machines/serial-console-linux).
Modify the kernel boot line in GRUB or GRUB2 to include the following parameters, so that all console messages are sent to the first serial port. These messages can assist Azure support with debugging any issues.
After this, the modified OS disk and the data disks that contain the replicated
## Learn more -- [Prepare on-premises machines for migration to Azure.](./prepare-for-migration.md)
+- [Prepare on-premises machines for migration to Azure.](./prepare-for-migration.md)
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-data-in-replication.md
Configuring Data-in replication for zone redundant high availability servers is
### Filtering
-To skip replicating tables from your source server (hosted on-premises, in virtual machines, or a database service hosted by other cloud providers), the `replicate_wild_ignore_table` parameter is supported. Optionally, update this parameter on the replica server hosted in Azure using the [Azure portal](how-to-configure-server-parameters-portal.md) or [Azure CLI](how-to-configure-server-parameters-cli.md).
-
-To learn more about this parameter, review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table).
+Modifying the parameter `replicate_wild_ignore_table`, which was used to create replication filters for tables, is currently not supported for Azure Database for MySQL - Flexible Server.
### Requirements
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-data-in-replication.md
The following steps prepare and configure the MySQL server hosted on-premises, i
```sql
CALL mysql.az_replication_change_master('master.companya.com', 'syncuser', 'P@ssword!', 3306, 'mysql-bin.000002', 120, '');
```
-2. Set up filtering.
-
- If you want to skip replicating some tables from your master, update the `replicate_wild_ignore_table` server parameter on your replica server. You can provide more than one table pattern using a comma-separated list.
-
- Review the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#option_mysqld_replicate-wild-ignore-table) to learn more about this parameter.
-
- To update the parameter, you can use the [Azure portal](how-to-configure-server-parameters-portal.md) or [Azure CLI](how-to-configure-server-parameters-cli.md).
-
-3. Start replication.
+2. Start replication.
Call the `mysql.az_replication_start` stored procedure to start replication.
The following steps prepare and configure the MySQL server hosted on-premises, i
CALL mysql.az_replication_start; ```
-4. Check replication status.
+3. Check replication status.
Call the [`show slave status`](https://dev.mysql.com/doc/refman/5.7/en/show-slave-status.html) command on the replica server to view the replication status.
network-watcher Connection Monitor Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-schema.md
Here are some benefits of Connection Monitor:
* Support for connectivity checks that are based on HTTP, TCP, and ICMP
* Metrics and Log Analytics support for both Azure and non-Azure test setups
+There are two types of logs/data ingested into Log Analytics:
+
+* Test data (the NWConnectionMonitorTestResult query) is updated based on the monitoring frequency of a particular test group.
+* Path data (the NWConnectionMonitorPathResult query) is updated only when there is a significant change in loss percentage or round-trip time.
+
+Because the two are independent, test data may keep getting updated for some time while path data is not frequently updated.
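As a sketch (assuming the `azure-monitor-query` and `azure-identity` packages and a Log Analytics workspace ID, which are not part of this article), the test data can be queried from Python:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# The workspace ID is a placeholder; NWConnectionMonitorTestResult holds the per-test results.
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query="NWConnectionMonitorTestResult | take 10",
    timespan=timedelta(hours=24),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```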
+ ## Connection Monitor Tests schema
-Listed below are the fields in the Connection Monitor Tests schema and what they signify
+Listed below are the fields in the Connection Monitor Tests data schema and what they signify
| Field | Description |
|---|---|
Listed below are the fields in the Connection Monitor Tests schema and what they
## Connection Monitor Path schema
-Listed below are the fields in the Connection Monitor Path schema and what they signify
+Listed below are the fields in the Connection Monitor Path data schema and what they signify
| Field | Description |
|---|---|
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.net | azurewebsites.net |
| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net |
| SignalR (Microsoft.SignalRService/SignalR) / signalR | privatelink.service.signalr.net | service.signalr.net |
-| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net |
+| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/>privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net<br/> blob.core.windows.net |
| Cognitive Services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.com | cognitiveservices.azure.com |
| Azure File Sync (Microsoft.StorageSync/storageSyncServices) / afs | privatelink.afs.azure.net | afs.azure.net |
| Azure Data Factory (Microsoft.DataFactory/factories) / dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net |
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-hive-metastore-source.md
To create and run a new scan, do the following:
Usage of NOT and special characters is not acceptable.

1. **Maximum memory available**: Maximum memory (in GB) available on the customer's VM to be used by scanning processes. This is dependent on the size of the Hive Metastore database to be scanned.
-
- > [!Note]
- > **For scanning Databricks metastore**
- >
:::image type="content" source="media/register-scan-hive-metastore-source/scan.png" alt-text="scan hive source" border="true":::
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-what-is-azure-search.md
Previously updated : 05/26/2021 Last updated : 07/21/2021

# What is Azure Cognitive Search?
-Azure Cognitive Search ([formerly known as "Azure Search"](whats-new.md#new-service-name)) is a cloud search service that gives developers an architecture, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
+Azure Cognitive Search ([formerly known as "Azure Search"](whats-new.md#new-service-name)) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
-Search is foundational to any app that surfaces content to users, with common scenarios including catalog or document search, e-commerce site search, or knowledge mining for data science.
+Search is foundational to any app that surfaces text content to users, with common scenarios including catalog or document search, e-commerce app search, or knowledge mining for data science.
When you create a search service, you'll work with the following capabilities:
-+ A search engine for full text search
-+ Persistent storage of user-owned content in a search index
-+ Rich indexing, with text analysis and optional [AI enrichment](cognitive-search-concept-intro.md) for content extraction and transformation
++ A search engine for full text search with storage for user-owned content in a search index
++ Rich indexing, with text analysis and [optional AI enrichment](cognitive-search-concept-intro.md) for advanced content extraction and transformation
+ Rich query capabilities, including simple syntax, full Lucene syntax, and typeahead search
+ Programmability through REST APIs and client libraries in Azure SDKs for .NET, Python, Java, and JavaScript
+ Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
-+ State-of-art ranking algorithms through [semantic search (preview)](semantic-search-overview.md)
-Architecturally, a search service sits in between the external data stores that contain your un-indexed data, and your client app that sends query requests to a search index and handles the response.
+Architecturally, a search service sits between the external data stores that contain your un-indexed data, and your client app that sends query requests to a search index and handles the response.
![Azure Cognitive Search architecture](media/search-what-is-azure-search/azure-search-diagram.svg "Azure Cognitive Search architecture")
-Externally, search can integrate with other Azure services in the form of *indexers* that automate data ingestion/retrieval from Azure data sources, and *skillsets* that incorporate consumable AI from Cognitive Services, such as image and text analysis, or custom AI that you create in Azure Machine Learning or wrap inside Azure Functions.
+Cognitive Search can integrate with other Azure services in the form of *indexers* that automate data ingestion/retrieval from Azure data sources, and *skillsets* that incorporate consumable AI from Cognitive Services, such as image and text analysis, or custom AI that you create in Azure Machine Learning or wrap inside Azure Functions.
## Inside a search service
On the search service itself, the two primary workloads are *indexing* and *quer
Functionality is exposed through a simple [REST API](/rest/api/searchservice/) or [.NET SDK](search-howto-dotnet-sdk.md) that masks the inherent complexity of information retrieval. You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets. Because the service runs in the cloud, infrastructure and availability are managed by Microsoft.
-## Why use Cognitive Search
+## Why use Cognitive Search?
Azure Cognitive Search is well suited for the following application scenarios:
-+ Consolidate heterogeneous content into a private, user-defined search index.
++ Consolidate heterogeneous content into a private, user-defined search index. Offload indexing and query workloads onto a dedicated search service.
+ Easily implement search-related features: relevance tuning, faceted navigation, filters (including geo-spatial search), synonym mapping, and autocomplete.
Among our customers, those able to leverage the widest range of features in Azur
## Watch this video
-In this 15-minute video, program manager Luis Cabrera introduces Azure Cognitive Search.
+In this 15-minute video, review the main capabilities of Azure Cognitive Search.
>[!VIDEO https://www.youtube.com/embed/kOJU0YZodVk?version=3]
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/whats-new.md
Learn what's new in the service. Bookmark this page to keep up to date with the
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
||--||
-| [Role-based authorization (preview)](search-security-rbac.md) | Use new built-in roles to control access to indexes and indexing, eliminating or reducing the dependency on API keys. | Public preview, using Azure portal or the Management REST API version 2021-04-01-Preview, and Search REST API version 2021-04-30-Preview.|
+| [Role-based authorization (preview)](search-security-rbac.md) | Authenticate using Azure Active Directory and use new built-in roles to control access to indexes and indexing, eliminating or reducing the dependency on API keys. | Public preview, using Azure portal or the Management REST API version 2021-04-01-Preview to configure a search service for data plane authentication.|
| [Management REST API 2021-04-01-Preview](/rest/api/searchmanagement/) | Modifies [Create or Update](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) service operations to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). | Public preview | ## May 2021
security-center Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-kubernetes-introduction.md
Title: Azure Defender for Kubernetes - the benefits and features
description: Learn about the benefits and features of Azure Defender for Kubernetes. Previously updated : 04/07/2021 Last updated : 07/20/2021
We can defend clusters in:
Azure Security Center and AKS form a cloud-native Kubernetes security offering with environment hardening, workload protection, and run-time protection as outlined in [Container security in Security Center](container-security.md).
-Host-level threat detection for your Linux AKS nodes is available if you enable [Azure Defender for servers](defender-for-servers-introduction.md) and its Log Analytics agent. However, if your cluster is deployed on a virtual machine scale set, the Log Analytics agent is not currently supported.
+Host-level threat detection for your Linux AKS nodes is available if you enable [Azure Defender for servers](defender-for-servers-introduction.md) and its Log Analytics agent. However, if your cluster is deployed on an Azure Kubernetes Service virtual machine scale set (VMSS), the Log Analytics agent is not currently supported.
Also, our global team of security researchers constantly monitor the threat land
## FAQ - Azure Defender for Kubernetes
+- [Can I still get cluster protections without the Log Analytics agent?](#can-i-still-get-cluster-protections-without-the-log-analytics-agent)
+- [Does AKS allow me to install custom VM extensions on my AKS nodes?](#does-aks-allow-me-to-install-custom-vm-extensions-on-my-aks-nodes)
+- [If my cluster is already running an Azure Monitor for containers agent, do I need the Log Analytics agent too?](#if-my-cluster-is-already-running-an-azure-monitor-for-containers-agent-do-i-need-the-log-analytics-agent-too)
+- [Does Azure Defender for Kubernetes support AKS with VMSS nodes?](#does-azure-defender-for-kubernetes-support-aks-with-vmss-nodes)
+
### Can I still get cluster protections without the Log Analytics agent?
The **Azure Defender for Kubernetes** plan provides protections at the cluster level. If you also deploy the Log Analytics agent of **Azure Defender for servers**, you'll get the threat protection for your nodes that's provided with that plan. Learn more in [Introduction to Azure Defender for servers](defender-for-servers-introduction.md).
If your clusters are already running the Azure Monitor for containers agent, you
[Learn more about the Azure Monitor for containers agent](../azure-monitor/containers/container-insights-manage-agent.md).
+### Does Azure Defender for Kubernetes support AKS with VMSS nodes?
+If your cluster is deployed on an Azure Kubernetes Service virtual machine scale set (VMSS), the Log Analytics agent is not currently supported.
+++
## Next steps
In this article, you learned about Security Center's Kubernetes protection, including Azure Defender for Kubernetes.
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/end-to-end.md
The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a
| [Azure Front Door](../../frontdoor/front-door-overview.md) | A global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. |
| [Azure Firewall](../../firewall/overview.md) | A managed, cloud-based network security service that protects your Azure Virtual Network resources. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. |
| [Azure Key Vault](../../key-vault/general/overview.md) | A secure secrets store for tokens, passwords, certificates, API keys, and other secrets. Key Vault can also be used to create and control the encryption keys used to encrypt your data. |
-| [Key Vault Managed HSM (preview)](../../key-vault/managed-hsm/overview.md) | A fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. |
+| [Key Vault Managed HSM](../../key-vault/managed-hsm/overview.md) | A fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. |
| [Azure Private Link](../../private-link/private-link-overview.md) | Enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network. |
| [Azure Application Gateway](../../application-gateway/overview.md) | An advanced web traffic load balancer that enables you to manage traffic to your web applications. Application Gateway can make routing decisions based on additional attributes of an HTTP request, for example URI path or host headers. |
| [Azure Service Bus](../../service-bus-messaging/service-bus-messaging-overview.md) | A fully managed enterprise message broker with message queues and publish-subscribe topics. Service Bus is used to decouple applications and services from each other. |
sentinel Understand Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/understand-threat-intelligence.md
Tagging threat indicators is an easy way to group them together to make them eas
:::image type="content" source="media/understand-threat-intelligence/threat-intel-tagging-indicators.png" alt-text="Apply tags to threat indicators" lightbox="media/understand-threat-intelligence/threat-intel-tagging-indicators.png":::
+For more details on viewing and managing your threat indicators, see [Work with threat indicators in Azure Sentinel](work-with-threat-indicators.md#view-your-threat-indicators-in-azure-sentinel).
+
## Detect threats with threat indicator-based analytics
The most important use case for threat indicators in SIEM solutions like Azure Sentinel is to power analytics rules for threat detection. These indicator-based rules compare raw events from your data sources against your threat indicators to detect security threats in your organization. In Azure Sentinel **Analytics**, you create analytics rules that run on a schedule and generate security alerts. The rules are driven by queries, along with configurations that determine how often the rule should run, what kind of query results should generate security alerts and incidents, and which automations, if any, to trigger in response.
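As a purely conceptual illustration of the matching these indicator-based rules perform, here is a minimal, self-contained Python sketch. It is not Azure Sentinel's implementation or schema; every type, field, and value below is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical, simplified structures -- not Azure Sentinel's schema.
@dataclass
class ThreatIndicator:
    kind: str   # e.g. "ip" or "domain"
    value: str

@dataclass
class RawEvent:
    timestamp: datetime
    source_ip: str
    destination_domain: str

def match_events(events, indicators):
    """Compare raw events against threat indicators and yield alert records."""
    known_ips = {i.value for i in indicators if i.kind == "ip"}
    known_domains = {i.value for i in indicators if i.kind == "domain"}
    for event in events:
        if event.source_ip in known_ips or event.destination_domain in known_domains:
            yield {
                "title": "Traffic matched a known threat indicator",
                "event_time": event.timestamp.isoformat(),
                "source_ip": event.source_ip,
                "destination_domain": event.destination_domain,
            }

# One scheduled run over a small batch of events (illustrative data only).
indicators = [ThreatIndicator("ip", "203.0.113.10")]
events = [RawEvent(datetime.now(timezone.utc), "203.0.113.10", "example.com")]
for alert in match_events(events, indicators):
    print(alert)
```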
According to the default settings, each time the rule runs on its schedule, any
In Azure Sentinel, the alerts generated from analytics rules also generate security incidents which can be found in **Incidents** under **Threat Management** on the Azure Sentinel menu. Incidents are what your security operations teams will triage and investigate to determine the appropriate response actions. You can find detailed information in this [Tutorial: Investigate incidents with Azure Sentinel](./tutorial-investigate-cases.md).
+For more details on using threat indicators in your analytics rules, see [Work with threat indicators in Azure Sentinel](work-with-threat-indicators.md#detect-threats-with-threat-indicator-based-analytics).
+
## Workbooks provide insights about your threat intelligence
Workbooks provide powerful interactive dashboards that give you insights into all aspects of Azure Sentinel, and threat intelligence is no exception. You can use the built-in **Threat Intelligence workbook** to visualize key information about your threat intelligence, and you can easily customize the workbook according to your business needs. You can even create new dashboards combining many different data sources so you can visualize your data in unique ways. Because Azure Sentinel workbooks are based on Azure Monitor workbooks, extensive documentation and many templates are already available. A great place to start is [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md). There is also a rich community of [Azure Monitor workbooks on GitHub](https://github.com/microsoft/Application-Insights-Workbooks) where you can download additional templates and contribute your own templates.
+For more details on using and customizing the Threat Intelligence workbook, see [Work with threat indicators in Azure Sentinel](work-with-threat-indicators.md#workbooks-provide-insights-about-your-threat-intelligence).
+
## Next steps
In this document, you learned about the threat intelligence capabilities of Azure Sentinel, including the Threat Intelligence blade. For practical guidance on using Azure Sentinel's threat intelligence capabilities, see the following articles:
- Connect Azure Sentinel to [STIX/TAXII threat intelligence feeds](./connect-threat-intelligence-taxii.md).
- [Connect threat intelligence platforms](./connect-threat-intelligence-tip.md) to Azure Sentinel.
- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Azure Sentinel.
+- [Work with threat indicators](work-with-threat-indicators.md) throughout the Azure Sentinel experience.
- Detect threats with [built-in](./tutorial-detect-threats-built-in.md) or [custom](./tutorial-detect-threats-custom.md) analytics rules in Azure Sentinel.
- [Investigate incidents](./tutorial-investigate-cases.md) in Azure Sentinel.
site-recovery Quickstart Create Vault Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/quickstart-create-vault-template.md
The template used in this quickstart is from
Two Azure resources are defined in the template:
- [Microsoft.RecoveryServices vaults](/azure/templates/microsoft.recoveryservices/vaults): creates the vault.
-- [Microsoft.RecoveryServices/vaults/backupstorageconfig](/rest/api/backup/2018-12-20/backup-resource-storage-configs): configures the vault's backup redundancy settings.
+- [Microsoft.RecoveryServices/vaults/backupstorageconfig](/rest/api/backup/backup-resource-storage-configs): configures the vault's backup redundancy settings.
The template includes optional parameters for the vault's backup configuration. The storage redundancy settings are locally-redundant storage (LRS) or geo-redundant storage (GRS). For more
site-recovery Site Recovery Iis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-iis.md
For more information, see [Customize the recovery plan](site-recovery-runbook-au
For the IIS web farm to function correctly, you might need to do some operations on the Azure virtual machines post-failover or during a test failover. You can automate some post-failover operations. For example, you can update the DNS entry, change a site binding, or change a connection string by adding corresponding scripts to the recovery plan. [Add a VMM script to a recovery plan](./hyper-v-vmm-recovery-script.md) describes how to set up automated tasks by using a script.

#### DNS update
-If DNS is configured for dynamic DNS update, virtual machines usually update the DNS with the new IP address when they start. If you want to add an explicit step to update DNS with the new IP addresses of the virtual machines, add a [script to update IP in DNS](https://aka.ms/asr-dns-update) as a post-failover action on recovery plan groups.
+If DNS is configured for dynamic DNS update, virtual machines usually update the DNS with the new IP address when they start. If you want to add an explicit step to update DNS with the new IP addresses of the virtual machines, add a [script to update IP in DNS](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/demos/asr-automation-recovery/scripts/ASR-DNS-UpdateIP.ps1) as a post-failover action on recovery plan groups.
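The documented approach is to attach the linked PowerShell script to the recovery plan as a post-failover action. Purely as a conceptual sketch of what such a step does, the self-contained Python fragment below uses a stand-in DNS client; the client class, zone, record name, and IP address are all hypothetical.

```python
class FakeDnsClient:
    """Stand-in for whatever DNS management API you use (hypothetical)."""
    def update_a_record(self, zone, name, ip):
        # A real client would call your DNS provider here.
        print(f"Would update {name}.{zone} -> {ip}")

def update_dns_after_failover(dns_client, record_name, failed_over_vm_ip):
    """Point an existing A record at the VM's post-failover IP address."""
    dns_client.update_a_record(zone="contoso.com", name=record_name, ip=failed_over_vm_ip)

update_dns_after_failover(FakeDnsClient(), "web01", "10.1.0.4")
```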
#### Connection string in an application's web.config
The connection string specifies the database that the website communicates with. If the connection string carries the name of the database virtual machine, no further steps are needed post-failover. The application can automatically communicate with the database. Also, if the IP address for the database virtual machine is retained, the connection string doesn't need to be updated.
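If you do need to rewrite a connection string as a post-failover step, a small script along these lines could do it. This is a minimal sketch only: the file path and server names are hypothetical, and the documented approach is to add such a script to the recovery plan rather than run it by hand.

```python
import xml.etree.ElementTree as ET

# Hypothetical path and server names -- adjust for your application.
WEB_CONFIG = r"C:\inetpub\wwwroot\contoso\web.config"
OLD_DB_HOST = "sql-vm-primary"
NEW_DB_HOST = "sql-vm-failover.contoso.com"

tree = ET.parse(WEB_CONFIG)
root = tree.getroot()

# web.config keeps connection strings under <connectionStrings><add ... connectionString="..."/>.
for add in root.findall("./connectionStrings/add"):
    current = add.get("connectionString", "")
    # Swap the old database host for the failed-over one.
    add.set("connectionString", current.replace(OLD_DB_HOST, NEW_DB_HOST))

tree.write(WEB_CONFIG, xml_declaration=True, encoding="utf-8")
```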
For more information, see [Test failover to Azure in Site Recovery](site-recover
For more information, see [Failover in Site Recovery](site-recovery-failover.md).

## Next steps
-* Learn more about [replicating other applications](site-recovery-workload.md) by using Site Recovery.
+* Learn more about [replicating other applications](site-recovery-workload.md) by using Site Recovery.
spring-cloud Access App Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/access-app-virtual-network.md
Title: "Azure Spring Cloud access app in virtual network" description: Access app in an Azure Spring Cloud in virtual network.--++ Last updated 11/11/2020
spring-cloud Concept App Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/concept-app-status.md
Title: App status in Azure Spring Cloud description: Learn the app status categories in Azure Spring Cloud-+ Last updated 04/10/2020-+
spring-cloud Concept Manage Monitor App Spring Boot Actuator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/concept-manage-monitor-app-spring-boot-actuator.md
Title: "Manage and monitor app with Azure Spring Boot Actuator" description: Learn how to manage and monitor app with Spring Boot Actuator.--++ Last updated 05/20/2020
spring-cloud Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/concept-metrics.md
Title: Metrics for Azure Spring Cloud description: Learn how to review metrics in Azure Spring Cloud-+ Last updated 09/08/2020-+
spring-cloud Concept Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/concept-security-controls.md
Title: Security controls for Azure Spring Cloud Service description: Use security controls built in into Azure Spring Cloud Service.--++ Last updated 04/23/2020